# [Official] AMD Radeon RX 6900 XT Owner's Club



## Avacado

1st w00t.


----------



## Sonac

Can't wait to sell my 3080 to get this. Will actually profit slightly in all likelihood.

I'd also like to dispel the rumours that:
A. 6900XT will be AMD only
B. 6900XT will be limited production

Not only are these rumours clearly false, but whoever came up with them was dropped on their head early in life. If either of these were true it would be a MAJOR ****up on AMD's part.

To me and many enthusiasts this is the only appealing card in AMD's lineup because of the potential for crushing raster performance. 10% higher performance than the 6800xt, combined with the fact that the 6900xt will be a higher bin and thus have higher overclocking potential, makes this card a real contender. So no, it won't be limited production.

Furthermore, why would you limit the potential of these cards to whatever AMD comes up with, instead of unleashing them with high power limits on AIB boards with many more power stages? This is their top end card... their door into the enthusiast community. There is no way they don't get AIB boards.

These are the kind of rumours which get perpetuated by smaller youtubers for clickbait titles. You know who you are... and it's not okay to spread misinformation. I'd be okay with it if the rumours were just being discussed with an emphasis on scepticism. But these people treat them like fact. Even big channels like Linus do this.

Do the right thing and don't give them your views. Always treat rumours as false until they are proven, because they are most likely made up.


----------



## LancerVI

I'm torn between the 6800xt and 6900xt. I hate spending $1k on a GPU, but I've never really bought cards at that level before. I got my 1080ti for $550 during the mining boom from Amazon Marketplace. Used, but still under warranty, with an Amazon receipt to boot.

Definitely going AMD this go around.


----------



## Sonac

LancerVI said:


> I'm torn between the 6800xt and 6900xt. I hate spending $1k on a GPU, but I've never really bought cards at that level before. I got my 1080ti for $550 during the mining boom from Amazon Marketplace. Used, but still under warranty, with an Amazon receipt to boot.
> 
> Definitely going AMD this go around.


Easy. What screen do you use? If you have 1440p 240hz / 4k 144hz, then the 6900xt. If 1440p 144hz or lower, then the 6800xt is plenty.


----------



## LancerVI

Sonac said:


> Easy. What screen do you use? If you have 1440p 240hz / 4k 144hz, then the 6900xt. If 1440p 144hz or lower, then the 6800xt is plenty.


I'm ultrawide 3440x1440, but with the caveat that I'm a flight simmer w/ 4k VR. So I need a beast. DCS @4k on an HMD will bring almost any system to its knees.


----------



## Sonac

LancerVI said:


> I'm ultrawide 3440x1440, but with the caveat that I'm a flight simmer w/ 4k VR. So I need a beast. DCS @4k on an HMD will bring almost any system to its knees.


Yeah, flight sim is heavy. Definitely gonna want some power for VR too (assuming you play VR games as well). Definitely pick up the 6900xt then.


----------



## bigjdubb

Well I'm curious to see what the actual difference is. They are comparing them to two cards that aren't very far apart. I don't like the idea of paying 50% more if it's going to be 10% better.


----------



## gengar

LancerVI said:


> I'm torn between the 6800xt and 6900xt. I hate spending $1k on a GPU, but I've never really bought cards at that level before. I got my 1080ti for $550 during the mining boom from Amazon Marketplace. Used, but still under warranty, with an Amazon receipt to boot.
> 
> Definitely going AMD this go around.


Everything pending 3rd party benchmarks of course, but if the 6800 XT is actually available on the 18th, I'm going with that first. Then I may grab the 6900 XT when it comes out.


----------



## falcon26

Only problem is these cards, the 6900 and the 6800, will probably sell out in seconds and not be available for the masses until next year, just like Nvidia's 30 series. And if you do find one it will be on eBay for twice the MSRP.


----------



## bigjdubb

There are a lot of people waiting to see if that comes true. We don't actually know what the root cause is for the supply issues nvidia is having, but I think a lot of people are hoping it is Samsung that is the problem.


----------



## falcon26

What maker is AMD using for its chips?


----------



## Avacado

TSMC 7nm


----------



## falcon26

Well I work in the chipmaking area here in the US, and I can tell you that supply right now for lots of materials to make certain things is extremely low and extremely backed up due to this whole pandemic thing. I would bet that TSMC is going to have the same supply issues that Samsung is having. I know where I work, things that we build in a normal time would take say 2 weeks, but now, because of supply issues, we are quoting our customers at least a 6-8 week delay. And this delay is showing no signs of slowing down at least until next year....


----------



## bigjdubb

I think that is true for a lot of places. If it's all the other bits (besides the gpu itself) that are creating the problem then AMD will very likely have the same issues. If the problem for Nvidia is that Samsung isn't getting enough chips out the door then there is hope that AMD won't have the same issue with TSMC.


----------



## Emmanuel

Can't wait to see eBay flooded with used nVidia 3000s, and scalpers unable to make any profits! I just hope those gaming performance figures are close to reality. Never had an AMD/ATI GPU but I'm ready to teach nVidia a lesson if AMD's offering is adequate.


----------



## falcon26

Yeah for my business it's all the little bits and pieces so to speak of parts coming from China that are delayed.


----------



## zhrooms

bigjdubb said:


> Well I'm curious to see what the actual difference is. They are comparing them to two cards that aren't very far apart. I don't like the idea of paying 50% more if it's going to be 10% better.


The 6900 XT should be almost exactly 10% faster than the 6800 XT; the specs are near identical except for the number of shader processors (and ray accelerators): 5120 (80 CUs) vs 4608 (72 CUs). The reason it costs so much more ($350 / 54%) is that it's difficult to manufacture those last 8 CUs due to yields; the full die has 80 CUs, 40 on each side. In other words the 6900 XT is the full-fat Radeon, similar to the RTX 3090, which uses 82 of 84 SMs, so almost full.

The 6900 XT clearly has awful price/performance compared to the 6800 XT (amazing value against the RTX 3090), but that's what you have to pay to get those last 10%. So far I haven't seen any other advantage; maybe we get some unique partner models exclusively on the 6900 XT, only time will tell, but it's very clear the 6800 XT will be the popular model. Personally I'm only interested in the 6900 XT and RTX 3090, since I'm already on an overclocked 2080 Ti with unlocked power limit and water cooling, which performs the same as a 6800 XT or RTX 3080.
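The price/performance tradeoff above can be sanity-checked with quick arithmetic. A minimal Python sketch, using the $649/$999 MSRPs and the ~10% performance gap quoted in this thread (not measured benchmarks):

```python
# Rough value comparison of 6800 XT vs 6900 XT using the thread's figures.
msrp_6800xt, msrp_6900xt = 649.0, 999.0   # USD MSRPs
perf_6800xt, perf_6900xt = 1.00, 1.10     # relative raster performance (~10% gap)

price_premium = msrp_6900xt / msrp_6800xt - 1   # how much more money
perf_gain = perf_6900xt / perf_6800xt - 1       # how much more performance

# Dollars per unit of relative performance, i.e. what you pay per "frame".
value_penalty = (msrp_6900xt / perf_6900xt) / (msrp_6800xt / perf_6800xt) - 1

print(f"price premium: {price_premium:.0%}")    # more money
print(f"performance gain: {perf_gain:.0%}")     # more performance
print(f"extra cost per unit of performance: {value_penalty:.0%}")
```

Even taking the 10% at face value, the 6900 XT works out to roughly 40% more money per unit of performance, which is the tradeoff other posters weigh up.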


----------



## DNMock

Thought only the reference design was going to be available for the 6900XT


----------



## falcon26

That is what I heard for the 6900: that it will only be made by AMD itself, no 3rd party vendors will have it, at least in the beginning I would assume.


----------



## bigjdubb

The RVII was that way so it wouldn't be a new thing for them.


----------



## falcon26

But again I am 99% sure that these will sell out in seconds and end up on Ebay. But I guess time will tell...


----------



## DNMock

We will see. Personally I'm not worried about limited access to these GPUs for any reason other than AMD sending the majority of their Big Navi chips to the console makers first so they can meet their demand.


----------



## hotrod717

Finally. I started with red and stayed there until I could no longer compete in benchmarks following the 290x. Cannot wait to get one of these.


----------



## Zam15

falcon26 said:


> That is what I heard for the 6900, that it will only be made by AMD itself no 3rd party


Hope not.. AIBs could have some fun with this card. Also I need 2+ HDMI 2.1 ports...


----------



## megahmad

I read somewhere that the 6900 XT will be made in very limited quantities. Hopefully these AMD cards (especially the 6900 XT) won't be a paper launch like the 3000 nvidia cards.


----------



## Unkzilla

The cards look great. The 6900XT at $999US is of interest to me as a 2080ti owner. Not keen on the 10gb 3080, and the current pricing on the 3090 is rough.

No doubt there will be significant supply issues though. Demand is at an all-time high, as shown by Google search hits.. looks like Nvidia has met about 10% of demand so far for the 3080/3090, and all of those pending customers will try to jump on these cards too.

Remember how hard it was to buy Radeon VII/Vega cards etc etc? Even 3950x/3900x was tough to buy. This will be no different.

I'm happy to wait until next year regardless


----------



## ZealotKi11er

Let it begin.


----------



## darkphantom

So many of my fellow gamers in the same boat as me!
8700k / 1080ti / 32gb / m.2 1TB - Flight simmer and VR guy wanting the 6800XT but leaning towards the 6900 if they have availability but not sure if I should be spending $1k on a GPU LOL


----------



## PriestOfSin

darkphantom said:


> So many of my fellow gamers in the same boat as me!
> 8700k / 1080ti / 32gb / m.2 1TB - Flight simmer and VR guy wanting the 6800XT but leaning towards the 6900 if they have availability but not sure if I should be spending $1k on a GPU LOL


Very similar boat here. Replacing a 1080Ti, so a 3080 was never in the cards (lmao 10GB vram), but debating between 6800XT and 6900XT. On the one hand, I'm "just" at 1440/144. On the other, I love VR stuff. On the third hand, spending $1k on a GPU is ludicrous... decisions, decisions.


----------



## PiotrMKG

PriestOfSin said:


> Very similar boat here. Replacing a 1080Ti, so a 3080 was never in the cards (lmao 10GB vram), but debating between 6800XT and 6900XT. On the one hand, I'm "just" at 1440/144. On the other, I love VR stuff. On the third hand, spending $1k on a GPU is ludicrous... decisions, decisions.


Paying 51% more for 10% more performance?! Just no.


----------



## chris89

I was under the impression the 6800/6800 XT were 96 ROPs & only the 6900 XT was 128 ROPs?


----------



## zhrooms

We just learned the dimensions of all 3 cards thanks to SAPPHIRE; they already list the products, and they're selling the reference design:

RX 6800 XT and RX 6900 XT
Dimension: 266.7(L)X 119.75(W)X 49.75 (H)mm
267 x 120 x 50mm (2.50 Slot)

RX 6800
Dimension: 266.7(L)X 119.75(W)X 39.75 (H)mm
267 x 120 x 40mm (2.00 Slot)

They are seriously tiny, and can beat both the 3080 and 3090. Very impressive.
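The slot counts in the listing line up with the heights if you assume the standard 20.32 mm expansion-slot pitch (the pitch value is my assumption, not from SAPPHIRE's page); a quick check:

```python
# Convert card thickness in mm to PCI expansion slots (20.32 mm pitch),
# rounded to the nearest quarter slot as retailers usually quote it.
SLOT_PITCH_MM = 20.32

def slot_width(height_mm: float) -> float:
    return round(height_mm / SLOT_PITCH_MM * 4) / 4

print(slot_width(49.75))  # 6800 XT / 6900 XT -> 2.5
print(slot_width(39.75))  # 6800 -> 2.0
```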


----------



## megahmad

wow only 26cm  that's really amazing!


----------



## TK421

megahmad said:


> wow only 26cm  that's really amazing!


out of context quotes


----------



## TK421

Oh, did AMD ever publish a performance slide without the Smart Access Memory feature?


----------



## PriestOfSin

PiotrMKG said:


> Paying 51% more for 10% more performance?! Just no.


Yeah. After much deliberation, the 6800XT will do me just fine. I do wonder why AMD didn't equip the 6900XT with more VRAM. You'd think they'd throw 24GB on there or something just to twist the knife on the 3090.


----------



## AKUMUOU

LancerVI said:


> I'm torn between the 6800xt and 6900xt. I hate spending a $1k on a GPU, but I've never really bought cards at that level before. I got my 1080ti for $550 during the mining boom from Amazon Marketplace. Used, but still under warranty, with an Amazon receipt to boot.
> 
> Definitely going AMD this go around.


6800XT 100% 3 reasons

1. Performance gains in the benchmarks of the 6900XT that AMD provided were shown with additional settings enabled that increased performance 5-10%. Meaning that with those settings enabled on the 6800XT you'd see closer performance.
2. 6900XT is reference only, no custom cards
3. 6800XT custom models will have a closer gap to the 6900XT than the benchmarks at AMDs presentation.


----------



## LancerVI

AKUMUOU said:


> 6800XT 100% 3 reasons
> 
> 1. Performance gains in the benchmarks of the 6900XT that AMD provided were shown with additional settings enabled that increased performance 5-10%. Meaning that with those settings enabled on the 6800XT you'd see closer performance.
> 2. 6900XT is reference only, no custom cards
> 3. 6800XT custom models will have a closer gap to the 6900XT than the benchmarks at AMDs presentation.



Yeah. The more I thought about it, I essentially came to these conclusions as well. There is the "idiot" part of me that has always wanted to own a "Titan" or a proper flagship; however, as the information from AMD's reveal has sunk in, I came to this result as well.


----------



## zhrooms

AKUMUOU said:


> 6900XT is reference only, no custom cards


Source? Because I haven't seen it yet; not a word from anyone about it. What I have seen is guesses that it'll be reference only because it's the flagship, which doesn't make any sense, because the RX 6900 XT is not comparable to the VII in any way; completely different cards.


----------



## TeslaHUN

So, can we buy custom cards on Nov 18, or just the reference model?
I'm most likely buying a 6800XT, but the reference cooler seems too weak for me (I like silent cards).


----------



## chris89

So the 6800 XT is truly going to offer 128 ROPs like the 6900 XT?


----------



## Ark-07

Can someone give me some insight as to why there are GPU shortages, and what are my chances of finding a watercooled 6900xt on release? Should I try to find a way to preorder, or just try to buy it like everyone else and hope for the best?


----------



## shiokarai

Soooo..... will it be faster than or as fast as a 480w/500w 3090? Your predictions?


----------



## Forsaken1

Arrived


----------



## mevorach

Sonac said:


> Can't wait to sell my 3080 to get this. Will actually profit slightly in all likelihood.
> 
> I'd also like to dispel the rumours that:
> A. 6900XT will be AMD only
> B. 6900XT will be limited production
> 
> Not only are these rumours clearly false, but whoever came up with them was dropped on their head early in life. If either of these were true it would be a MAJOR ****up on AMD's part.
> 
> To me and many enthusiasts this is the only appealing card in AMD's lineup because of the potential for crushing raster performance. 10% higher performance than the 6800xt, combined with the fact that the 6900xt will be a higher bin and thus have higher overclocking potential, makes this card a real contender. So no, it won't be limited production.
> 
> Furthermore, why would you limit the potential of these cards to whatever AMD comes up with, instead of unleashing them with high power limits on AIB boards with many more power stages? This is their top end card... their door into the enthusiast community. There is no way they don't get AIB boards.
> 
> These are the kind of rumours which get perpetuated by smaller youtubers for clickbait titles. You know who you are... and it's not okay to spread misinformation. I'd be okay with it if the rumours were just being discussed with an emphasis on scepticism. But these people treat them like fact. Even big channels like Linus do this.
> 
> Do the right thing and don't give them your views. Always treat rumours as false until they are proven, because they are most likely made up.


I wish I could trade my 6800xt for your 3080. The 6800xt is garbage.


----------



## ilmazzo

Forsaken1 said:


> Arrived


Nice!!!

let us see what’s inside


----------



## helis4life

mevorach said:


> I wish I could trade my 6800xt for your 3080. The 6800xt is garbage.


Why is it garbage?


----------



## ilmazzo

he needs attention I think


----------



## Forsaken1

ilmazzo said:


> Nice!!!
> 
> let us see what’s inside





https://hardforum.com/threads/i-bought-a-6900-xt-today.2004712/page-3#post-1044848190


----------



## coelacanth

I received the XFX Speedster MERC 319 RX 6900 XT Ultra (what a name) today. I wanted a 6800 XT but in the feeding frenzy for GPUs I'll take what I can get.


----------



## helis4life

coelacanth said:


> I received the XFX Speedster MERC 319 RX 6900 XT Ultra (what a name) today. I wanted a 6800 XT but in the feeding frenzy for GPUs I'll take what I can get.
> 


Wow didn't realise these were coming out so quickly. Got myself a MBA 6900xt


----------



## 2080tiowner

Hi, do you know how to change RADEON logo color on the founder 6900XT ?


----------



## helis4life

2080tiowner said:


> Hi, do you know how to change RADEON logo color on the founder 6900XT ?


Looking into this myself. TechPowerUp says it's the Radeon RGB control software. Haven't found it yet though, and it's not part of Wattman.


----------



## majestynl

Got mine too...

Is there a link for the newest MorePowerTool ?


----------



## Forsaken1

2080tiowner said:


> Hi, do you know how to change RADEON logo color on the founder 6900XT ?


New drivers and app on AMD.com


----------



## helis4life

majestynl said:


> Got mine too...
> 
> Is there a link for the newest MorePowerTool ?



RED BIOS EDITOR and MorePowerTool for Polaris, Navi and Big Navi - MPT 1.3.18 | igor'sLAB
www.igorslab.de

Changing the TDC of the card works, but it seems like the 15% power limit is BIOS locked and doesn't work, at least for me.


----------



## Roacoe717

Got mine. Overclocks decent but memory oc is limited.


----------



## alexp247365

Hello all,

New to AMD!

Just swapped from a 2080 Ti to a 6900 XT, but my monitor OSD is still showing G-Sync on. Odd. If I enable or disable Adaptive Sync compatibility, it doesn't change the OSD showing G-Sync.

Is this normal? Some games work fine, others struggle to get over 30 fps, even when the clock speed sits at 1200MHz - it's like it doesn't know it's a game or something.

Anyone else trying to use an AMD card with a G-Sync/Adaptive Sync compatible monitor?

Alex


----------



## helis4life

Roacoe717 said:


> Got mine. Overclocks decent but memory oc is limited.


Limited how?


----------



## Roacoe717



helis4life said:


> Limited how?


It's limited to 2150MHz; I used Radeon software and Afterburner, same result.


----------



## chispy

Anything new regarding flashing these new GPUs for a higher power limit and removal of the artificial clock caps on GPU and memory? These cards look very interesting to tweak.


----------



## helis4life

More Power Tool can increase wattage and current limits. The XFX Merc has a watt limit of 289W, so that much power should be alright on reference models, I guess, with good cooling.


----------



## chispy

helis4life said:


> More power tool can increase wattage and current limits. XFX Merc has a watt limit of 289. So that power should be alright I guess with good cooling on reference models


Thank you for the information , appreciate it.


----------



## majestynl

helis4life said:


> RED BIOS EDITOR and MorePowerTool for Polaris, Navi and Big Navi - MPT 1.3.18 | igor'sLAB
> www.igorslab.de
> 
> Changing the TDC of the card works, but it seems like the 15% power limit is BIOS locked and doesn't work, at least for me.


What settings are successful for you (wattage / tdc etc) ?

Just curious for comparison when I start tweaking this baby!


----------



## helis4life

majestynl said:


> What settings are successful for you (wattage / tdc etc) ?
> 
> Just curious for comparison when I start tweaking this baby!


Went 300 for TGP, left TDC at 320. Bumped power up to 345W with the extra 15% power limit, but it was getting a bit warm with the reference cooler so I didn't go any higher. That was just for a Time Spy run. Just running it at the stock 255W +15% for now.

Got a block sitting here, just waiting on my AMD CPU to finish my main build then do some proper tuning


----------



## majestynl

I see thanks !

I haven't ordered a block yet. Looking for one. EK blocks are in stock, but the Alphacool isn't yet. There are no comparisons yet. On my 2080ti the EK was really the worst block; I changed to a Heatkiller. But Watercool hasn't announced a block for the 6000 series yet.


----------



## helis4life

majestynl said:


> I see thanks !
> 
> I haven't ordered a block yet. Looking for one. EK blocks are in stock, but the Alphacool isn't yet. There are no comparisons yet. On my 2080ti the EK was really the worst block; I changed to a Heatkiller. But Watercool hasn't announced a block for the 6000 series yet.


Does the alphacool 6800 not fit the 6900?


----------



## majestynl

helis4life said:


> Does the alphacool 6800 not fit the 6900?


It does, but it's not in stock. The ETA was a few days away;
sadly it's been delayed a few more weeks.


----------



## 2080tiowner

Hey!

I scored 42 952 in Fire Strike

AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## knightriot

Got mine; watercooling it to replace my 3080 Aorus WB.


----------



## LtMatt

What is everyone's out of the box stock core clock on the AMD MBA 6900 XT? Mine is 2499Mhz.

My 6800 XT MBA was 2364Mhz for reference.


----------



## chispy

Anyone know the trick to surpass the artificial memory clock wall of 1750Mhz? I have already seen a few guys doing 2000+ on memory on the 6900xt and 6800.

example - Fitvelkiz's 3DMark - Fire Strike Ultra score: 15465 marks with a Radeon RX 6900 XT


https://www.3dmark.com/3dm/55419177


----------



## Stag

LtMatt said:


> What is everyone's out of the box stock core clock on the AMD MBA 6900 XT? Mine is 2499Mhz.
> 
> My 6800 XT MBA was 2364Mhz for reference.



500/2504



chispy said:


> Anyone know the trick to surpass the artificial memory clock wall of 1750Mhz? I have already seen a few guys doing 2000+ on memory on the 6900xt and 6800.
> 
> example - Fitvelkiz's 3DMark - Fire Strike Ultra score: 15465 marks with a Radeon RX 6900 XT
> 
> 
> https://www.3dmark.com/3dm/55419177


Change it in Wattman. Enable VRAM tuning.


----------



## LtMatt

Stag said:


> 500/2504


Similar to mine then. Asked someone else on a different forum and they said 2440. 

Sample size of 3, not great. Trying to figure out if we are above average or not for out of the box clocks.


----------



## chispy

Since i was a good boy in year 2020 Santa Claus delivered my gift one day earlier 😁


----------



## ZealotKi11er

LtMatt said:


> Similar to mine then. Asked someone else on a different forum and they said 2440.
> 
> Sample size of 3, not great. Trying to figure out if we are above average or not for out of the box clocks.


2534MHz for mine. I think the best ones are close to 1577MHz.


----------



## Stag

LtMatt said:


> Similar to mine then. Asked someone else on a different forum and they said 2440.
> 
> Sample size of 3, not great. Trying to figure out if we are above average or not for out of the box clocks.


Interesting. May have to do with ambient, possibly. Waiting on a case; motherboard is still on its box.

Edit: MPT enabled may change the core clock?


----------



## ilmazzo

chispy said:


> Since i was a good boy in year 2020 Santa Claus delivered my gift one day earlier 😁
> 
> View attachment 2470939


first custom queen, interesting...


----------



## St4rky

LtMatt said:


> What is everyone's out of the box stock core clock on the AMD MBA 6900 XT? Mine is 2499Mhz.
> My 6800 XT MBA was 2364Mhz for reference.


2489 here dude


----------



## Starkinsaur

chispy said:


> Anyone know the trick to surpass the artificial memory clock wall of 1750Mhz? I have already seen a few guys doing 2000+ on memory on the 6900xt and 6800.


Do you mean 2150MHz (below)? MPT seems to be able to do more but then the driver software reduces GPU core clock to 500MHz. Would also be interested in a workaround for that.


----------



## chispy

^ exactly


----------



## ZealotKi11er

chispy said:


> ^ exactly


Even if you could, it would not go any higher because of errors. It's 16Gbps memory. Almost all the benchmarks and people overclocking memory with these cards start to lose performance after 2100. Very few cards can maybe do 2130-2150. Even if someone had a 2200MHz card, you'd maybe be losing like 0.1%.


----------



## chispy

LtMatt said:


> What is everyone's out of the box stock core clock on the AMD MBA 6900 XT? Mine is 2499Mhz.
> 
> My 6800 XT MBA was 2364Mhz for reference.


2504 out of the box


----------



## chispy

ZealotKi11er said:


> Even if you could, it would not go any higher because of errors. It's 16Gbps memory. Almost all the benchmarks and people overclocking memory with these cards start to lose performance after 2100. Very few cards can maybe do 2130-2150. Even if someone had a 2200MHz card, you'd maybe be losing like 0.1%.


I have been testing the max memory clocks for about an hour, and I can see an increase in Fire Strike benchmark score going from 2100MHz to 2125MHz to 2150MHz; the score always increases a lot at 2150MHz (I always get a lower score with 2125 and 2100). The card is capable of 2150 stable, I guess because it is a custom PCB with 3x 8-pin power.

Now to test the core clocks I go

ninja edit: core maxed out at 2705MHz using only AMD Overdrive and stock cooling with fans at 100%; it bounces between 2700 and 2610MHz during FS. Will try now with MorePowerTool.


----------



## coelacanth

ilmazzo said:


> first custom queen, interesting...



[Official] AMD Radeon RX 6900 XT Owner's Club
www.overclock.net

Though in fairness it was still in the box.


----------



## chispy

Naked pcb of the Asrock RX 6900xt Phantom Gaming D


----------



## Stag

knightriot said:


> got mine , watercooling it and replace for 3080 aorus wb





chispy said:


> Naked pcb of the Asrock RX 6900xt Phantom Gaming D


Interesting comparison


----------



## chispy

It seems this Asrock 6900xt does not like cold. Hooked it up to my strong single stage at -51c and it cold boot bugs. Turned off the SS and let it warm up to +10c and it booted. Tested for ~10-12 hours trying to get it to run cold, but it seems these cards have problems with low temps; even on the SS it cold boot bugs at -51c, so LN2 is out of the equation for me. Sad, but that's my finding on this 6900xt card. I did it for fun and for competitive benchmarking for hwbot points. Will run it on plain water with a universal water block later.


----------



## helis4life

LtMatt said:


> Similar to mine then. Asked someone else on a different forum and they said 2440.
> 
> Sample size of 3, not great. Trying to figure out if we are above average or not for out of the box clocks.


You mean max setting? Mine was 2499. MBA 6900xt


----------



## helis4life

ZealotKi11er said:


> 2534MHz for mine. I think the best ones are close to 1577MHz.


2534 for MBA?


----------



## ZealotKi11er

helis4life said:


> 2534 for MBA?


Yes.


----------



## helis4life

ZealotKi11er said:


> Yes.


Very nice. Tried any OCing? Any 3dmark runs?


----------



## ZealotKi11er

helis4life said:


> Very nice. Tried any OCing? Any 3dmark runs?


Waiting for a WB. I have some 3DMark FS runs at 2700MHz.


----------



## Starkinsaur

Edit: To be clear, by default there are two memory timing options available in the Radeon Software - "Default" and "Fast Timing".

So it looks like we can unlock one more memory timing level, "Fast Timing Level 2", by setting Memory Timing Control to 2 in MorePowerTool.
Mine crashed Unigine Superposition 4K at 2000MHz with this setting.
What else is hiding in there...?


----------



## HeLeX63

I just got my hands on the Red Devil 6900XT Limited Edition. Current max OC is 2150MHz memory, and 2600/2740 for the GPU.

Any idea how to increase the power limit beyond the 115% cap? Mine seems to average around 320W max power draw and has 3 8-pin PCIE connectors. Waiting for EK to release a WB for the AIB models.

I have seen people use MorePowerTool, but I'm not sure how to use it. Can you boost frequencies higher using that tool?


----------



## helis4life

HeLeX63 said:


> I just got my hands on the Red Devil 6900XT Limited Edition. Current Max OC is 2150mhz memory, and 2600/2740 for GPU.
> 
> Any idea how to increase power limit instead of the 115% cap ? Mine seems to average around 320W max power draw and has 3 8-Pin PCIE connectors. Waiting for EK to release a WB for the AIB models.
> 
> I have seen people use MorePowerTool, but am not sure how to use it. Can you boost frequencies higher using that tool ?


If your card is power limited at 320W, then increasing it should let the boost go higher. Use MorePowerTool: load your BIOS ROM, adjust the wattage values and click Write SPPT, then restart and you should be good to go.


----------



## majestynl

HeLeX63 said:


> I just got my hands on the Red Devil 6900XT Limited Edition. Current Max OC is 2150mhz memory, and 2600/2740 for GPU.
> 
> Any idea how to increase power limit instead of the 115% cap ? Mine seems to average around 320W max power draw and has 3 8-Pin PCIE connectors. Waiting for EK to release a WB for the AIB models.
> 
> I have seen people use MorePowerTool, but am not sure how to use it. Can you boost frequencies higher using that tool ?


Welcome on board!

As @helis4life explained, you can use MPT. You will probably boost to higher freqs..

Alphacool has the block already. Shipping in January, if I'm right:

Alphacool Eisblock Aurora Acryl GPX-A Radeon RX 6800(XT)/6900XT Red Devil mit Backplate
www.alphacool.com

Dunno about EK; the performance was not there in previous blocks. And the strange part is, none of the many YouTubers I saw getting the Vector block together with the 6800xt has posted a review. Strange!!??!!


----------



## chispy

Did some testing on my water chiller with the non-reference Asrock Phantom 6900xt. Max clocks were 2850/2150MHz.


----------



## helis4life

majestynl said:


> Welcome on board!
> 
> As @helis4life explained, you can use MPT. You will probably boost to higher freqs..
> 
> Alphacool has the block already. Shipping in January, if I'm right:
> 
> Alphacool Eisblock Aurora Acryl GPX-A Radeon RX 6800(XT)/6900XT Red Devil mit Backplate
> www.alphacool.com
> 
> Dunno about EK; the performance was not there in previous blocks. And the strange part is, none of the many YouTubers I saw getting the Vector block together with the 6800xt has posted a review. Strange!!??!!


I just put an EK Quantum Vector on my reference. Playing games and benches, the card gets up to the low 50s with hotspot about 20 degrees more. This is with one PE360 in a ghetto setup. Waiting for my 5950X to put the proper build back together with 3x 360 rads, and hopefully see temps back below 50.

From what I've seen this seems to be about the right temp with the EK block. Igor's Lab reviewed the Alphacool; it ran about 25 degrees above water temp, so I'd say close to the same. No idea what my water temp is right now, but probably 30ish at least. Does a 20 degree spread between edge and hotspot sound about right? Igor's Lab reported 15 for the Alphacool block.


----------



## ZealotKi11er

chispy said:


> Did some testing on my water chiller with the non-reference Asrock Phantom 6900xt. max clocks were 2850/2150Mhz.
> 
> 
> View attachment 2471190


Post that score. It will beat the top 3090 score.


----------



## helis4life

chispy said:


> Did some testing on my water chiller with the non-reference Asrock Phantom 6900xt. max clocks were 2850/2150Mhz.
> 
> 
> View attachment 2471190


Vcore at 1175?


----------



## HeLeX63

majestynl said:


> Welcome on board!
> 
> As @helis4life explained, you can use MPT. You will boost probably higher freqs..


Thank you very much !!! Excited to test the new card.

I just increased the total power limit beyond the 15% allowed in the AMD software, up to 340W. Clock speeds now stay locked between 2600 and 2690 MHz on the core.


----------



## HeLeX63

helis4life said:


> You mean max setting? Mine was 2499. MBA 6900xt


My stock clock is 2504 MHz in the AMD software tool.


----------



## majestynl

helis4life said:


> I just put an Ek quantum vector Ron my reference. Playing games and benches card gets up to low 50s with hot spot about 20 degrees more. This is with one PE360 in a ghetto setup. Waiting for my 5950x to put proper build back together with 3 360 rads and hopefully see temps back below 50. From what I've seen this seems to be about right temp wise with the EK block. Igors lab reviewed the alphacool, it ran about 25 degrees above water temp, so I'd say close to the same. No idea what my water temp is right now but probably 30ish at least. Does 20 degree spread between edge and hot spot sound about right? Igor's lab reported 15 for the alpha cool block


Low 50s with a block sounds a bit too much to me.. I'm more used to having them between 40-50. But again I have no block yet to have some numbers for a 6900xt!

20c difference for the junction sounds fair. With previous blocks I had more or less the same numbers!

Btw: saw a post from Watercool talking about a block in January. So I will wait for the Heatkiller. 



HeLeX63 said:


> Thank you very much !!! Excited to test the new card.
> 
> I just increased the total power limit after the 15% allowable in the AMD software to boost to 340W. Clock speeds now stay locked between 2600 and 2700 MHz on the core.


Great to see you have fixed it! I have roughly the same freqs. Memory at 2150 with no issues in benches and games. 

Tomorrow I will try to lower the vcore a bit more. Currently 1025mv!


----------



## majestynl

HeLeX63 said:


> Thank you very much !!! Excited to test the new card.
> 
> I just increased the total power limit after the 15% allowable in the AMD software to boost to 340W. Clock speeds now stay locked between 2600 and 2700 MHz on the core.


Double post!! Can't delete


----------



## HeLeX63

majestynl said:


> Low 50s with a block sounds a bit to much for me.. I'm more used to have them between 40-50. But again I have no block yet to have some numbers for a 6900x!
> 
> 20c difference for the Junction sounds fair. With previous blocks i had more or less the same numbers!
> 
> Btw: saw a post from Watercool talking about a block in January. So i will wait for the Heatkiller.
> 
> 
> 
> Great to see you have fixed it! I have the roughly the same freqs. Memory at 2150 with no issues in Benches and Games.
> 
> Tomorrow I will try to lower the vcore a bit more. Currently 1025mv!


Nice. My card tops out at 1.175V. Card now peaks at 345W. Is a 2600 to 2690MHz average core frequency poor, decent or good? Trying to gauge how my card stacks up against other 6900 XT owners.

Will be purchasing an EKWB as soon as available. Hopefully soon.


----------



## chispy

helis4life said:


> Vcore at 1175?


1.35v v.core + 500w pl raised in mpt


----------



## ZealotKi11er

chispy said:


> 1.35v v.core + 500w pl raised in mpt


How did you get the voltage to 1.35v?


----------



## chispy

ZealotKi11er said:


> How did you get the voltage to 1.35v?


I use morepowertool , that's it. I will show you later because the rig is on the Bench Station hooked to my single stage on cpu and water chiller on gpu. Chiller is running right now but i need to wait until it reaches -21c before i start benching.


----------



## helis4life

chispy said:


> I use morepowertool , that's it. I will show you later because the rig is on the Bench Station hooked to my single stage on cpu and water chiller on gpu. Chiller is running right now but i need to wait until it reaches -21c before i start benching.


1.35 is huge power increase. how close were you to the 500w powerlimit? Is that the power you needed to be stable with max clock 2850?


----------



## chispy

helis4life said:


> 1.35 is huge power increase. how close were you to the 500w powerlimit? Is that the power you needed to be stable with max clock 2850?


Yes sir, 1.35v and 500w were needed to be stable at 2850+. No idea how much power it was using. These cards are the full-fat chip, hence why they might need more volts to be stable at very high clocks. I'm still new to MPT, it's a learning process but I'm getting the hang of it. Will post some pictures of my MPT settings and Wattman in a few minutes as I start benching and testing some stuff on this rig.


----------



## ZealotKi11er

I could not edit voltage for my MBA card. Maybe ur custom card is bypassing AMDs limits.


----------



## chispy

ZealotKi11er said:


> I could not edit voltage for my MBA card. Maybe ur custom card is bypassing AMDs limits.


Here are my settings in Wattman and MorePowerTool, it is working 100%. I probed the v.core with my multimeter and MPT is very accurate on the v.core voltage (set 1.35v in MPT, multimeter reading is 1.348v). Might be that the non-reference pcb bypasses the AMD limit. I hope this helps.

Kind Regards: Chispy


----------



## helis4life

ZealotKi11er said:


> I could not edit voltage for my MBA card. Maybe ur custom card is bypassing AMDs limits.


Yeah same. Tried setting vcore 1.2 in mpt but applications crash immediately. Must be something bios related that the AIB cards don't have


----------



## ZealotKi11er

Then why can't you go over 2150 on memory? Does it crash, or does the GPU drop to 500MHz?


----------



## chispy

ZealotKi11er said:


> Then why cant you go over 2150 in memory? It crashes or GPU goes to 500MHz?


I tried 2200Mhz on memory but I get lower scores and sometimes artifacts; it seems 2150 is the max on this card.


----------



## ZealotKi11er

Can you upload your bios. Want to try to flash it to my 6900 XT.


----------



## kundica

majestynl said:


> Low 50s with a block sounds a bit to much for me.. I'm more used to have them between 40-50. But again I have no block yet to have some numbers for a 6900x!
> 
> 20c difference for the Junction sounds fair. With previous blocks i had more or less the same numbers!
> 
> Btw: saw a post from Watercool talking about a block in January. So i will wait for the Heatkiller.


I have the EK Quantum Vector on my 6800XT in a 2x360 setup using the O11D. With ambient at 22 I usually see a max of about 45 degrees with the hotspot 16-17 over that but usually the core is 42 or 43 under full load. For comparison my RVII hit 42 on the core with the same rad setup on an EK block.


Sent from my GM1917 using Tapatalk


----------



## kazukun

With the 2875MHz setting, I was able to confirm that the actual clock finally exceeded 2800MHz.
The 6900XT reference has a water cooling block installed for cooling.


----------



## HeLeX63

majestynl said:


> Welcome on board!
> 
> As @helis4life explained, you can use MPT. You will boost probably higher freqs..
> 
> Alphacool has the block allready. Shipping in january if I'm right.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Alphacool Eisblock Aurora Acryl GPX-A Radeon RX 6800(XT)/6900XT Red Devil mit Backplate
> 
> 
> The Alphacool Eisblock Aurora Acryl GPX graphics card water block with backplate combines style with performance. Extreme cooling performance and extensive digital RGB lighting set it apart. Experience and technical know-how from...
> 
> 
> 
> 
> www.alphacool.com
> 
> 
> 
> 
> 
> Dunno about EK. Performance was not there in previous blocks. And strange part is. Nobody got a review from the many YouTubers I saw getting the vector block together with the 6800xt. Strange!!??!!


Is Alphacool better in block design and thermal performance than EK do you think ?


----------



## Besty

chispy said:


> Did some testing on my water chiller with the non-reference Asrock Phantom 6900xt. max clocks were 2850/2150Mhz.
> 
> 
> View attachment 2471190


You may not realise it but you are probably the only serious overclocker in the world with a custom 6900XT under a chiller.

The world would be grateful for some Timespy/Port Royal/Shadow Of Tomb Raider bench runs.


----------



## chispy

ZealotKi11er said:


> Can you upload your bios. Want to try to flash it to my 6900 XT.


Here you go buddy - Navi 21

Careful: be aware this is from a custom-pcb non-reference RX 6900 XT with 3x 8-pin power. _I'm not responsible if something goes wrong, use it at your own risk._ Enjoy it!


----------



## Starkinsaur

I wasn’t aware of any software which supported flashing big Navi, what are you guys intending to use?
Please report back on whether the AIB bios works on your ref card and which utility you used to flash!


----------



## chispy

Besty said:


> You may not realise it but you are probably the only serious overclocker in the world with a custom 6900XT under a chiller.
> 
> The world would be grateful for some Timespy/Port Royal/Shadow Of Tomb Raider bench runs.


I was testing TS + PR but I trashed the OS testing different tweaks + MPT + different drivers = too many BSODs (the joy of AMD drivers 😣. lol). Will retry tomorrow on a fresh OS and will post the results here.


----------



## majestynl

chispy said:


> Here are my settings on wattman and morepowertool , it is working 100% i probe the v.core with my multimeter and mpt is very accurate on the v.core voltage ( set 1.35v for v.core mpt 1.35v. - multimeter reading is 1.348v. Might be the non-reference pcb bypass AMD limit. I hope this helps.
> 
> Kind Regards: Chispy
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2471219
> View attachment 2471220
> View attachment 2471221
> View attachment 2471222


Nice! Will try to edit voltage and check if it accepts it. Can't do too much it's still on air. But will definitely play more when block arrives.

Will let you guys know the outcome.

Ps: what card are you having?


----------



## ilmazzo

Starkinsaur said:


> I wasn’t aware of any software which supported flashing big Navi, what are you guys intending to use?
> Please report back on whether the AIB bios works on your ref card and which utility you used to flash!


go on igor’s lab and you will find everything

i flashed my 5700xt on the second day I had it


----------



## helis4life

kundica said:


> I have the EK Quantum Vector on my 6800XT in a 2x360 setup using the O11D. With ambient at 22 I usually see a max of about 45 degrees with the hotspot 16-17 over that but usually the core is 42 or 43 under full load. For comparison my RVII hit 42 on the core with the same rad setup on an EK block.
> 
> 
> Sent from my GM1917 using Tapatalk


With stock settings but the power slider increased (2400ish MHz), temps sit in the 40s. If I push clocks up to 2700, temps climb into the 50s; power sits at around 300w.


----------



## helis4life

kazukun said:


> With the 2875MHz setting, I was able to confirm that the actual clock finally exceeded 2800MHz.
> The 6900XT reference has a water cooling block installed for cooling.
> 
> View attachment 2471231


Were you able to complete any benches at that setting?


----------



## helis4life

majestynl said:


> Nice! Will try to edit voltage and check if it accepts it. Can't do too much it's still on air. But will definitely play more when block arrives.
> 
> Will let you guys know the outcome.
> 
> Ps: what card are you having?


Do you have a reference model?


----------



## helis4life

ilmazzo said:


> go on igor’s lab and you will find everything
> 
> i flashed my 5700xt on the second day I had it


Can AIB bios be flashed on to non AIB cards?


----------



## majestynl

kundica said:


> I have the EK Quantum Vector on my 6800XT in a 2x360 setup using the O11D. With ambient at 22 I usually see a max of about 45 degrees with the hotspot 16-17 over that but usually the core is 42 or 43 under full load. For comparison my RVII hit 42 on the core with the same rad setup on an EK block.
> 
> Sent from my GM1917 using Tapatalk


Nice. Sounds good!



HeLeX63 said:


> Is Alphacool better in block design and thermal performance than EK do you think ?


If you ask me, yes. That's what I saw in my own tests and from some reviewers.



helis4life said:


> Do you have a reference model?


Yep. Wanted to have some choices when choosing a block.



helis4life said:


> Can AIB bios be flashed on to non AIB cards?


As far as I know we can't flash bios on the 6000 series. ATIflash doesn't support it (yet).

On the Nvidia 2000 series, many others and I cross-flashed bios between AIB and non-AIB cards. Even ones with 3x8 pins vs 2x8, and even the Kingpin card's bios onto reference boards. For normal OC it won't be an issue if the card accepts it. But when you go much further, like LN2, it could be an issue because of the power draw.


----------



## knightriot

My 6900XT under water with a Bykski block:

Temp:

Some pics from Cyberpunk (high res):
http://upanh.run/files/2020.12.25-22.16.png 
http://upanh.run/files/2020.12.25-22.16_01.png 
http://upanh.run/files/2020.12.25-22.07.png 
http://upanh.run/files/2020.12.25-22.12.png

And I'm playing gow3 [email protected] on it.

BTW, I can see better image and color quality than on my old 3080 Aorus WB (full sRGB enabled in the NV panel) on my LG 34WK95U-W 😂 Happy to be back on an AMD VGA (my last was a 290X)


----------



## ilmazzo

helis4life said:


> Can AIB bios be flashed on to non AIB cards?


dunno about that
Igor's process is about writing what you change in MPT into the bios to make it permanent, so you cook your own bios, mod it, and reflash it

so since Navi 1 doesn't have the bios locks for voltage and power that Navi 2 has, it was pointless to use other cards' bioses imho


----------



## ZealotKi11er

knightriot said:


> Mine 6900Xt underwater with bykski block:
> View attachment 2471280
> 
> Temp:
> View attachment 2471279
> 
> Some pic cyberpunk (high res)
> http://upanh.run/files/2020.12.25-22.16.png
> http://upanh.run/files/2020.12.25-22.16_01.png
> http://upanh.run/files/2020.12.25-22.07.png
> http://upanh.run/files/2020.12.25-22.12.png
> 
> And I playing gow3 [email protected] on it :
> 
> 
> 
> 
> 
> BTW i can see image and color quality better than my old 3080 aorus wb (enabled full srgb in nvi panel) on my LG 34WK95U-W 😂 Happy when back to AMD VGA (my last was 290X)


Cant wait for my block. What is ur stock core clock?

Also, you can see how little power AMD spends on its 256-bit G6 memory vs Nvidia's 384-bit G6X. 
AMD's ASIC gets a lot more of the power budget: a 300W 6800XT in core power terms is like a 450W Nvidia card.


----------



## knightriot

ZealotKi11er said:


> Cant wait for my block. What is ur stock core clock?
> 
> Also, you can see how little power AMD uses for 256-Bit G6 vs Nvidia 384-Bit G6X.
> ASIC AMD uses a lot more power. A 300W 6800XT for Core power is like 450W for Nvidia.


My stock clocks under water are around ~2300-2400 at 250W; I don't know why AMD rates it at 300W TDP


----------



## ZealotKi11er

knightriot said:


> My stock underwater around ~2300~2400 at 250W , I don't know why amd rated at 300W tdp


AMD rates the power after the VRs.
Nvidia rates the power before the VRs.

So if Nvidia says a GPU uses 320W, it will pull 320W from the PSU. From the wall, it will be 320W divided by the PSU efficiency.
AMD's number is after the VRs. If you include VR efficiency: 255W/0.85 = 300W (assuming the VRs are 85% efficient).
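The two accounting conventions described above can be written out as a quick sketch. The 85% VR efficiency and the 90% PSU efficiency used here are illustrative assumptions, not measured figures:

```python
# Sketch of the two power-accounting conventions: AMD rates ASIC power
# (after the voltage regulators), Nvidia rates board power (before them).

VR_EFFICIENCY = 0.85  # assumed VR efficiency, as in the thread's example

def board_power_from_asic(asic_w, vr_eff=VR_EFFICIENCY):
    """AMD-style: board draw from the PSU = rated ASIC power + VR losses."""
    return asic_w / vr_eff

def wall_power(board_w, psu_eff):
    """Either vendor: wall draw adds PSU conversion losses on top."""
    return board_w / psu_eff

print(round(board_power_from_asic(255)))  # 255W ASIC -> 300 (W at the board)
print(round(wall_power(320, 0.90)))       # 320W board on a 90%-eff PSU -> 356 (W at the wall)
```

This is why a "300W" 6800 XT and a "320W" Nvidia card are closer in real draw than the headline numbers suggest.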


----------



## knightriot

ZealotKi11er said:


> AMD rates the power after VR.
> Nvidia rates the power before VR.
> 
> So if Nvidia says GPU uses 320W it will pull 320W from the PSU. From the wall, it will be 320W/PSU efficiency.
> AMD is after VR. If you include VR efficiency 255W/0.85 = 300W (Assuming VR are 85% efficiency).


And about power, yes. My old 3080 Aorus WB at 370W vs my old 2080 Ti at 380W (both under water, < 50c) was just ~15-20% more performance. I use a 5120x2160 10bpc monitor, so DLSS just gives me blur and RT is unplayable, so I'm very happy with my new 6900xt


----------



## LtMatt

Managed to snag a 6900 XT to go alongside my existing 6800 XT. 

No coil whine on my 6900 XT unless in menu screens when FPS are 1000-2000. My 6800 XT has a bit more whine though, but I've dampened it down a bit since purchase.



Spoiler















Done some testing with a tuned 6800 XT vs a tuned 6900 XT (identical power limits, clock speeds to within a few MHz, and voltages to within a few mV), and the 6900 XT is 10-12% faster in my testing when GPU bound.

The 6900 XT runs at a lower voltage (around 20-30mv roughly) for the same clocks, so it runs cooler and quieter for me.

Call of Duty Black Ops Cold War
2560x1440
Max Settings, with RT/Blur and Film Grain off
2500Mhz/2100Mhz with Fast timings

6800 XT


Spoiler
















6900 XT


Spoiler















12% faster in the screenshots above.

Took some videos in Call of Duty Black Ops Cold War/Modern Warfare online and single player with the 6900 XT.

5950X + 6900 XT | Call of Duty Modern Warfare | Cheshire Park Part 4 - YouTube 

5950X + 6900 XT | Call of Duty Black Ops Cold War | Campaign Scene 1 - YouTube

5950X + 6800 XT | Call of Duty Black Ops Cold War | Campaign Scene 1 - YouTube

I was able to run this on my 650W PSU with a 50A 12V rail, however I get shutdowns if I try to overclock beyond 2500Mhz on the 6900 XT only, so I have ordered an 850W unit with a 65A 12V rail, which should give me more headroom.
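The shutdowns above line up with simple rail arithmetic: nearly everything (GPU + CPU) pulls from the 12V rail, so the amp rating is the real limit. A quick sketch (the load figures are my rough assumptions, not measurements):

```python
# 12V rail headroom check: rail capacity in watts = rated amps x 12V.
# Assumption: GPU and CPU draw almost entirely from the 12V rail.

def rail_watts(amps, volts=12.0):
    return amps * volts

print(rail_watts(50))  # 600.0 -> the 650W unit's 12V rail
print(rail_watts(65))  # 780.0 -> the 850W unit's 12V rail

# A 6900 XT spiking past ~350W plus a 5950X under load can transiently
# push past a 600W rail, which would explain shutdowns only when
# overclocking the GPU further.
```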


----------



## Starkinsaur

ilmazzo said:


> go on igor’s lab and you will find everything


Which tool are you referring to? I read Igors before asking but don’t see any utility which states support for RX 6000 bios flash.


----------



## helis4life

Starkinsaur said:


> Which tool are you referring to? I read Igors before asking but don’t see any utility which states support for RX 6000 bios flash.


Right now i don't think there is one for bios flashing, just MPT


----------



## helis4life

Repasted my EK block. This time I used the supplied TIM and spread it evenly over the GPU. Temps now begin in the mid 40s and slowly creep up as water temp increases. More importantly, hotspot now maxes out 13 degrees above edge. 

Previous attempt I used Kryonaut and applied it according to EK instructions, but tbh I think it's too thick and there's not enough mounting pressure on the block to spread it thinly enough


Has anyone with a SAM enabled system checked whether Sam helps with timespy/superposition benches?


----------



## ZealotKi11er

helis4life said:


> Repasted my EK block. This time used supplied TIM and spread it evenly over the GPU. Temps now begin in the mid 40s and slowly creep up as water temp increases. More importantly though hotspot now maxes out 13 degrees above edge.
> 
> Previous attempt i used kryonaut and applied it according to EK instructions but tbh i think it's too thick and there's not enough mounting pressure on the block to spread it thinnly enough
> 
> 
> Has anyone with a SAM enabled system checked whether Sam helps with timespy/superposition benches?


I don't think it helps synthetic tests. Only tested FireStrike though.


----------



## kazukun

helis4life said:


> Were you able to complete any benches at that setting?


It works fine.


----------



## ilmazzo

Starkinsaur said:


> Which tool are you referring to? I read Igors before asking but don’t see any utility which states support for RX 6000 bios flash.





helis4life said:


> Right now i don't think there is one for bios flashing, just MPT


yup

I think I got it wrong; right now it seems only MPT supports Navi 2.....


----------



## helis4life

kazukun said:


> It works fine.


What were your scores? This a reference card right?


----------



## majestynl

LtMatt said:


> Managed to snag a 6900 XT to go alongside my existing 6800 XT.
> 
> No coil whine on my 6900 XT unless in menu screens when FPS are 1000-2000. My 6800 XT has a bit more whine though, but I've dampened it down a bit since purchase.
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Done some testing and with a tuned 6800 XT vs a tuned 6900 XT, identical power limits, clock speeds (to within a few Mhz) and voltages (to within a few mv) and the 6900 XT is 10-12% faster in my testing when GPU bound.
> 
> 6900 XT runs at a lower voltage (around 20-30mv roughly) for the same clocks, so runs cooler and quieter for me.
> 
> Call of Duty Black Ops Cold War
> 2560x1440
> Max Settings, with RT/Blur and Film Grain off
> 2500Mhz/2100Mhz with Fast timings
> 
> 6800 XT
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 6900 XT
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 12% faster in the screenshots above.
> 
> Took some videos in Call of Duty Black Ops Cold War/Modern Warfare online and single player with the 6900 XT.
> 
> 5950X + 6900 XT | Call of Duty Modern Warfare | Cheshire Park Part 4 - YouTube
> 
> 5950X + 6900 XT | Call of Duty Black Ops Cold War | Campaign Scene 1 - YouTube
> 
> 5950X + 6800 XT | Call of Duty Black Ops Cold War | Campaign Scene 1 - YouTube
> 
> I was able to run this on my 650W PSU with a 50a 12v, however I get shut downs if I try to overclock beyond 2500Mhz on the 6900 XT only, so have ordered an 850W Watt with 65a 12v so that should give me more headroom.


Thanks for the share m8!



helis4life said:


> Repasted my EK block. This time used supplied TIM and spread it evenly over the GPU. Temps now begin in the mid 40s and slowly creep up as water temp increases. More importantly though hotspot now maxes out 13 degrees above edge.
> 
> Previous attempt i used kryonaut and applied it according to EK instructions but tbh i think it's too thick and there's not enough mounting pressure on the block to spread it thinnly enough
> 
> 
> Has anyone with a SAM enabled system checked whether Sam helps with timespy/superposition benches?


Good to see you have better results. I've always had better results with Kryonaut vs EK's paste.
I bet you get at least the same results if you repaste with Kryo and spread it out.



ilmazzo said:


> yup
> 
> I think I got it wrong, right now seems only MPT is supporting navi2.....


Yeap, said this earlier. 

Maybe AMD blocked this because people would flash a 6900 bios over a 6800! Anyways, let's hope ATIFlash gets an update or a mod.


----------



## HeLeX63

majestynl said:


> Nice! Will try to edit voltage and check if it accepts it. Can't do too much it's still on air. But will definitely play more when block arrives.
> 
> Will let you guys know the outcome.
> 
> Ps: what card are you having?


Did you order the Alphacool Block ??? And do you own reference or an AIB model ?


----------



## HeLeX63

helis4life said:


> Repasted my EK block. This time used supplied TIM and spread it evenly over the GPU. Temps now begin in the mid 40s and slowly creep up as water temp increases. More importantly though hotspot now maxes out 13 degrees above edge.
> 
> Previous attempt i used kryonaut and applied it according to EK instructions but tbh i think it's too thick and there's not enough mounting pressure on the block to spread it thinnly enough
> 
> 
> Has anyone with a SAM enabled system checked whether Sam helps with timespy/superposition benches?


Kryonaut is a superior paste with a higher conductivity rating. Maybe you re-mounted it slightly better?


----------



## helis4life

HeLeX63 said:


> The Kryonaut is a superior paste with higher conductivity rating. Maybe you re-mounted it slightly better ?


The only part of the process I changed was the application of the thermal paste. But as I mentioned, Kryonaut is quite thick and there is no tension loading on the cold plate like you find on CPU coolers. I just think I put too much on at first, and without high tension it wasn't able to spread thinly enough. If I had reapplied the Kryonaut thinly it would have produced the same or better results, I'm sure. This wasn't a reflection on the Kryonaut but rather on my method of applying it initially


----------



## HeLeX63

helis4life said:


> The only part of the process i changed was the application of the thermal paste. But as i mentioned the kryonaut is quite thick and there is no tension loading on the cold plate like you find on CPU coolers. I just think i put too much on first and without high tension it wasn't able to spread thinnly enough. If I had reapplied the kryonaut thinnly it would have produce the same or better results im sure. This wasn't a reflection on the kryonaut but rather my method of applying initially


Yeah fair enough. What are you max and average boost frequencies under water ?


----------



## ZealotKi11er

HeLeX63 said:


> Yeah fair enough. What are you max and average boost frequencies under water ?


If you are not power limited it should be 40-60 MHz less than ur Max Frequency and stay there.


----------



## HeLeX63

ZealotKi11er said:


> If you are not power limited it should be 40-60 MHz less than ur Max Frequency and stay there.


My average frequency is 2650 to 2680MHz. Max Freq set to 2725.


----------



## danny9428

My 6900xt reference running Timespy

I scored 17 373 in Time Spy

Intel Xeon Processor E5-1660 v3, AMD Radeon RX 6900 XT x 1, 131072 MB, 64-bit Windows 10

www.3dmark.com

Since I still haven't got my new board for my 5950X, I only ran a brief test with MPT.

Will retest its potential with the new rig


----------



## majestynl

HeLeX63 said:


> Did you order the Alphacool Block ??? And do you own reference or an AIB model ?


I ordered one a few weeks ago but it's been postponed. So I'm looking at the Heatkiller. But if it takes too long I will go with EK


----------



## LtMatt

danny9428 said:


> My 6900xt reference running Timespy
> View attachment 2471414
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 17 373 in Time Spy
> 
> 
> Intel Xeon Processor E5-1660 v3, AMD Radeon RX 6900 XT x 1, 131072 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Since I still haven’t got my new board for my 5950X, I only ran a brief test with MPT
> 
> will retest it’s potential with new rig


Nice graphics score. 

What clock speeds did you set in Radeon Software and what was the average core speed shown when the result was compared online?


----------



## dagget3450

I will be joining this club soon, have a 6900xt on the way. Coming from a Vega FE/64, this should be a nice upgrade.


----------



## untouchable247

Pulled the trigger on a 6900XT Red Devil Limited Edition today. Worthy successor to my Radeon VII? 

But it won't hold its value like the R VII, that's for sure. And I need a new PSU. I'll try my Seasonic Titanium 650W but not long term.


----------



## ZealotKi11er

untouchable247 said:


> Pulled the trigger on a 6900XT Red Devil Limited Edition today. Worthy successor to my Radeon VII?
> 
> But it won't hold it's value like the R VII, that's for sure. And I need a new PSU. I'll try my Seasonic Titanium 650W but not long term.


Radeon 7 holds value because it's not just a gaming card.


----------



## ZealotKi11er

Just got this bad boy.


----------



## kazukun

ZealotKi11er said:


> Just got this bad boy.


Excellent.


----------



## LtMatt

untouchable247 said:


> Pulled the trigger on a 6900XT Red Devil Limited Edition today. Worthy successor to my Radeon VII?
> 
> But it won't hold it's value like the R VII, that's for sure. And I need a new PSU. I'll try my Seasonic Titanium 650W but not long term.


Good luck. My 6900 XT trips my 650 Seasonic Focus Plat frequently even at stock. I had to undervolt to get things stable, got a Corsair HX1000 on the way.


----------



## untouchable247

LtMatt said:


> Good luck. My 6900 XT trips my 650 Seasonic Focus Plat frequently even at stock. I had to undervolt to get things stable, got a Corsair HX1000 on the way.


Powercolor recommends a 900W PSU, so I'll probably run the 6900 XT undervolted in silent bios for the first week or so. Ordered an EVGA 1000 T2 but it'll take a few days.


----------



## No-one-no1

majestynl said:


> Nice! Will try to edit voltage and check if it accepts it. Can't do too much it's still on air. But will definitely play more when block arrives.
> 
> Will let you guys know the outcome.
> 
> Ps: what card are you having?


Very interesting! Wondering if I need to order an AIB model to replace my reference 6900xt to be able to bump up the voltages (while testing custom cooling stuff).
Don't really want to take off the stock cooler to take measurements if I need to swap to a different pcb anyway.


----------



## OrionBG

Hey guys, I've been contemplating what my next GPU is going to be. It is either the RX6900XT or the RTX3080Ti (when it gets released). 
I know that the RTX cards have better ray tracing and better drivers, but the RX6900XT seems to be more fun to OC and it is ...something different...
Can someone who has the RX6900XT on water and north of 2700MHz post some results from 3DMark Port Royal and the DirectX Raytracing feature test?
Thanks!


----------



## ZealotKi11er

Don't remove the stock cooler unless you put a water block. 
Played a bit with 6900 XT overclocking. Got FireStrike to run 2700 and TimeSpy at 2600.


----------



## ZealotKi11er

OrionBG said:


> Hey guys, I've been contemplating what my next GPU is going to be. It is either the RX6900XT of the RTX3080Ti (when it gets released)
> I know that the RTX cards have better RayTracing and better drivers but the RX6900XT seems to be more fun to OC and it is ...something different...
> Can someone that has the RX6900XT on water and north of 2700MHz post some results from 3DMark Port Royal and the DirectX RayTracing Feature tests?
> Thanks!


I have a 3080/6800XT/6900XT.
If you want to run FireStrike and TimeSpy, get AMD.
If you want to run Port Royal, get Nvidia.

The thing with the 6900XT is that you can break some records if you get a good card. The card-to-card difference is quite large though. 
Some cards can do 2600, some 2800+.

The 3080 Ti will never make it to the top of anything because of the 3090.


----------



## OrionBG

ZealotKi11er said:


> I have 3080/6800XT/6900XT.
> If you want to do FireStrike and TimeSpy get AMD
> If you want to do Port Royal get Nvidia
> 
> The thing with 6900XT is that you will break some records if you get a good card. The difference is quite large though.
> Some cards can do 2600, some 2800+.
> 
> 3080 Ti will never make it to anything at the top because of 3090.


To be fair, I don't really need to break any records. The 3090 has a price tag that, at least for me, is more than ridiculous. The RX6900XT is a more palatable option at nearly the same (or even better) rasterization performance. The problem (for me at least) is that the two games I currently play are Cyberpunk and sometimes Minecraft. I have a 4K 120Hz OLED "monitor", and currently at 4K Ultra ray tracing with DLSS on Balanced I get an astonishing 35FPS with a single RTX2080Ti. While neither of those two games currently supports RT on the RX6900XT, they will soon, and then I wonder whether I should stick with what has proven itself (at the same price as the AMD offering) or go with AMD and have an all-Red-Team build.
One more thing: with the RX6900XT I would lose the VRR capability of the LG OLED, as it has been certified only for G-Sync and not FreeSync... although it should support it, as VRR is part of the HDMI 2.1 spec. AMD/LG should do something about this...


----------



## toxick

Hi, 

Today I received my RX 6900 XT.








I scored 10 448 in Port Royal
AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10

https://www.3dmark.com/spy/16876712


----------



## No-one-no1

No-one-no1 said:


> Very interesting! Wondering if I need to order a AIB model to replace my reference 6900xt to be able to bump up the voltages (while testing custom cooling stuff).
> Don't really want to take off the stock cooler to take measurments if I need to swap to a different pcb anyway.


Stock volt/MHz curve is:
<=1900MHz: 800mV
>1900MHz and <2600MHz: the "mV setting" in the AMD Radeon tool shapes the curve
>=2600MHz: 1175mV
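That curve can be sketched as a little model; the clamp points are taken from this post, while the linear middle section is just my assumption of how the slider shapes it, not the actual firmware behaviour:

```python
def vcore_mv(mhz: int, slider_mv: int = 1175) -> int:
    """Rough model of the stock volt/MHz curve described above.

    <=1900MHz is pinned at 800mV, >=2600MHz at the slider maximum
    (1175mV stock); the middle section is assumed linear here, which
    the real fused curve almost certainly is not.
    """
    lo_mhz, hi_mhz = 1900, 2600
    lo_mv = 800
    if mhz <= lo_mhz:
        return lo_mv
    if mhz >= hi_mhz:
        return slider_mv
    # assumed linear interpolation between the two clamp points
    t = (mhz - lo_mhz) / (hi_mhz - lo_mhz)
    return round(lo_mv + t * (slider_mv - lo_mv))

print(vcore_mv(1800))  # 800
print(vcore_mv(2700))  # 1175
```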

Tested 1200 and 1250mV max with MorePowerTool 1.3.2. The settings don't work:
1200 resulted in a max of 1018mV, with the same 800mV minimum.
1250 results in system crashes, probably due to too low a voltage for the clock, or the curve calculation failing completely and doing screwy things.

Not sure if the mess is caused by the Radeon tool interfering after MorePowerTool has set its values. But I don't feel like testing more, since this seems to have the potential to corrupt the Windows install.
Hopefully there's an update to MorePowerTool to work around this AMD issue. But... I might also just need to order an AIB 6900xt and wait around for that.


----------



## ZealotKi11er

OrionBG said:


> To be fair I don't really need to break any records. The 3090 has a price tag that at least for me is more than ridiculous. The rx6900xt is a more palatable option at nearly the same (or even better) rasterization performance. The problem (for me at least) is that the two games that I currently play are Cyberpunk and sometimes Minecraft. I have a 4K 120Hz OLED "monitor"  and currently at 4K Ultra RayTracing with DLSS on Balanced, I get an astonishing 35FPS with a single RTX2080Ti. While currently, non of those two games support RT on the RX6900XT, they will soon and then I wonder, if I should stick with what has proven itself (and the price is the same as the AMD offering) or should I go with AMD and have an all Red Team build.
> One more thing, with the RX6900XT I will lose the VRR capability of the LG OLED as it has been certified only for G-Sync and not Freesync...although it should support it as VRR is part of the HDMI2.1 spec and AMD/LG should do something about this...


I also use an OLED, but the older one. I don't think the 3080/Ti/90 offers much of an upgrade in this game; it's just that demanding. DLSS is also not that good, even on Quality, relative to native. The CX should have FreeSync; on the C9 I think you need to hack the driver for VRR.

Only issue is that 3080 Ti might take time to come out.


----------



## ZealotKi11er

My best Port Royal score so far.

2750/2100

AMD Radeon 6900 XT video card benchmark result - Intel Core i9-9900K Processor,ASUSTeK COMPUTER INC. ROG MAXIMUS XI HERO (WI-FI) (3dmark.com)


----------



## SirMaxxi

Hi Folks, I just received this 6900xt ASRock Phantom OC Card and i'm now actually not sure if I want to keep it for the following reasons!

I do play games and I do video editing, and the video editing uses NVENC, which is far more optimized than AMD's encoder, and then there's the ray tracing, which as I'm sure you all know is much better.

I do have a 5950x and ROG dark Hero x570 Mobo but i'm still not sure about this GPU and may have bought it in haste.

I would like some honest opinions if I should sell this brand new 6900xt still boxed for maybe a 3090 or wait for a 3080Ti etc

What are everyone's honest thoughts on this? I'd like to know, because I don't want to regret this down the track.

Thanks in advance!


----------



## ZealotKi11er

SirMaxxi said:


> View attachment 2471963
> 
> 
> Hi Folks, I just received this 6900xt ASRock Phantom OC Card and i'm now actually not sure if I want to keep it for the following reasons!
> 
> I Do play Games and I do video editing and both of those use NVENC for the Video Editing which is far more optimized than AMD and also the Ray Tracing which as im sure you all know its much better.
> 
> I do have a 5950x and ROG dark Hero x570 Mobo but i'm still not sure about this GPU and may have bought it in haste.
> 
> I would like some honest opinions if I should sell this brand new 6900xt still boxed for maybe a 3090 or wait for a 3080Ti etc
> 
> Whats everyone's thoughts on this honestly I would like to know because I don't want to regret this down the track
> 
> Thanks in advance!


What GPU do you have now? If you already are questioning yourself, return it and go Nvidia.


----------



## SirMaxxi

ZealotKi11er said:


> What GPU do you have now? If you already are questioning yourself, return it and go Nvidia.


I have a trusty old 1080Ti but im going to be doing 4K Video Editing and Gaming!


----------



## ZealotKi11er

Which video editing software do you use?


----------



## SirMaxxi

ZealotKi11er said:


> Which video editing software do you use?


I use Cyberlink Power Director, amazing software for Video Editing!


----------



## OrionBG

ZealotKi11er said:


> I also use OLED but the older one. I dont think 3080/Ti/90 offer much of an upgrade in this game. It its just that demanding. DLSS also not that good even in quality relative to native. CX should have FreeSync, C9 i think you need to hack the driver for VRR.
> 
> Only issue is that 3080 Ti might take time to come out.


I'm not in a big hurry. I had two RTX 2080 Ti cards and already sold one. The second one is going to a friend, but that will happen either next month or even in February. Then I'll have the funds to get a new GPU. Basically, what I'm trying to decide is which of the two cards (if they are similarly priced) is the better buy...
I guess we will see...

BTW I'm using the LG B9. It still has the same input capabilities as the C9, but an older processing chip that doesn't play any role in Game Console/PC modes anyway...


----------



## bluezone

I haven't been on the forum for a while, so hi everyone. I was looking for an RX 6800 for a few weeks and ended up picking up a Sapphire RX 6900 XT. Hope this proves to be a good clocker. I have not tried anything with it yet.


----------



## majestynl

SirMaxxi said:


> I use Cyberlink Power Director, amazing software for Video Editing!


CPD is not that amazing, my friend. Sorry to say. Anyways.

As far as I know, Nvidia CUDA etc. wasn't fully supported a while ago. Dunno if that has changed.

But as another member said: if you're already questioning it, sell the 6900 and go for green.

The 6000 series are great cards for (rasterized) gaming, but the best part for enthusiasts is the OC capability... more fun


----------



## LtMatt

Safe to say I have not won the lottery on either my 6800 XT or my 6900 XT. Both are reference design, and I use MPT to ensure I have no power limits.

Both GPUs top out at 2685MHz in Radeon Software at max voltage. This is bench stable only. Not sure, but it looks like this is below average.


----------



## ZealotKi11er

As far as ray tracing goes, AMD is not that far behind: a 6800 XT OCed scores 10.xK while a 3080 OCed scores 12.xK. 20% faster is not that much, considering this test was built for Nvidia GPUs. The only thing that makes RT more usable on Nvidia is DLSS in Nvidia-sponsored games.


----------



## LtMatt

LtMatt said:


> Safe to say I have not won the lottery on either my 6800 XT or my 6900 XT. Both are reference design, and I use MPT to ensure I have no power limits.
> 
> Both GPUs top out at 2685MHz in Radeon Software at max voltage. This is bench stable only. Not sure, but it looks like this is below average.


Best scores I was able to manage with my reference 6900 XT with MPT, in Timespy Extreme, Timespy, and Fire Strike Ultra and Extreme.









I scored 10 279 in Time Spy Extreme
I scored 20 264 in Time Spy
I scored 15 281 in Fire Strike Ultra
I scored 28 628 in Fire Strike Extreme

AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
These were run one after the other so could perhaps improve on the bottom three a tiny amount with a cold run.


----------



## ZealotKi11er

LtMatt said:


> Best scores i was able to manage with my 6900 XT reference with MPT in Timespy Extreme, Standard and FS Ultra and Extreme.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 10 279 in Time Spy Extreme
> I scored 20 264 in Time Spy
> I scored 15 281 in Fire Strike Ultra
> I scored 28 628 in Fire Strike Extreme
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> 
> These were run one after the other so could perhaps improve on the bottom three a tiny amount with a cold run.


I have noticed that TimeSpy is the most stressful for my RX 6800; I need to run about 100MHz lower to be stable.

TimeSpy 2600MHz
FireStrike 2725MHz
Port Royal 2750MHz

I played at 2700MHz for about 2 hours of CP2077 with 0 issues.


----------



## LtMatt

ZealotKi11er said:


> I have noticed that TimeSpy is the most stresful for RX6800. I need like 100MHz to be stable.
> 
> TimeSpy 2600MHz
> FireStrike 2725MHz
> Port Royal 2750MHz
> 
> I played at 2700MHz for about 2 hours of CP2077 with 0 issues.


Yeah, no chance I could get near that. The best I can hope for would be 2600 under gaming load, but that would require almost maximum fan speed, I think.

I've settled on 2500-2520MHz in-game clock speed at 1.081-1.106v. This requires a fan speed of 60%, which I find fairly quiet. I definitely have poor samples, I think.


----------



## ZealotKi11er

LtMatt said:


> Yeah no chance I could get near that, best I can hope for would be 2600 under gaming load but that would require almost maximum fan speed I think.
> 
> I’ve settled on 2500Mhz-2520Mhz in game clock speed at 1.081v-1.106v. This requires a fan speed of 60% which is fairly quiet I find. I definitely have poor samples I think.


Those are with the 6900 XT with MPT at 400W and a water block.
Also, that is the set clock. The actual clock is 50-60MHz lower, so when I play at 2700 in CP2077 I am getting 2630-2660MHz.


----------



## LtMatt

ZealotKi11er said:


> Those are with 6900 XT with MPT at 400W with Water Block.
> Also, it is set CLK. The actual clk is 50-60MHz lower. So when I play 2700 in CP2077 I am getting 2630-2660MHz.


Yeah I get similar lower clocks but it’s more like 60-70Mhz for me. 
I have to set 2570 in Radeon Software to get 2500Mhz in game with almost no drops below 2500.


----------



## bluezone

Anyone know where the junction sensors are located? Just on the die, or across the PCB?


----------



## ZealotKi11er

bluezone said:


> Anyone know where the junction sensors are located?, just on die or across the pcb.


The junction sensor is the hot spot. They are spread across the die; whichever reads the highest temp is the one that is reported.
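In other words, the card reports something like this (a toy sketch with made-up sensor values, just to show the max-of-sensors idea):

```python
# Hypothetical on-die temperature diode readings in Celsius.
die_sensors_c = [74, 81, 96, 88, 79]
edge_temp_c = 72  # the single "GPU temp" edge sensor

# Junction / hot spot = hottest of the distributed die sensors.
junction_c = max(die_sensors_c)
delta_c = junction_c - edge_temp_c  # the differential people quote

print(junction_c, delta_c)  # 96 24
```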


----------



## bluezone

ZealotKi11er said:


> Junction Sensor is the Hot Spot. There are across the die. Whichever is the highest temp is the one that is reported.


Thanks.
That's what I thought. I have a 30-35°C differential compared to the general GPU temperature. Most everything else is in a safe range under load. I think I have a bad thermal pad on the die.


----------



## ZealotKi11er

bluezone said:


> Thanks.
> That's what i thought. I've have a 30-35 deg differential compared to the general GPU temperature. Most everything else is safe range when in use. I think i have a bad thermal pad on the die.


I had the same as you as I increased the power. With the water block I get a much better delta, so far a 15C difference.


----------



## bluezone

ZealotKi11er said:


> I had the same as you increase the power. With water block i get way better delta so far 15c difference


Not bad.

I think I'm going to try a few tricks I used on my R9 Nano. It kept up pretty well with the Fury X's.


----------



## helis4life

ZealotKi11er said:


> As far as Ray Tracing goes, AMD is not that far behind 6800 XT OCed 10.xK while 3080 OCed is 12.xK. 20% faster is not that much considering this was build for Nvidia GPUs. The only thing that makes RT more usable with Nvidia is in Nvidia sponsored games with DLSS.


Which bench?


----------



## ZealotKi11er

helis4life said:


> Which bench?


Port Royal?


----------



## bluezone

Best I've managed without tearing the card apart. 
Ya I know my rig is a bit old. lol.


















I scored 16 261 in Fire Strike Extreme
Intel Core i7-2600 Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10


----------



## helis4life

ZealotKi11er said:


> Port Royal?


Try the ray tracing feature test. 6000 series doesn't come close unfortunately


----------



## ZealotKi11er

helis4life said:


> Try the ray tracing feature test. 6000 series doesn't come close unfortunately


Yeah, which is expected. You will never get that level of RT in games. It's like tessellation back in the Fermi days: only feature tests would really hurt AMD, while in games the perf hit was the same.


----------



## dagget3450

Got my 6900xt today, what a beastly card. Nice upgrade from my mGPU Vega FE setup. I haven't tinkered too much but did a few quick Timespy runs.

I am still on an R5 3600 with PCIe 3.0, and my RAM timings aren't very good. I do have an X570 mobo coming in a week though, and if I get a chance soon I'll grab a 5600X or a 5800X, which should help my fps/scores a decent chunk.

https://www.3dmark.com/3dm/55953466?


----------



## knightriot

6900xt under water, MPT 350W and [email protected], got 2600-2700 in games. My card can't go 2800, it crashes anyway.









odyssey bench at 4k 









SOTB at 4k


----------



## LtMatt

knightriot said:


> 6900xt underwater, MPT 350W and [email protected] , got 2600~2700 in games  , my card can't go 2800  crash anyway
> View attachment 2472133
> 
> 
> odyssey bench at 4k
> 
> View attachment 2472134
> 
> SOTB at 4k
> View attachment 2472138


Nice! What score do you get in Timespy Extreme with these settings?


----------



## knightriot

LtMatt said:


> Nice! What score do you get in Timespy Extreme with these settings?


Here it is.

I scored 10 598 in Time Spy Extreme
AMD Ryzen Threadripper 3970X, AMD Radeon RX 6900 XT x 1, 65536 MB, 64-bit Windows 10


----------



## danny9428

LtMatt said:


> Nice graphics score.
> 
> What clock speeds did you set in Radeon Software and what was the average core speed shown when the result was compared online?


MPT 320W +15% = chip power 368W
Radeon Software core clock = 2650MHz (3DMark shows the average clock at 2599MHz)
Memory at 2150MHz + fast timings

I also deliberately limited the Vcore of my 6900XT to 1.15V instead of 1.175V,
yet the card still maxes out the extra chip power limit set by MPT and constantly power throttles at 368W chip power.
At this clock speed the extra Vcore still appears unnecessary, which hints there's still a lot of potential.
(Once again a bit disappointed AMD feels these cards need to be limited to 3000MHz max.)

Sadly my old 650D case with inadequate intake and 8-year-old HX750 PSU is not quite up to the task (the PSU ran very loud when I pushed both CPU and GPU at OC).
I'm still waiting for my new case and motherboard to come; buying PC parts these days has not been easy.
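For anyone wondering where the 368W comes from: MPT sets the base chip power limit, and the Radeon Software slider adds a percentage on top. A one-liner, using integer math to keep it exact:

```python
# Chip power = MPT base limit scaled by the Radeon Software slider.
mpt_limit_w = 320   # base power limit set in MorePowerTool
slider_pct = 15     # +15% power limit slider in Radeon Software

chip_power_w = mpt_limit_w * (100 + slider_pct) // 100
print(chip_power_w)  # 368
```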


----------



## LtMatt

danny9428 said:


> MPT 320W +15% = Chip power 368W
> Radeon Software Core Clock = 2650Mhz (3dmark shows average clock at 2599Mhz)
> Memory at 2150Mhz + Fast timing
> 
> I also deliberately limited the V.Core of my 6900XT to 1.15V instead of 1.175V
> yet the card still maxes out the extra chip power limit set by MPT and constantly power throttles at 368W chip power
> at this clock speed the extra V.Core still appears unnecessary which hints there's still a lot of potential
> (Once again a bit disappointed AMD feels these cards needs to be limited at 3000Mhz max)
> 
> Sadly my old 650D case with inadequate intake and 8-year old HX750 psu is not quite up to the task (The PSU ran very loud when I punched both CPU and GPU at OC)
> I'm still waiting for my new case and motherboard to come, buying PC parts these days have not been easy


Thanks. I am still running the reference cooler, so when not benching I limit the card to 300W PPT, voltage locked at 1.075v.

This is good for a 2500MHz in-game core clock and 2100MHz in-game memory clock.

Seems a nice compromise between performance and quiet gaming without needing fan speeds over 60%.


knightriot said:


> here is it
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 10 598 in Time Spy Extreme
> AMD Ryzen Threadripper 3970X, AMD Radeon RX 6900 XT x 1, 65536 MB, 64-bit Windows 10


Very nice, here is mine. 
AMD Radeon 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Your CPU score is great.


----------



## danny9428

I'm also running my card at 2500MHz for daily gaming, except I still max out the memory slider and I have the voltage at 1.065V.
I've tried lower voltages, but it'll be unstable and prompt a driver restart when a 2D static render kicks the card. It appears 1.065V is the absolute lowest I can run at roughly stock clocks.


----------



## LtMatt

danny9428 said:


> I'm also running my card at 2500Mhz while daily gaming, except I still max out the memory slider and I have voltage at 1.065V
> I've tried lower voltages but it'll be unstable and prompt a driver restart when a 2D static render kicks the card, it appears 1.065V would be the absolute lowest I could run while at roughly stock clock


Seems similar to mine. 

If I run 3DMark Timespy Extreme at the settings I mentioned above, I get a crash. But so far in all the games I've tested I don't get any instability.

It's in that second Timespy Extreme scene, about halfway through; something in that part causes a big spike in power draw and I crash at that point.

I probably need to bump the voltage a couple of notches, but if it's stable in games without crashing then that will do.

If you have Timespy Extreme, try it with a standard fan profile like mine and see if you pass it without crashing.


----------



## Stag

Am I missing something? I set the card to 1.0v, 500-2600 for gaming, and it runs fine there, but voltages & clocks are all over the place.
When setting volts in Wattman, is it min, max, or something else?
When setting MHz in Wattman, is it min, max, or something else?


----------



## ZealotKi11er

If you want to undervolt, it's probably best to use MPT and limit the voltage there.


----------



## ZealotKi11er

danny9428 said:


> MPT 320W +15% = Chip power 368W
> Radeon Software Core Clock = 2650Mhz (3dmark shows average clock at 2599Mhz)
> Memory at 2150Mhz + Fast timing
> 
> I also deliberately limited the V.Core of my 6900XT to 1.15V instead of 1.175V
> yet the card still maxes out the extra chip power limit set by MPT and constantly power throttles at 368W chip power
> at this clock speed the extra V.Core still appears unnecessary which hints there's still a lot of potential
> (Once again a bit disappointed AMD feels these cards needs to be limited at 3000Mhz max)
> 
> Sadly my old 650D case with inadequate intake and 8-year old HX750 psu is not quite up to the task (The PSU ran very loud when I punched both CPU and GPU at OC)
> I'm still waiting for my new case and motherboard to come, buying PC parts these days have not been easy


I have seen some 6800 XT reach 2800 but no 6900 XT come close to 3000.


----------



## LtMatt

ZealotKi11er said:


> If you want to undervolt probably is best to use MPT and limit voltage there.


This. I get more stability by just locking voltage rather than allowing the fluctuations from setting it in Radeon Software. Setting it in MPT to 1.075v should get you around a 100Mhz overclock from stock and give you a nice undervolt that will keep power draw at 300W or lower under full gaming load.


----------



## ZealotKi11er

LtMatt said:


> This. I get more stability by just locking voltage rather than allowing the fluctuations from setting it in Radeon Software. Setting it in MPT to 1.075v should get you around a 100Mhz overclock from stock and give you a nice undervolt that will keep power draw at 300W or lower under full gaming load.


Since I water-cooled mine, I can run 1.175v no problem, and it will use 330-380W. In reality, after 2.4GHz you are not going to get much gaming perf, but for some people every % matters. 2.4GHz to 2.7GHz will net you about 5-7% perf, but you are going to draw 100-150W more power.
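Putting rough numbers on that trade-off (the +6% and 430W figures below are midpoints of the ranges quoted above, not measurements):

```python
# Perf-per-watt at ~2.4GHz/300W vs ~2.7GHz/430W, using ballpark
# figures from the discussion above.
base_perf, base_watts = 100.0, 300.0  # normalized perf at 2.4GHz
oc_perf, oc_watts = 106.0, 430.0      # ~+6% perf for ~+130W

base_eff = base_perf / base_watts
oc_eff = oc_perf / oc_watts
print(f"{base_eff:.3f} vs {oc_eff:.3f} perf/W")  # 0.333 vs 0.247 perf/W
```

Efficiency drops by roughly a quarter for a single-digit perf gain, which is why the top of the curve only makes sense if every % matters.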


----------



## Stag

When I used MPT for voltages it did not work for me.
I change it here. Correct place?
It appears Igor ran 1.2v.
This card has boosted as high as 2893 briefly in GPU-Z.
What is your auto OC in Wattman? It may give an indication of silicon quality.
2574 is this card's auto OC.


----------



## LtMatt

Stag said:


> When i use MPT for voltages.It did not work for me.
> I change it here.Correct place?
> Appears igor ran 1.2v
> This card has boosted as high as 2893 briefly in gpuz.
> What is your auto OC in wattman?May give a indication of silcone quality.
> 2574 is this cards auto oc.
> View attachment 2472204


I've not tested adding more voltage, only reducing it. But i can confirm that works.


----------



## danny9428

Stag said:


> Am i missing something?Set card to 1.0v 500-2600 gaming.Runs fine there.Voltages & clocks are all over the place.
> When setting volts in wattman.Is it min,max or other?
> When setting mhz in wattman.Is it min,max or other?


If I remember correctly, setting either power or voltage too low makes the card try to throttle itself down instead of crashing outright like older cards do.
You'll need to monitor FPS and benchmark scores to see if the Vcore and power limit you've punched in are actually good for the clock speed you've targeted
(i.e. a clock curve as flat and as close to the target clock as it can be).

Of course, if you're too aggressive with the settings you'll end up with a Radeon Software crash and reset to defaults (sometimes a black screen where you have to restart the PC).



Stag said:


> When i use MPT for voltages.It did not work for me.
> I change it here.Correct place?
> Appears igor ran 1.2v
> This card has boosted as high as 2893 briefly in gpuz.
> What is your auto OC in wattman?May give a indication of silcone quality.
> 2574 is this cards auto oc.
> View attachment 2472204


Mine is 2579MHz.
I feel anything between 1.15 and 1.175 should do most of the tricks below 400W power,
though I have not tried overvolting (it's not what air coolers can handle, sadly) so I can't give any info either.


----------



## danny9428

ZealotKi11er said:


> I have seen some 6800 XT reach 2800 but no 6900 XT come close to 3000.


With a little chill on water, 6900XTs would probably slam into the 3000 ceiling.
We just need more people to fiddle with these cards (especially the AIB ones) in MPT.

The ASUS Strix LC 6800XT easily gets close to its 2800MHz ceiling, so I believe the same story would happen with 6900XTs.


----------



## ZealotKi11er

danny9428 said:


> With a little chill on water, 6900XTs probably would slam the 3000 ceiling
> We just need more people to fiddle with these cards (especially the AIB ones) on MPT
> 
> The ASUS Strix LC 6800XT easily get close to it's 2800Mhz Ceiling so I believe same story would happen on 6900XTs


Temps will not help that much. At 2800 most cards are at their limit. 3000MHz will not be possible at 1.175v. Now, if you go over 1.175v, that's a different story.


----------



## kazukun

I was able to get my 6900XT reference card to run at 2800 by simply cooling it with water.








[Official] AMD Radeon RX 6900 XT Owner's Club (www.overclock.net): "Did some testing on my water chiller with the non-reference Asrock Phantom 6900xt. max clocks were 2850/2150Mhz. Post that score. It will beat the top 3090 score."


----------



## No-one-no1

Stag said:


> When i use MPT for voltages.It did not work for me.
> I change it here.Correct place?
> Appears igor ran 1.2v
> ...


I'm really bad at internet searches, apparently! Do you have a link to Igor's 1.2v? I need to figure out how to use MPT for overvolting.


----------



## ZealotKi11er

No-one-no1 said:


> I'm really bad at internet searches apparently! Do you have a link to Igors 1.2v? I need to figure out how to use MPT for overvolt


You can't. I have seen only one person do it, with an aftermarket Asrock 6900 XT card.


----------



## Stag

No-one-no1 said:


> I'm really bad at internet searches apparently! Do you have a link to Igors 1.2v? I need to figure out how to use MPT for overvolt


Head over to Igor's Lab; there are videos on the topic.
_der8auer_ increased the voltage also.


----------



## ZealotKi11er

Stag said:


> Head over to igors lab.Videos on topic.
> _der8auer _increased voltage also.


It would be interesting to know which GPUs they used. MBA cards you can't overvolt. No card should be overvoltable unless you do it externally, not with soft mods (MPT).


----------



## NeeDforKill

Hello guys. Soon I will get my PowerColor 6900XT Red Devil. Any hints for getting maximum performance without watercooling? I am using an open-bench STREACOM BC1.
Happy New Year!


----------



## ZealotKi11er

NeeDforKill said:


> Hello guys. Soon i will get my PowerColor 6900XT Red Devil any hints for start to get maximum performance without watercooling. I am using openstand STREACOM BC1.
> Happy New Year!


Increase power, increase core clk.


----------



## chris89

Why aren't there any Luxmark results for the 80-CU AMD Radeon RX 6900 XT? Or is the NVIDIA RTX 3090 just that much better that it puts the RX 6900 XT to shame?


----------



## Stag

chris89 said:


> Why aren't there any Luxmark results of the 80 CU AMD Radeon RX 6900 XT? Or is the NVIDIA RTX 3090 just that much better, it puts the RX 6900 XT to shame?


It's not a popular benchmark; like asking for Sandra's GPGPU benchmark. Google is your friend.


----------



## chris89

Stag said:


> Not a popular bench mark.Like asking for SANDRA's gpgpu benchmark.Google is your friend.


I'd love to see a result for Lux Ball on the results page. The 3090 is dominating on Luxmark v3.0.


----------



## LtMatt

chris89 said:


> I'd love to see a result for Lux Ball on the results page. The 3090 is dominating on Luxmark v3.0.


Is there a bench thread?

Can you link to the correct version to download, as the latest version appears to be later than 3.0?

EDIT

Here's a stock 6900 XT run. 

Seems like it only detects half the compute units.


----------



## chris89

LtMatt said:


> Is there a bench thread?
> 
> Can you link to the correct version to download as the latest version appears to be later than 3,0.
> 
> EDIT
> 
> Here's a stock 6900 XT run.
> 
> Seems like it only detects half the compute units.


Top LuxBall HDR results | LuxMark

Looks like it's performing about as fast as a Radeon VII. I wonder why it's only detecting half the compute units. Weird.

With a 2660MHz core clock I think it would break 100,000 points and beat the 3090 with all 80 compute units working.


----------



## ZealotKi11er

It has 40 WGP.


----------



## chris89

ZealotKi11er said:


> It has 40 WGP.


What's that mean?


----------



## helis4life

chris89 said:


> Whats that mean?


A Work Group Processor (WGP) consists of 2 compute units. 40 WGPs = 80 CUs.
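So a tool that enumerates WGPs instead of CUs isn't actually missing half the hardware; the conversion is just a factor of two:

```python
# RDNA 2 pairs compute units: 1 Work Group Processor (WGP) = 2 CUs.
def wgp_to_cu(wgp: int) -> int:
    return wgp * 2

def cu_to_wgp(cu: int) -> int:
    return cu // 2

print(wgp_to_cu(40))  # 80 CUs, i.e. a full RX 6900 XT
print(cu_to_wgp(72))  # 36 WGPs, i.e. an RX 6800 XT
```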


----------



## helis4life

I'm having a strange issue with my 6900xt. When waking the monitor after it has turned off, the res is 1280x800, with no option to select the proper res (3440x1440) in Windows display settings either. The only way to fix it is to restart.

Has happened on two different Windows 10 installations in two different computers, same GPU. Anyone else experiencing this?


----------



## majestynl

helis4life said:


> I'm having a strange issue with my 6900xt. When waking the monitor up after it has turned off the res is 1280 X 800. No option to select the proper res on windows display settings either(3440x1440). Only way to fix it is to restart.
> 
> Has happened on two different windows 10 installations in two different computers, same GPU. Anyone else experiencing this?


Hmm, sounds shat. Hopefully not HW related.
Looks like the card doesn't properly come out of its sleep states.

Did you check if the 6900xt was still recognized while displaying 1280x800? Check the HW list; even if it's listed, you don't want to see the famous mark icon next to it.

And what about older/other drivers? And did you use MPT before the issue? Asking this because I get the feeling things can get borked while using the tool. After a while a few settings were locked for me (Mem 2150) and I couldn't change them anymore.


----------



## helis4life

majestynl said:


> Hmm sounds shat. Hopefully not HW related.
> Looks like the card doesnt properly get out of his sleeping states.
> 
> Did you check if the 6900xt was still recognized while displaying 1280x800?
> 
> And what about older/other drivers ? And did you used MPT before the issue ? Asking this because I get the feeling things could get borked while using the tool. After a while few setting where locked with me. (Mem 2150). Couldn't change anymore.


Yeah, it still recognises the 6900xt and the monitor. Happens with 12.1 and 12.2. It doesn't always do it; it usually happens when nothing is open in Windows, i.e. a blank desktop. If a program is open, e.g. 3DMark, it won't do it.


Also, I'm running an ultrawide [email protected] and the VRAM is running at max clock at this refresh rate. Any way to sort this out? I tried the method of changing the display timings and reducing the refresh to 142, but that doesn't seem to work at this res.


----------



## majestynl

helis4life said:


> Yeah still reconqnises 6900xt and monitor. happens with 12.1 and 12.2. doesn't always do it, usually happens when there is nothing open in windows, ie a blank desktop. If a program is open it won't do it, ie 3d mark.


You could try disabling the screensaver, or setting it to blank?

I also remember there was something about installing the monitor drivers to solve certain issues.



helis4life said:


> Also I'm running an ultrawide [email protected] and the vram is running at max at this refresh. Anyway to sort this out? Tried the method of changing the display timing and reducing the refresh to 142 but that doesn't seem to work at this res


Does it bother you? If you don't get issues, then never worry when a system is using all the available RAM; that's the whole point of RAM.


----------



## ZealotKi11er

helis4life said:


> Yeah still reconqnises 6900xt and monitor. happens with 12.1 and 12.2. doesn't always do it, usually happens when there is nothing open in windows, ie a blank desktop. If a program is open it won't do it, ie 3d mark.
> 
> 
> Also I'm running an ultrawide [email protected] and the vram is running at max at this refresh. Anyway to sort this out? Tried the method of changing the display timing and reducing the refresh to 142 but that doesn't seem to work at this res


That is normal. Try 120Hz. The memory is not fast enough to keep up with 144Hz while running at its idle clocks, so it stays maxed.


----------



## Skinnered

I also received my RX 6900 XT, and it's already doing 2600-2700MHz with the GPU power limit raised to 300W with MPT and slightly increased fan speeds.
I also plan to cool it on water, but I've seen no AIO, like the Eiswolf 2, that currently fits. EK's complete solution, the "EK-Quantum Reaction AIO RX 6800/6900 D-RGB P240 - AMD Radeon Edition", is a bit pricey for ~15%? extra power.
Does anybody know a good AIO available now?


----------



## DirtyScrubz

kazukun said:


> I was able to get my 6900XT reference card to run at 2800 by simply cooling it with water.
> 
> [Official] AMD Radeon RX 6900 XT Owner's Club
> 
> Did some testing on my water chiller with the non-reference Asrock Phantom 6900xt. max clocks were 2850/2150Mhz. Post that score. It will beat the top 3090 score.
> 
> www.overclock.net


Just the thing I came to see. I have a Merc 6800 XT that hits 2750MHz on air that I'm sending back, since I managed to get a 6900 XT today via BB. I wanted a reference card so I could watercool it, since the Merc will sadly never get an EK block. Hopefully my reference card under water can OC well too.


----------



## ZealotKi11er

DirtyScrubz said:


> Just the thing I came to see. I have a Merc 6800 XT that hits 2750MHz on air that I'm sending back, since I managed to get a 6900 XT today via BB. I wanted a reference card so I could watercool it, since the Merc will sadly never get an EK block. Hopefully my reference card under water can OC well too.


Water did not make my card OC any higher. It just lets it hold that OC no matter the load.


----------



## Starkinsaur

ZealotKi11er said:


> Water did not make my card OC any higher. It just lets it hold that OC no matter the load.


You found it was reducing clocks due to high hotspot temp before the water block went on?
I'm finding exactly that on the stock (ref) cooler in heavy loads like Superposition 4K. Hoping this EK block with liquid metal will keep the temp down so it's not throttling itself so hard. Will report back in a few days.


----------



## ir88ed

I would like to use MPT to increase my power limit. Card is under an EK wb and runs in the low 40's when the card is boosted to 2650 or so at a set voltage of 1020mV.

Not new to OC, but I am new to AMD gpu OC, so I am flailing helplessly here trying to learn the process beyond just fiddling with the sliders in the radeon software.

I have tried to follow along with Igor's instructions, but haven't been successful. I saved out the BIOS from GPU-Z, modified it in MPT, and saved it out as a *.mpt file. It looks like I need to load the original *.rom file into RBE, then apply the *.mpt settings, but when I try to load that BIOS file I get a "not supported" error. Sapphire 6900 XT, if it helps.
I also tried modifying the soft power tables using the "Write SPPT" button in MPT, but that didn't seem to do anything.

Any help is appreciated.


----------



## majestynl

ir88ed said:


> I would like to use MPT to increase my power limit. Card is under an EK wb and runs in the low 40's when the card is boosted to 2650 or so at a set voltage of 1020mV.
> 
> Not new to OC, but I am new to AMD gpu OC, so I am flailing helplessly here trying to learn the process beyond just fiddling with the sliders in the radeon software.
> 
> I have tried to follow along with Igor's instructions, but haven't been successful. I saved out the BIOS from GPU-Z, modified it in MPT, and saved it out as a *.mpt file. It looks like I need to load the original *.rom file into RBE, then apply the *.mpt settings, but when I try to load that BIOS file I get a "not supported" error. Sapphire 6900 XT, if it helps.
> I also tried modifying the soft power tables using the "Write SPPT" button in MPT, but that didn't seem to do anything.
> 
> Any help is appreciated.


Silly question, but did you download the newest version of MPT, which has support for the 6900 XT?


----------



## ir88ed

majestynl said:


> Silly question, but did you download the newest version of MPT, which has support for the 6900 XT?


Yes, I am running version 1.3.2 downloaded from Igor's website.
The error occurs when I try to load the original BIOS (Navi21.rom) into RBE. I just get a "not supported" error.


----------



## Starkinsaur

RBE isn’t used for these MPT mods

Use GPUZ to download the rom from the gpu 
Load the rom into MPT
Make changes in MPT
Write sppt
Reboot


----------



## ir88ed

Starkinsaur said:


> RBE isn’t used for these MPT mods
> 
> Use GPUZ to download the rom from the gpu
> Load the rom into MPT
> Make changes in MPT
> Write sppt
> Reboot


Thank you. I did not reboot. I am only changing the "GPU power limit" to 320 to increase total power. Does that sound about right?


----------



## HeLeX63

ir88ed said:


> Thank you. I did not reboot. I am only changing the "GPU power limit" to 320 to increase total power. Does that sound about right?


Remember, setting it to 320W assumes no additional power-limit percentage applied in Radeon software. So if you then apply +15%, it will be 15% on top of 320W, i.e. 368W. Quite a lot if you are not on water cooling, so increase fan speeds to compensate.
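To make that stacking explicit, here's a minimal sketch (my own illustration, not any AMD tool) of how a base limit set in MPT combines with the Radeon software slider:

```python
def effective_power_limit(mpt_base_w: float, slider_pct: float) -> float:
    """The Radeon software power slider is a percentage applied on top of
    whatever base GPU power limit MPT wrote, so the two stack
    multiplicatively."""
    return mpt_base_w * (1 + slider_pct / 100)

# 320 W set in MPT plus the +15% slider in Radeon software:
print(effective_power_limit(320, 15))  # 368.0
```

The same arithmetic explains why a stock 281W board limit with +15% lands at about 323W.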


----------



## Starkinsaur

Further to that, remember that you’re setting the power limit for only one part of the chip there. So you’ve got the whole rest of the board to cool too. Another ~40W probably. I reckon you’ll start to find that temperature is what’s preventing higher frequency at that point (320W gfx)


----------



## No-one-no1

Starkinsaur said:


> RBE isn’t used for these MPT mods
> 
> Use GPUZ to download the rom from the gpu
> Load the rom into MPT
> Make changes in MPT
> Write sppt
> Reboot


Heads up: On my setup with a 6900xt reference card, adjusting the slider in the radeon software afterwards would not work. The power limit from MPT works though, if "left alone".
Also, I tried applying other changes. Nothing else worked correctly. Increasing the max voltage will limit the gpu core to 500MHz. Increasing that lower limit to 600MHz will not boot into windows any more etc. (safe mode boot and MPT reset from there fixed this)
I could only get the power increase to work. This required going into the radeon software to re-apply the other sections to get the card to work as "normal", with just the increased power limit.


----------



## HeLeX63

Is anyone else noticing that in some games, at least at 21:9 3440x1440 on max settings, GPU usage is totally trash and unstable? Far Cry 5 shows 40 to 80%, and Destiny 2 is unplayable with 50fps drops and stuttering.

Refer to my youtube video. Notice the GPU Utilization in the top left corner.


----------



## LtMatt

HeLeX63 said:


> Is anyone else noticing that in some games, at least at 21:9 3440x1440 on max settings, GPU usage is totally trash and unstable? Far Cry 5 shows 40 to 80%, and Destiny 2 is unplayable with 50fps drops and stuttering.
> 
> Refer to my youtube video. Notice the GPU Utilization in the top left corner.


The issue is your CPU, my friend. If you switch to a Ryzen 5800X/5900X/5950X you will get a locked 99% GPU utilisation in this bench. Your Intel CPU is a little dated in comparison.

I've spent a lot of time playing this on a 5950X and a 6900 XT, and it's locked at 99% utilisation 95%+ of the time.

If you use the resolution slider in game and increase it to 1.2-1.5, you will see GPU Utilization climb as the bottleneck moves from your CPU to the GPU. 

Nice video btw.


----------



## HeLeX63

LtMatt said:


> The issue is your CPU my friend, if you switch to a Ryzen 5800X/5900X/5950X you will get a locked 99% GPU utilization in this bench. Your Intel CPU is a little dated in comparison.
> 
> I've spent a lot of time playing this on a 5950X and a 6900 XT and it's locked Utilization at 99% for 95%+ of the time.
> 
> If you use the resolution slider in game and increase it to 1.2-1.5, you will see GPU Utilization climb as the bottleneck moves from your CPU to the GPU.
> 
> Nice video btw.


Thanks for the reply. Whoa, I seriously didn't think my 8700K would be that much of a bottleneck? At 1080p I could understand it, but at a resolution just under 4K it still seems odd. I tried upping the resolution a bit; it yielded the same FPS but higher GPU usage. So maybe it is the CPU after all? In that case, get a 5900X or wait for 11th Gen Intel?


----------



## riky3003

HeLeX63 said:


> Is anyone else noticing that in some games, at least at 21:9 3440x1440 on max settings, GPU usage is totally trash and unstable? Far Cry 5 shows 40 to 80%, and Destiny 2 is unplayable with 50fps drops and stuttering.
> 
> Refer to my youtube video. Notice the GPU Utilization in the top left corner.


You are not alone. I can't speak for Far Cry, but Destiny 2 has been terrible since I got a 6900 XT; for reference, I play at 1440p with a 3600.

Same symptoms as you describe: low GPU utilisation and very poor FPS, especially in certain locations, such as the Tangled Shore.

A friend of mine with a 6800 XT and 5900X is in the same situation.

The problem seems to be common to many system configurations, but it appears worse on AMD GPUs, and especially bad on the 6xxx series as they can't roll back to old drivers.

At least Bungie seems aware of it and is investigating:








[BNG] PC FPS Issues Megathread - UPDATE June 22 > Help | Forums | Bungie.net
www.bungie.net


----------



## LtMatt

HeLeX63 said:


> Thanks for the reply. Whoa, I seriously didn't think my 8700K would be that much of a bottleneck? At 1080p I could understand it, but at a resolution just under 4K it still seems odd. I tried upping the resolution a bit; it yielded the same FPS but higher GPU usage. So maybe it is the CPU after all? In that case, get a 5900X or wait for 11th Gen Intel?


Well, I can only speak for Far Cry 5; I've not tried the other game. And Far Cry 5 does have a few locations on Jacobs Island where GPU utilisation drops to 80% briefly, but on the whole it's locked at 99%.
I run a 6900 XT at 5120x1440, so slightly up from yours.

EDIT

I could put up a video if you like? I'll need to download the game so may take a couple of days or so.


----------



## No-one-no1

HeLeX63 said:


> Is anyone noticing in some games, at least at 21:9 3440x1440 on Max Settings, where GPU usage is totally trash and unstable ? Far Cry 5, shows 40 to 80% and Destiny 2 is unplayable with 50fps drops and stuttering ?
> 
> Refer to my youtube video. Notice the GPU Utilization in the top left corner.


I initially had issues like this in PUBG. I hadn't noticed it much with a 2070.
It was alleviated significantly by adjusting the minimum MHz slider in the Radeon tuning software.
However, the problem was completely solved by stabilising the RAM overclock. The XMP profile was not fully stable when running 4 sticks in dual rank. A stable RAM overclock, and I'm assuming CPU overclock also, is required.
IntelBurnTest, the DRAM Calc mem speed test, and P95 are your friends here.


----------



## HeLeX63

LtMatt said:


> Well I can only speak for Far Cry 5, I’ve not tried the other game. And Far Cry 5 does have a few locations on Jacobs Island where GPU utilisation drops to 80% briefly, but on the whole it’s 99% locked.
> I run a 6900 XT at 5120x1440 so slightly up from yours.
> 
> EDIT
> 
> I could put up a video if you like? I'll need to download the game so may take a couple of days or so.


I plugged my PC into my 4K LG OLED TV and had amazing performance: 90 to 120fps. Plug into my 3440x1440 34" monitor and I get the same or even worse performance, when I was hoping for a smooth 144Hz, which it didn't quite make. The frame rate hardly moved, which was something very interesting I noticed. If a game is unoptimised and I get poor performance, for example Destiny 2, I'm OK with that, knowing it's not a result of my system. But in Far Cry 5 I might have room to upgrade and remove the bottleneck.

I increased the Far Cry 5 resolution scale to 1.3. Performance increased and the GPU pegged at 85 to 99%. So if this title is CPU-bound, I'm guessing I'd be having similar issues in at least 75% of other titles.


----------



## newls1

Guess I'm joining this club. Walked into MC this afternoon to just "LOOK" and see if they happened to have any GPUs, as it was explained to me today that their website doesn't always show their current stock. So I throw my damn face mask on and casually walk in, having already told myself not to be pissed because I always have the worst luck anyway, but it was only a 30min drive to my MC. Walk down the GPU aisle: all Nvidia cards sold, 56 billion RX 580/570s, and then BAM!! Right in front of my face are shelves full of 6800s (looked like 10 maybe), then 1 very lonely 6900 XT. I stood guard and waited for a sales shark to come at me with a key! Snagged her up, he took it to the front, gave me a pick slip, and the best part ensued... Price for the GPU was 1379.00, but after my first-responder discount and the 5% off MC incentive for using my MC charge card, I paid 950.82!! Not bad for RTX 3090 performance (just wish I had DLSS, but maybe AMD will come out with something similar). So I'm pretty happy with the purchase. Gonna drive her around for a few days, then waterblock it.


----------



## newls1

Also, can someone tell me if this Asus "TUF" 6900XT uses a reference PCB so I can order a waterblock for it? Appreciate it


----------



## 6u4rdi4n

newls1 said:


> Also, can someone tell me if this Asus "TUF" 6900XT uses a reference PCB so I can order a waterblock for it? Appreciate it


It's a custom PCB.


----------



## newls1

Yeah, found this out. Guessing I'm going to have to play the waiting game for a waterblock now...


----------



## newls1

Can someone please help me with this question: what is the purpose of the "min clock speed" slider in the OC section of the drivers, and what is the preferred speed to set it at for a 6900 XT? Thank you.


----------



## Tonza

Just installed mine, loving the card. Next week I will build a full custom loop.


----------



## majestynl

ir88ed said:


> Yes. I am running version 1.3.2 downloaded from igor's website
> The error occurs when I try to load the original bios (Navi21.rom) into RBE. Just get an error of "not supported".


As another member already said, RBE doesn't support these cards yet; RBE is for baking a ROM. With MPT you can read the power-play tables, edit them, and then write them to your registry, so the Radeon software will be, let's say, modded.




newls1 said:


> yeah, found this out. guessing im going to have to play the waiting game for a waterblock now...


Don't wanna ruin your party, but I don't think the TUF one will get a block.

Sapphire / Trio / Red Devil owners are already lucky this time. Alphacool makes them!


----------



## newls1

majestynl said:


> As another member already said, RBE doesn't support these cards yet; RBE is for baking a ROM. With MPT you can read the power-play tables, edit them, and then write them to your registry, so the Radeon software will be, let's say, modded.
> 
> 
> 
> 
> Don't wanna ruin your party, but I don't think the TUF one will get a block.
> 
> Sapphire / Trio / Red Devil owners are already lucky this time. Alphacool makes them!


NOOOOOOOOOOO  This is terrible news, I need someone to make a WB for this!


----------



## 6u4rdi4n

newls1 said:


> NOOOOOOOOOOO  This is terrible news, I need someone to make a WB for this!


Hopefully EKWB will make a block for it. They are currently considering it, as far as I know.

I do know the feeling. I managed to snag an XFX Speedster MERC 319 model of the RX 6900 XT. Performance is pretty good, but after having water cooling for years, the stock cooler is loud. It's not bad for an air cooler, but yeah... Doesn't seem like anyone is making a block for this either.


----------



## geriatricpollywog

Tonza said:


> Just installed mine, loving the card, next week i will build full custom loop.
> 
> View attachment 2473155
> View attachment 2473156
> View attachment 2473157


I like the design theme. The photography helps. Black rubber tubing would look great with this setup when you go custom loop.


----------



## Aussiejuggalo

Anyone use a USB-C to DP cable on these things? I was reading that the USB-C interface is basically DP with a different connector, and as long as it's a USB-C to DP cable and not one of those adapter dongles it should work like native DP. I'm trying to avoid running 2 DP and 1 HDMI because that causes issues.

A friend got a reference 6900 XT but also a 6900 XT Red Devil, and offered me the reference when he gets the Red Devil. If the USB-C works like native DP then I'll grab the reference, seeing as I'm gonna watercool it anyway and I'm not sure about blocks for non-reference cards atm; haven't seen much about them.


----------



## newls1

6u4rdi4n said:


> Hopefully EKWB will make a block for it. They are currently considering it, as far as I know.
> 
> I do know the feeling. I managed to snag an XFX Speedster MERC 319 model of the RX 6900 XT. Performance is pretty good, but after having water cooling for years, the stock cooler is loud. It's not bad for an air cooler, but yeah... Doesn't seem like anyone is making a block for this either.


Exactly... I haven't used an air-cooled GPU in my gaming PC in YEARS! I feel like I'm betraying my CPU by having an air-cooled GPU in the same case.


----------



## No-one-no1

newls1 said:


> Guess I'm joining this club.


Sweet find! Let us know if the max mV adjustment in MPT works on it, if you test that. (It won't work on my reference 6900 XT.)


----------



## newls1

I don't know how to use MPT, or where to get it...


----------



## ilmazzo

newls1 said:


> I don't know how to use MPT, or where to get it...











RED BIOS EDITOR and MorePowerTool for Polaris, Navi and Big Navi - MPT 1.3.18 | igor'sLAB
www.igorslab.de


----------



## newls1

I read it from beginning to end, and I'm way too confused. I guess I'll just stick with the OC tools inside the driver.


----------



## DirtyScrubz

Anyone here order one from amd.com and if so how long did it take to ship out?


----------



## Tonza

0451 said:


> I like the design theme. The photography helps. Black rubber tubing would look great with this setup when you go custom loop.


Yeah, I will be using EK ZMT black tubing.


----------



## The EX1

newls1 said:


> NOOOOOOOOOOO  This is terrible news, I need someone to make a WB for this!


The ASUS TUF lineup has always been their "entry level" offering outside of reference. Alphacool has really stepped up with the AIB offerings, so I would just keep an eye out.



6u4rdi4n said:


> Hopefully EKWB will make a block for it. They are currently considering it, as far as I know.
> 
> I do know the feeling. I managed to snag an XFX Speedster MERC 319 model of the RX 6900 XT. Performance is pretty good, but after having water cooling for years, the stock cooler is loud. It's not bad for an air cooler, but yeah... Doesn't seem like anyone is making a block for this either.


Alphacool made a block for XFX AIB 5700 models. Maybe they will make one for the MERC.


----------



## ir88ed

newls1 said:


> I read it from beginning to end, and I'm way too confused. I guess I'll just stick with the OC tools inside the driver.


Took me a while to figure out (with some help from people here).

Basically, you use GPU-Z to save your stock GPU BIOS to your drive.
Open up MPT (make sure you right-click the shortcut and choose 'Run as administrator').
Load your stock BIOS and make sure you choose your GPU in the pull-down on the first tab.
Make some changes (e.g. GPU power = 320).
Click "Write SPPT".
If you did everything right, you should see a success dialog box.
Reboot and the changes will be in effect.

Igor's Lab talks a lot about flashing that BIOS to the card to make the changes permanent(-ish).
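If you want to sanity-check that "Write SPPT" actually landed somewhere, MPT is commonly described as writing the table to the display adapter's registry key as a binary value. A hedged, Windows-only sketch follows; the value name and key layout are my assumptions based on how MPT is usually explained, not official documentation:

```python
# Assumption: MPT's "Write SPPT" stores the modified soft power-play table
# as a binary registry value named "PP_PhmSoftPowerPlayTable" under the
# display adapter class key. Treat the names below as illustrative.
DISPLAY_CLASS_GUID = "{4d36e968-e325-11ce-bfc1-08002be10318}"

def adapter_key_path(index: int) -> str:
    """Registry path for display adapter #index (0000, 0001, ...)."""
    return ("SYSTEM\\CurrentControlSet\\Control\\Class\\"
            + DISPLAY_CLASS_GUID + f"\\{index:04d}")

def has_soft_ppt(index: int = 0) -> bool:
    """Return True if a soft power-play table value exists (Windows only)."""
    import winreg  # stdlib, only available on Windows
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                            adapter_key_path(index)) as key:
            winreg.QueryValueEx(key, "PP_PhmSoftPowerPlayTable")
        return True
    except OSError:
        return False

print(adapter_key_path(0))
```

If the value isn't there after a reboot, the write most likely didn't take (e.g. MPT wasn't run as administrator).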


----------



## ir88ed

I have been using MPT to try to increase my clocks. I am currently stuck in the 2650-ish range (low GPU = 2600, high GPU = 2700). I have the power limit set at 350W with a 15% slider, which allows about 400W or so. Temps are in the mid 50s at peak. Any recommendations on common-sense boundaries for power on these cards?


----------



## OrionBG

ir88ed said:


> I have been using MPT to try to increase my clocks. I am currently stuck in the 2650-ish range (low GPU = 2600, high GPU = 2700). I have the power limit set at 350W with a 15% slider, which allows about 400W or so. Temps are in the mid 50s at peak. Any recommendations on common-sense boundaries for power on these cards?


I guess you are water-cooled? My old 1080 Ti was going up to 650W on water with a no-limit BIOS, and I never saw more than 50C. I guess that if you can keep it cool (let's say below 60C) you can try to push more. (Kingpin cards usually have 1000W-2000W TDP limits, so if an Nvidia GPU can handle that, AMD ones should also be able to.)


----------



## majestynl

ir88ed said:


> I have been using MPT to try to increase my clocks. I am currently stuck in the 2650-ish range (low GPU = 2600, high GPU = 2700). I have the power limit set at 350W with a 15% slider, which allows about 400W or so. Temps are in the mid 50s at peak. Any recommendations on common-sense boundaries for power on these cards?





OrionBG said:


> I guess you are water-cooled? My old 1080 Ti was going up to 650W on water with a no-limit BIOS, and I never saw more than 50C. I guess that if you can keep it cool (let's say below 60C) you can try to push more. (Kingpin cards usually have 1000W-2000W TDP limits, so if an Nvidia GPU can handle that, AMD ones should also be able to.)


Personally, I'm also not afraid of the card handling 400W+.

The issue here is the max voltage.


----------



## NeeDforKill

Got my 6900 XT Red Devil. Any way to get more than 2800? If I set a value above 2800 in Radeon Wattman, it just crashes my driver when I try to game or run any test.
PL +15.

I loaded my BIOS into MPT and it shows a 281W power limit. Is that normal? On the PowerColor site it says the Red Devil has a ~500W power limit.


----------



## ZealotKi11er

NeeDforKill said:


> Got my 6900 XT Red Devil. Any way to get more than 2800? If I set a value above 2800 in Radeon Wattman, it just crashes my driver when I try to game or run any test.
> PL +15.
> 
> I loaded my BIOS into MPT and it shows a 281W power limit. Is that normal? On the PowerColor site it says the Red Devil has a ~500W power limit.


I am trying to find the place where they showed 500W. Can you take a SS?


----------



## NeeDforKill

ZealotKi11er said:


> I am trying to find the place where they showed 500W. Can you take a SS?


I'm sorry to mislead you, bois. I saw this information on hardware news sites.

Tom's Hardware:
"With its Red Devil Radeon RX 6900 XT, PowerColor continued to use a 14+2-phase VRM, but installed what it claims to be 'the industry's best DrMOS as well as high polymer capacitors' to strengthen it and ensure cleaner and more stable power delivery to the GPU. The company also completely re-architected the power circuitry in a bid to feed the card with up to 525 Watts of power from a combination of the PCIe 4.0 x16 slot and three 8-pin auxiliary PCIe power connectors, therefore maximizing its overclocking potential."

TweakTown:
"The new PowerColor Radeon RX 6900 XT Red Devil Limited Edition is a gigantic, devil-ish looking triple-slot, triple-fan behemoth. It has triple 8-pin PCIe power connectors which PowerColor says are good to draw a huge 480W from the card -- a 900W PSU is recommended.

Read more: PowerColor Radeon RX 6900 XT Red Devil Limited Edition is a huge BEAST"

Sorry, I don't know why I thought this info was on the PowerColor site.


----------



## ZealotKi11er

281W + 15% = 323W. From what I have seen, 330-350W lets you run at max voltage, so you can increase that 281W to 300-330W if cooling allows it.


----------



## HeLeX63

NeeDforKill said:


> Got my 6900 XT Red Devil. Any way to get more than 2800? If I set a value above 2800 in Radeon Wattman, it just crashes my driver when I try to game or run any test.
> PL +15.
> 
> I loaded my BIOS into MPT and it shows a 281W power limit. Is that normal? On the PowerColor site it says the Red Devil has a ~500W power limit.


I have the Red Devil Limited Edition too; the MPT table shows 281W for the GPU. I have increased mine to 309W (+15%, so effectively 355W) until my Alphacool WB arrives.


----------



## DirtyScrubz

Wow, the reference 6900 cooler is far from sufficient. Hotspot temp hits 95C in Time Spy pretty fast. Can't wait until Tuesday when my EK blocks arrive; the card badly needs watercooling. Good news is this thing easily hits 2.6GHz in games and maintains it.


----------



## ZealotKi11er

DirtyScrubz said:


> Wow, the reference 6900 cooler is far from sufficient. Hotspot temp hits 95C in Time Spy pretty fast. Can't wait until Tuesday when my EK blocks arrive; the card badly needs watercooling. Good news is this thing easily hits 2.6GHz in games and maintains it.


Hotspot will hit those temps. It should be fine as long as it does not thermal throttle.


----------



## LtMatt

DirtyScrubz said:


> Wow, the reference 6900 cooler is far from sufficient. Hotspot temp hits 95C in Time Spy pretty fast. Can't wait until Tuesday when my EK blocks arrive; the card badly needs watercooling. Good news is this thing easily hits 2.6GHz in games and maintains it.


TJ Max is 110C, so anything below that is fine. At stock, the fan curve is very conservative. The stock cooler is the best yet in my opinion. If you think 95C is bad, you must not have used a reference 290X, which hit 95C edge temperature, never mind junction temperature.

Nvidia only displays edge temps, so you can be sure their junction temperatures will be similar in some cases.


----------



## No-one-no1

NeeDforKill said:


> Got my 6900xt Red Devil any possibility to get more than 2800. If i set up more than 2800 value in Radeon Wattman it just crash my driver when i try to game or any test.
> PL +15
> 
> I loaded my bios to MPT and it shows 281 power limit is that normal? On Powercolor site it says red devil have like ~500w power limit.


Problem is most likely the stock max 1175mV from 2600MHz and up.
Have you tried increasing the max core voltage with MPT? (power limit increase with MPT works even on reference cards)


----------



## DirtyScrubz

LtMatt said:


> TJ Max is 110C, so anything below that is fine. At stock, the fan curve is very conservative. The stock cooler is the best yet in my opinion. If you think 95C is bad, you must not have used a reference 290X, which hit 95C edge temperature, never mind junction temperature.
> 
> Nvidia only displays edge temps, so you can be sure their junction temperatures will be similar in some cases.


The XFX Merc 6800 XT I had prior to this card never had the hotspot hit 90C even when heavily OC'd. Of course that's an AIB design, but it still did a great job (though the fans could get loud). I'm not going to push this card until I have it under water.


----------



## LtMatt

DirtyScrubz said:


> The XFX Merc 6800 XT I had prior to this card never had the hotspot hit 90C even when heavily OC'd. Of course that's an AIB design, but it still did a great job (though the fans could get loud). I'm not going to push this card until I have it under water.


Yeah XFX really nailed the design this year, one of the best out there in terms of looks and temps. I think it's the only AIB design that is better looking than the MBA version.


----------



## ZealotKi11er

LtMatt said:


> TJ Max is 110C, so anything below that is fine. At stock, the fan curve is very conservative. The stock cooler is the best yet in my opinion. If you think 95C is bad, you must not have used a reference 290X, which hit 95C edge temperature, never mind junction temperature.
> 
> Nvidia only displays edge temps, so you can be sure their junction temperatures will be similar in some cases.


Also, they don't show G6X temp.


----------



## LtMatt

ZealotKi11er said:


> Also, they don't show G6X temp.


Sneaky!

I like to monitor the Edge, Memory and Junction temps via HWINFO64 and Rivatuner OSD.


----------



## NeeDforKill

ZealotKi11er said:


> 281W + 15% = 323W. From what I have seen, 330-350W lets you run at max voltage, so you can increase that 281W to 300-330W if cooling allows it.





No-one-no1 said:


> Problem is most likely the stock max 1175mV from 2600MHz and up.
> Have you tried increasing the max core voltage with MPT? (power limit increase with MPT works even on reference cards)


I tried increasing the voltage above 1175mV; it locks the core at 500MHz.


----------



## NeeDforKill

On air, with my **** RAM and 8700K, I was able to hit 10300 graphics points in TSE.


----------



## AvengerUK

It's been a long time since I've had an AMD card - managed to snag a 6900XT and trying to learn undervolting using the AMD CC

No idea if I'm on the right track...but:

1jmwkR2.png (1578×1163) (imgur.com) 

This is what I've managed so far:

Min 2419 Max 2519
1080mv
+15%
107% VRAM

Port Royal:
10 424


----------



## No-one-no1

NeeDforKill said:


> I tried increasing the voltage above 1175mV; it locks the core at 500MHz.


Oh crap. The Red Devil does this also, then. Need to cancel that order.
500MHz, same as my 6900 XT reference model. When I try increasing the minimum MHz to 600, it won't start Windows any more, and I need to reset the MPT stuff from safe mode.
Guess only some AIB models work with higher max volts.


----------



## majestynl

Alphacool postponed again, so I decided to go for EK. Installed the block this weekend and used some leftover pads from the Fuji instead of the low-quality ones from EK.

Temps are okay: around 40C under heavy load. Junction is about ~15C more.
Max power: 368W!

Didn't have time to play that much but got a nice graphic score so far.









I scored 15 151 in Fire Strike Ultra
AMD Ryzen 7 3800X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
www.3dmark.com





Will play more in the coming days. Tomorrow the Dark Hero finally arrives, so I can attach my 5900X to it.  Damn, that mobo was hell to find the last few weeks. Sold out everywhere 😔


----------



## kazukun

How is everyone doing?
Reference card; I ran 3DMark with water cooling.
PL is set to 300.

TIME SPY








I scored 20 155 in Time Spy
AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com






http://imgur.com/HkUJb6I


FIRE STRIKE ULTRA








I scored 15 320 in Fire Strike Ultra
AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com






http://imgur.com/UnL0xWv


----------



## newls1

Need some help. I just assembled another 10900K PC, and using this ASUS TUF 6900 XT I'm having a weird issue. (The PSU is 6 years old; it's an EVGA 1300 Platinum.) As soon as I start the Time Spy benchmark, the PC instantly reboots. I removed the OC on the GPU and Time Spy still reboots the PC, but I can play my game (FC5) for hours with no issues. The PC and GPU are NOT OC'd currently, to eliminate that complication. Looking like a bad PSU, I'm guessing. This all started after I adjusted the "minimum core speed" slider in Wattman. I don't know where to set that slider; any guidance on where to put it would be great. I saw a YouTube vid where a dude said put the minimum slider to 2000 and the max core frequency slider to 2600 (or whatever your max OC will be), but now I'm confused.


----------



## NeeDforKill

newls1 said:


> Need some help. I just assembled another 10900K PC, and using this ASUS TUF 6900 XT I'm having a weird issue. (The PSU is 6 years old; it's an EVGA 1300 Platinum.) As soon as I start the Time Spy benchmark, the PC instantly reboots. I removed the OC on the GPU and Time Spy still reboots the PC, but I can play my game (FC5) for hours with no issues. The PC and GPU are NOT OC'd currently, to eliminate that complication. Looking like a bad PSU, I'm guessing. This all started after I adjusted the "minimum core speed" slider in Wattman. I don't know where to set that slider; any guidance on where to put it would be great. I saw a YouTube vid where a dude said put the minimum slider to 2000 and the max core frequency slider to 2600 (or whatever your max OC will be), but now I'm confused.


It's simple: Time Spy stresses your GPU a lot more. You can see on my screenshot above that it can eat 444 W, where in games it's more like ~300 W.
Try playing with the GPU core sliders in Wattman; if it still crashes, the problem could be the PSU or the GPU.


----------



## LtMatt

newls1 said:


> Need some help. I just assembled another 10900K PC, and with this ASUS TUF 6900 XT I'm having a weird issue (the PSU is 6 years old, an EVGA 1300 Platinum): as soon as I start the Time Spy benchmark the PC instantly reboots. I removed the OC on the GPU and Time Spy still reboots the PC, but I can play my game (FC5) for hours with no issues. The PC and GPU are NOT OC'd currently, to eliminate that complication. Looking like a bad PSU, I'm guessing. This all started after I adjusted the "minimum core speed slider" in Wattman. I don't know where to set that slider; any guidance on where to put it would be great. I saw a YouTube vid where a dude said put the minimum slider to 2000 and the max core frequency slider to 2600 (or whatever your max OC will be), but now I'm confused.


What PSU are you using, and how many amps does it provide on the 12 V rail?

I had a similar issue with a 650 W Seasonic Platinum with a 54 A 12 V rail. I was only able to get it to run once I undervolted to 1.100 V. Any higher would crash and restart the PC.

I'd recommend an 850 W unit with a 70 A 12 V rail at a minimum.

I went for a Corsair HX1000W Platinum in the end, with an 83 A 12 V rail. Overkill, but this thing has almost 95% efficiency in the 400-500 W range.
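As a quick back-of-the-envelope check, a rail's amperage converts to continuous wattage as P = V × A. A minimal sketch (the amp figures are just the examples from this post; the headroom comments are my own rough judgment, not measurements):

```python
def rail_watts(amps: float, volts: float = 12.0) -> float:
    """Continuous power a 12 V rail can deliver: P = V * A."""
    return volts * amps

# Figures from this post: a 54 A rail vs. the recommended 70 A minimum
# and the 83 A rail of the HX1000.
print(rail_watts(54))  # 648.0 W -- tight once transient spikes are factored in
print(rail_watts(70))  # 840.0 W
print(rail_watts(83))  # 996.0 W -- comfortable headroom
```

Compare those numbers against the ~450 W spikes reported earlier in the thread, plus whatever the CPU pulls at the same moment.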


----------



## newls1

It's a 1300 W EVGA, Platinum rated. It's 6 years old and I bought it as "B" stock from EVGA's website back then; it's had a hard life for sure. I ordered a new PSU and it will be here Wednesday. Crossing my fingers that this fixes my issue.


----------



## LtMatt

newls1 said:


> It's a 1300 W EVGA, Platinum rated. It's 6 years old and I bought it as "B" stock from EVGA's website back then; it's had a hard life for sure. I ordered a new PSU and it will be here Wednesday. Crossing my fingers that this fixes my issue.


Which unit did you go with?

Good luck.


----------



## newls1

This is what I bought. Should be a solid unit.






Super Flower Leadex Gold 1300W 80 Plus Gold, ECO Fanless & Silent Mode, Full Modular Power Supply, Dual Ball Bearing Fan, SF-1300F14MG
www.amazon.com


----------



## Pedros

So, I have a bunch of offers for my 3080 Suprim X, and I was thinking about the 6900 XT. Have any of you good people made this kind of move, going from a 3080 to a 6900 XT? If so, what are your findings and general opinion about it?

(Background: I got the 3080 as a one-shot opportunity. My kid needed a new GPU and I gave him my 1080 Ti, and since there was no stock of the 6800 XT at the time I decided to pull the trigger on the 3080, although my plan was to go AMD. And now this happens, with people offering me the same price I paid for it new.)


----------



## LtMatt

newls1 said:


> This is what I bought. Should be a solid unit.
> 
> Super Flower Leadex Gold 1300W 80 Plus Gold, ECO Fanless & Silent Mode, Full Modular Power Supply, Dual Ball Bearing Fan, SF-1300F14MG
> www.amazon.com


108 A on the 12 V rail. Beast PSU and massive overkill, lol. Nice.


----------



## newls1

LtMatt said:


> 108A on the 12V rail, beast PSU and massive overkill Lol, nice.


Super Flower made my EVGA unit, so I have high hopes for solid performance. I just really hope this is in fact my issue. I prematurely removed the PSU from the PC when I should have probed the 5 V/12 V lines with my multimeter under load to check voltages, but too late now. Crossing fingers and hoping for the best.


----------



## newls1

To anyone that has or will be getting an ASUS TUF 6900 XT GPU: I emailed EKWB to see if they have plans for a full-cover water block for this GPU, and they finally responded with a "YES", but they're unsure on the release date. So there is hope out there for me.


----------



## LtMatt

newls1 said:


> Super Flower made my EVGA unit, so I have high hopes for solid performance. I just really hope this is in fact my issue. I prematurely removed the PSU from the PC when I should have probed the 5 V/12 V lines with my multimeter under load to check voltages, but too late now. Crossing fingers and hoping for the best.


I think it's the spikes that cause the issue with insufficient or old PSUs.


----------



## newls1

not sure what that is telling me 😬


----------



## RaEyE

newls1 said:


> not sure what that is telling me 😬


Maybe this recent test might help you.
Unfortunately it does not include an RX 6900 XT, but a 6800 XT with OC on a torture loop shouldn't be too far off from a 6900 XT, I guess.

---EDIT---
Sorry, forgot the actual LINK 😅


----------



## newls1

I read the entire article and feel even dumber now because I didn't understand most of what I read! So basically it's not just the GPU causing this, it's an OC'd CPU too. So is the motherboard calling for the shutdown, or is OCP/OPP in the PSU calling it? If it's the board, surely there is an option in the BIOS I'm missing to disable this damn feature, right?

I'm worried now, having read that, that even with my new PSU this will still happen. Is that a real possibility? Or was that article about aging PSUs and their ability? Guess I'll have to wait and see.


----------



## RaEyE

newls1 said:


> I'm worried now, having read that, that even with my new PSU this will still happen. Is that a real possibility? Or was that article about aging PSUs and their ability? Guess I'll have to wait and see.


It was an article related to your problem. Actually, the article was a follow-up to a less detailed one on how strong a PSU should be.
The former test assessed what PSU would handle the combination of a beefy CPU (a 5900X, if I remember correctly)
and either an RTX 3090 or a 6800 XT, both CPU and GPU overclocked and in a torture loop (FurMark, Prime95, etc.). *LINK*
Even then ~800 W (meaning an 850 W PSU) was enough to handle such a system, including drives and some RGB. So a 1300 W PSU should be more than enough.

The new article simply detailed why some PSUs, although nominally adequate, might not handle the resulting loads of CPU + GPU + board.
This mostly applies to slightly undersized PSUs, or cheap PSUs that represent the bare minimum of what a PSU in the targeted class should offer, or even less.
Spikes are usually handled by the capacitors of your PSU, which, if sized adequately, should "smooth" the load as long as only short spikes (20 ms or less) burden the PSU.
On loads that are prolonged past 20 ms and are above what your PSU is designed for, OCP and/or OPP might trigger, shutting your system down immediately.
In that case, however, your computer would turn off *and stay off*; if OCP or OPP triggers, the computer stays off.

If, however, e.g. the mainboard decides something went wrong, the system restarts.

In general, I'd say your new PSU is more than enough to power your system, nominally.
As was your old one.
Given your system simply restarts after it goes down, it's unlikely the OCP or OPP of your PSU triggered and shut off your system.
As already described, in that case your computer wouldn't turn on again on its own. The failure therefore should be somewhere else.

I had a look back in this thread; you didn't give detailed information about your system apart from CPU and GPU, right?
May I ask for a hardware description of your system? The mainboard especially?

If you have the possibility, you should first start eliminating possible defects.
Personally, I don't think it's simply your PSU that's causing the problems; it's way too beefy to surrender to a single-GPU system,
especially if it's not OC'd. By the way, did you check your Event Log (*LINK*) for possible reasons for a failure/reset?

Most benchmarks I've seen using a 6900 XT or RTX 3090 with similar CPUs are done with a 1200-1600 W PSU.
That would put you right in between. If a 1200 W PSU handles a 10900K + RTX 3090 both with OC, then your 1300 W should do as well.
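The 20 ms rule of thumb above can be turned into a toy model. This is only a sketch of the idea (the sampling, threshold, and trip logic are simplified assumptions, not any real PSU's OCP/OPP implementation):

```python
def psu_trips(load_watts: list[float], rated_watts: float,
              sample_ms: float = 1.0, trip_after_ms: float = 20.0) -> bool:
    """Toy model: protection trips only if the load stays above the rating
    for longer than trip_after_ms; shorter spikes are absorbed by the caps."""
    over_ms = 0.0
    for w in load_watts:
        over_ms = over_ms + sample_ms if w > rated_watts else 0.0
        if over_ms > trip_after_ms:
            return True
    return False

# A 10 ms spike to 900 W on an 850 W unit: absorbed, no trip.
spike = [400.0] * 50 + [900.0] * 10 + [400.0] * 50
print(psu_trips(spike, 850.0))     # False

# A sustained 30 ms overload: trips, and the system stays off.
overload = [400.0] * 50 + [900.0] * 30 + [400.0] * 50
print(psu_trips(overload, 850.0))  # True
```

The key point it illustrates: a protection shutdown stays off, whereas newls1's machine restarts on its own, which points away from OCP/OPP.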


----------



## newls1

RaEyE said:


> It was an article related to your problem. Actually, the article was a follow-up to a less detailed one on how strong a PSU should be.
> The former test assessed what PSU would handle the combination of a beefy CPU (a 5900X, if I remember correctly)
> and either an RTX 3090 or a 6800 XT, both CPU and GPU overclocked and in a torture loop (FurMark, Prime95, etc.). *LINK*
> Even then ~800 W (meaning an 850 W PSU) was enough to handle such a system, including drives and some RGB. So a 1300 W PSU should be more than enough.
> 
> The new article simply detailed why some PSUs, although nominally adequate, might not handle the resulting loads of CPU + GPU + board.
> This mostly applies to slightly undersized PSUs, or cheap PSUs that represent the bare minimum of what a PSU in the targeted class should offer, or even less.
> Spikes are usually handled by the capacitors of your PSU, which, if sized adequately, should "smooth" the load as long as only short spikes (20 ms or less) burden the PSU.
> On loads that are prolonged past 20 ms and are above what your PSU is designed for, OCP and/or OPP might trigger, shutting your system down immediately.
> In that case, however, your computer would turn off *and stay off*; if OCP or OPP triggers, the computer stays off.
> 
> If, however, e.g. the mainboard decides something went wrong, the system restarts.
> 
> In general, I'd say your new PSU is more than enough to power your system, nominally.
> As was your old one.
> Given your system simply restarts after it goes down, it's unlikely the OCP or OPP of your PSU triggered and shut off your system.
> As already described, in that case your computer wouldn't turn on again on its own. The failure therefore should be somewhere else.
> 
> I had a look back in this thread; you didn't give detailed information about your system apart from CPU and GPU, right?
> May I ask for a hardware description of your system? The mainboard especially?
> 
> If you have the possibility, you should first start eliminating possible defects.
> Personally, I don't think it's simply your PSU that's causing the problems; it's way too beefy to surrender to a single-GPU system,
> especially if it's not OC'd. By the way, did you check your Event Log (*LINK*) for possible reasons for a failure/reset?
> 
> Most benchmarks I've seen using a 6900 XT or RTX 3090 with similar CPUs are done with a 1200-1600 W PSU.
> That would put you right in between. If a 1200 W PSU handles a 10900K + RTX 3090 both with OC, then your 1300 W should do as well.


Thank you for such a detailed reply. The PC I just built is using an MSI Z490 Tomahawk board (10900K OC'd to 5.1 GHz @ 1.30 V), 2x16 GB DDR4-4000, and the 6900 XT. If it's the board shutting off the system, can I disable this "option"??


----------



## dagget3450

LtMatt said:


> I think it's the spikes that causes the issue with insufficient or old PSUs.
> View attachment 2473609


Oh man, that's crazy. I know my Vega FEs in mGPU ran okay, but recently started having issues with the top one failing even at stock in benchmarks/games. I thought something was wrong with the card, but after I removed the 8-pin cables from the PSU I found some of the contacts looked black. Last time I had this type of issue was with GTX 480s in quad-fire; melted an ATX cable back then. Luckily in my case the PSU I have is a LEPA G1600. I hope I don't have this issue with the 6900 XT in the near future. So far I am good in testing, but man, those are some crazy spikes in wattage.


----------



## Duvar

I hope my 500W bequiet psu will be enough to run the card undervolted + ryzen 3600 hard undervolted.
Card should arrive today.


----------



## RaEyE

dagget3450 said:


> [...] but after I removed the 8-pin cables from the PSU I found some of the contacts looked black.


Ok, that's quite a heavy result you got there.

@newls1 Checking the cables nonetheless is an easy and fast possibility.
On that matter, be sure you use 2 separate PCIe cables on your PSU, regardless of whether it's a mono- or multi-rail PSU.

Have you tried running your benchmark with everything, including the CPU, at stock?
Make sure to save your OC settings before resetting to stock. It always bugs me when I lose working settings for an OC.


----------



## LtMatt

newls1 said:


> not sure what that is telling me 😬





dagget3450 said:


> Oh man, that's crazy. I know my Vega FEs in mGPU ran okay, but recently started having issues with the top one failing even at stock in benchmarks/games. I thought something was wrong with the card, but after I removed the 8-pin cables from the PSU I found some of the contacts looked black. Last time I had this type of issue was with GTX 480s in quad-fire; melted an ATX cable back then. Luckily in my case the PSU I have is a LEPA G1600. I hope I don't have this issue with the 6900 XT in the near future. So far I am good in testing, but man, those are some crazy spikes in wattage.


I think you'll be okay. Aside from the spikes, the 6900 XT draws less power than the 3090 and the 3080 on average in gaming. It also loves an overclock and an undervolt, so you can bring the power draw down nicely while still overclocking and achieving better-than-stock performance.


----------



## RaEyE

Duvar said:


> I hope my 500W bequiet psu will be enough to run the card undervolted + ryzen 3600 hard undervolted.
> Card should arrive today.


That fits pretty much into what the article I referred to earlier (LINK) describes [5900X + 6800XT @ 500/550W].
Given your CPU is a 3600 and not a 5900X, but you run a 6900 XT, I guess it should work.
A bit tight, but not necessarily too tight. How old is your PSU, and is it a cheap(ish) one?


----------



## Duvar

You get a nice mouse mat and an R button for free!

My 3600 is running at 3.8 GHz and 0.9 V; CB20 max power is below 40 W ^^
Playing at 3440x1440 (115 Hz OC / 100 Hz native). I think I will have to overclock my CPU now; with the 1080 Ti UV, 3.8 GHz was enough.
RAM can run 3800 CL14 with tight subtimings. The CPU can do 4.625 GHz CB20 stable.


----------



## newls1

RaEyE said:


> Ok, that's quite a heavy result you got there.
> 
> @newls1 Checking the cables nonetheless is an easy and fast possibility.
> On that matter, be sure you use 2 separate PCIe cables on your PSU, regardless of whether it's a mono- or multi-rail PSU.
> 
> Have you tried running your benchmark with everything, including the CPU, at stock?
> Make sure to save your OC settings before resetting to stock. It always bugs me when I lose working settings for an OC.


Yes, I always make it a habit of running 2 individual PCIe cables and NEVER use those split cables that combine 2 plugs. And to answer your other question, of course I checked all connections and cables prior to spending $300 on another PSU. I'm not a n00b or a beginner user; I've been building/selling/OCing/tweaking PCs since the 486 days.


----------



## cg4200

I had been trying to get a card since launch day. I only watch Newegg's Twitter feed, and saw a drop Thursday. Many times before I would try to add to cart and get the "oops" problem, or "out of stock, please remove from cart or save"; hate that, as I'm sure everyone does. When I went for the Sapphire 6900 it was gone; I tried the ASUS reference at $1,049.00 and bam, got it. Came yesterday. Heavy card, so happy. Anyone know if there is a new MorePowerTool for the 6900 XT? Thanks


----------



## No-one-no1

Did some retesting with 1250 mV and 1300 mV max in MPT 1.3.3.
Both voltages result in the exact same behavior: clock capped at 500 MHz. But tuning the clock % upwards with the Radeon tool works (100% = 500 MHz; the slider goes to 600% = 3000 MHz). The stock voltage curve is followed up to 1018 mV at 2300 MHz, and the voltage caps there.
With the 1018 mV limit, my reference 6900 XT crashes at a little over 2430 MHz (light workload, just to test the mV/frequency curve).

Edit: setting the max voltage to 1050 mV works as expected; it follows the stock curve to 1050 and caps there. So probably some software max limit that resets it to 1018 mV if the set value is over 1175 mV on the reference model.


----------



## No-one-no1

NeeDforKill said:


> I tried to increase voltage more than 1175 it lock core on 500mhz.


Could you retry with MPT 1.3.3 and by increasing the MHz "%" slider in the radeon tool? As per my test above.
Very curious to see if the red devil has a voltage override limit (or whatever is happening with the reference model).


----------



## RaEyE

newls1 said:


> Yes, I always make it a habit of running 2 individual PCIe cables and NEVER use those split cables that combine 2 plugs. And to answer your other question, of course I checked all connections and cables prior to spending $300 on another PSU. I'm not a n00b or a beginner user; I've been building/selling/OCing/tweaking PCs since the 486 days.


No need to be upset.
I'm not sitting right beside you, so I wouldn't know. Simply checking and ticking off common faults that can happen to anyone.

Did you check the Windows Event Log?
Does your mobo have some kind of log detailing reasons for a reset?
As I mentioned, the system turning off and starting by itself again might be due to the mobo triggering PWR_OK.
What about running the benchmark without any OC (including CPU and RAM)?
What happens if you run benchmarks targeting the CPU, such as Prime95?
Have you looked at the temperatures when the system reset?


----------



## newls1

RaEyE said:


> No need to be upset.
> I'm not sitting right beside you, so I wouldn't know. Simply checking and ticking off common faults that can happen to anyone.
> 
> Did you check the Windows Event Log?
> Does your mobo have some kind of log detailing reasons for a reset?
> As I mentioned, the system turning off and starting by itself again might be due to the mobo triggering PWR_OK.
> What about running the benchmark without any OC (including CPU and RAM)?
> What happens if you run benchmarks targeting the CPU, such as Prime95?
> Have you looked at the temperatures when the system reset?


My apologies, I didn't mean for it to come across that way. To answer your questions:
1. I did NOT check Windows events; didn't even think of that.
2. Not that I know of; I seriously doubt it.
3. I can bench-test my CPU and RAM for days with no issue whatsoever.
4. Prime95: no problem.
5. Temps during Prime95 WITH AVX were 78C with the OC; AIDA64 stability test high 60s, low 70s max.

The system is very high-end and watercooled. My issue really only happens when the GPU is 100% loaded down, then BAM... instant reset!


----------



## RaEyE

No offence taken. My bad, I was a bit thin-skinned.

Please do. There isn't necessarily anything helpful there, since resets as you describe them are often hardware related
and don't necessarily issue a log entry, but it's definitely worth a shot.
Probably not; that's usually only a server feature. But worth a shot.
Great.
Also great.
Excellent temps, no worry there  (78C with AVX, nice)
Ok, let's try this:

Since it seems to only be the GPU, maybe try an older driver (I guess you are on 20.12.2?).
Have you tried limiting GPU power consumption?
Reduce max clocks.
Undervolt (not much, maybe 10-20 mV tops).

Did you try other benchmarks?
Preferably those that don't tax other components too much.
HERE is an overview of multiple possible benchmarks.
- PassMark (separate 2D and 3D bench) and FurMark might be interesting.
What bugs me is the way you describe your resets:

Only happening when the GPU is taxed.
An actual restart, not a power-off (therefore it's unlikely it's actually PSU related; a PSU self-protection mechanism does not turn the PSU back on on its own).
Only in benchmark scenarios (>90% GPU utilization).

Either the GPU has problems with a certain kind of load or is otherwise faulty.
Maybe that particular card can't keep up with the erratic frequency changes under load, but that's a very wild guess.
In both cases, the card should crash without taking down the whole system.

What sounds more likely is that the reset is caused by the mainboard, either as self-protection or because it's actually in some way unstable.
That would explain why the system goes offline immediately, then turns back on again.

Did you try a different PCIe slot?
Is the GPU still stock, meaning, did you already switch to watercooling?
---EDIT---


newls1 said:


> If it's the board shutting off the system, can I disable this "option"??


Beg your pardon, I overlooked this question. Sorry.
If it is the mainboard, then probably not. It is a self-protection feature and, 'if' it is e.g. the PWR_OK signal, a necessity specified by the ATX standards.
I won't rule out that a board targeting OC might allow tightening/loosening the rules on this behavior, but disabling it? I doubt it.


----------



## newls1

What is the reason for the "min frequency" slider in the OC panel? Where should that be adjusted to?


----------



## RaEyE

It's AMD's way of representing a curve.
The min and max frequency sliders, along with a slider for voltage, are the coefficients describing a curve. Very unintuitive, actually.

The "MPT - More Power Tool" tries to represent Navi overclocking in a different way. This article gives an introduction to the topic.

Basically, the min frequency slider is the start point of the curve. You may raise it so that your card won't clock under a certain frequency, if possible.


----------



## newls1

Update: First off, thank you guys for all your help with my "insta-reboot issue" under 3D load. RaEyE, I really appreciate your feedback, and I tried most of everything you suggested. I got my PSU tonight and tossed it in. The very first thing I noticed was NO COIL WHINE at all while just sitting at the desktop. Before, the card emitted a very slight coil whine doing nearly nothing, and I just thought that was normal. I went into AMD's OC tuning in the CP, made sure the card was at stock settings, and ran 3DMark. I'm happy to report ZERO reboots, and the card still didn't make ANY COIL WHINE; it was totally silent (other than the noise from the fans, of course). I'm completely shocked at this point and very happy, as this confirmed my EVGA 1200 Platinum PSU was faulty. To go even further, I went back into the OC panel in the CP, applied 2000 min / 2700 max, maxed out the mem slider, fast timings, fan speeds adjusted appropriately, etc., and re-ran 3DMark. STILL NO COIL WHINE, and the test completed. SHE'S STABLE, FINALLY. Super Flower 1300 FTW!!

Here are a few things I learned from this curveball I was tossed: the 6900 XT took out my EVGA 1200 Plat PSU, and now I have a beautiful dead PSU to add to the shelf! This coil whine issue was strange. I guess the EVGA PSU was sending dirty power to the GPU, and however the power delivery system on this GPU works, the coil whine was some sort of byproduct of it. With the new PSU the card makes ZERO coil whine, so something goes hand in hand here?? Anyways, I'm finally a happy-ass consumer again and can finally enjoy what the purpose of this PC is: GAMES! Here are a few pics to show which PSU I went with, and my fresh new 3DMark score. Thanks again guys


----------



## LtMatt

newls1 said:


> Update: First off, thank you guys for all your help with my "insta-reboot issue" under 3D load. RaEyE, I really appreciate your feedback, and I tried most of everything you suggested. I got my PSU tonight and tossed it in. The very first thing I noticed was NO COIL WHINE at all while just sitting at the desktop. Before, the card emitted a very slight coil whine doing nearly nothing, and I just thought that was normal. I went into AMD's OC tuning in the CP, made sure the card was at stock settings, and ran 3DMark. I'm happy to report ZERO reboots, and the card still didn't make ANY COIL WHINE; it was totally silent (other than the noise from the fans, of course). I'm completely shocked at this point and very happy, as this confirmed my EVGA 1200 Platinum PSU was faulty. To go even further, I went back into the OC panel in the CP, applied 2000 min / 2700 max, maxed out the mem slider, fast timings, fan speeds adjusted appropriately, etc., and re-ran 3DMark. STILL NO COIL WHINE, and the test completed. SHE'S STABLE, FINALLY. Super Flower 1300 FTW!!
> 
> Here are a few things I learned from this curveball I was tossed: the 6900 XT took out my EVGA 1200 Plat PSU, and now I have a beautiful dead PSU to add to the shelf! This coil whine issue was strange. I guess the EVGA PSU was sending dirty power to the GPU, and however the power delivery system on this GPU works, the coil whine was some sort of byproduct of it. With the new PSU the card makes ZERO coil whine, so something goes hand in hand here?? Anyways, I'm finally a happy-ass consumer again and can finally enjoy what the purpose of this PC is: GAMES! Here are a few pics to show which PSU I went with, and my fresh new 3DMark score. Thanks again guys
> 
> 
> View attachment 2474045
> 
> 
> View attachment 2474046


Glad to hear it. I knew that PSU was bad.


----------



## DirtyScrubz

Question for you guys: I put an EKWB block on my reference 6900 XT and the temps are fantastic. However, when I set the min to 2400 and the max to around 2600 MHz, the card briefly clocks to 2600 MHz in 3DMark and then right away clocks back down to the 2400s. On air it was going to 2500-2600 fine, but now on water it's clocking down. Does this have something to do with the fans reading 0 RPM?

Going to reinstall Windows since this thing had an NVIDIA card installed previously.

Edit: Figured it out after the clean install. I guess I should have read more of this thread; the anemic 255 W power limit was holding me back. After using MPT and pushing the power limit way up, the card was hitting and holding 2.6 GHz easily.

Edit 2: I have a brand new Corsair AX1600i and the card definitely puts out some nasty coil whine (edit: turns out it was the mobo VRMs). I didn't notice it when it was on air previously, but I do notice it now with the waterblock. Somehow I don't think the PSU is defective like the other guy's above me.

Timespy: https://www.3dmark.com/3dm/56718116?


----------



## RaEyE

newls1 said:


> Before, the card emitted a very slight coil whine doing nearly nothing, and I just thought that was normal.





newls1 said:


> I'm happy to report ZERO reboots, and the card still didn't make ANY COIL WHINE; it was totally silent





newls1 said:


> I have a beautiful dead PSU to add to the shelf!


Great! Finally that beauty gets to run wild 
Love that seemingly most of your coil whine is gone!
Enjoy it 

Interesting that your PSU had such problems with your setup.
I actually wouldn't go so far as to say your PSU is actually bad, faulty or even dead.
The coil whine you noticed may indeed speak for some unclean voltage (ripple) or a resonance frequency between your VGA and PSU (maybe with even more components involved).
Power supply is an (often) unexpectedly complicated topic. Or it can be.
It would be interesting to see an in-depth review of your PSU, not simply based on its 'horsepower', which it undoubtedly (still) has, but in regards to its behaviour and stability on timings <20 ms.
I especially would like to know how your PSU behaves when spikes from the CPU, GPU and mainboard overlap.
It's really curious. Even with OC, your system shouldn't spike anywhere near what your PSU should be capable of handling.

Well, TL;DR: Happy it works now  Good luck to you.


----------



## MyShadow

Anyone else seeing massive deltas between GPU and GPU hotspot temp when under load? like 40 degrees C?


----------



## No-one-no1

newls1 said:


> What is the reason for the "min frequency" slider in the OC panel? Where should that be adjusted to?


From what I can tell, this is there to keep the card from clocking down during split-second low-use dips in games, etc. Keeping the card from clocking down reduces big stutters.
I'm assuming setting this too high will mess up the behavior at power/temp limits, and can result in the card going down to the 500/0 MHz idle state if it can't stay under the limits while at the min frequency.
If you don't see any frequency drops in-game, this can be left at 0 MHz.
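That clamp-or-idle behavior can be sketched as a toy function (the fallback-to-idle logic is my reading of the post above, not AMD's documented algorithm; the 500 MHz idle value is the one observed on the reference card earlier in the thread):

```python
IDLE_MHZ = 500.0

def effective_clock(requested_mhz: float, min_mhz: float, max_mhz: float,
                    within_limits: bool) -> float:
    """Clamp the requested clock into [min_mhz, max_mhz]; if the card can't
    stay inside its power/temp limits even at min_mhz, it falls back to idle."""
    if not within_limits:
        return IDLE_MHZ
    return max(min_mhz, min(requested_mhz, max_mhz))

# A brief dip to 1800 MHz is held up at the 2400 MHz floor (fewer stutters).
print(effective_clock(1800.0, 2400.0, 2600.0, within_limits=True))   # 2400.0
# Min set too high for the limits: the card drops to the 500 MHz idle state.
print(effective_clock(2500.0, 2400.0, 2600.0, within_limits=False))  # 500.0
```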


----------



## LtMatt

MyShadow said:


> Anyone else seeing massive deltas between GPU and GPU hotspot temp when under load? like 40 degrees C?
> 
> View attachment 2474110


Not really, unless I am running at maximum voltage with power limits removed with MPT on suicide runs.

Typically under load with a 6900 XT MBA I see a 15-20C difference between Edge and Junction (hotspot) temps.

What 6900 XT are you using?


----------



## newls1

we going to see a new driver soon?


----------



## coelacanth

MyShadow said:


> Anyone else seeing massive deltas between GPU and GPU hotspot temp when under load? like 40 degrees C?
> 
> View attachment 2474110


I am running an XFX Speedster MERC 6900 XT bone stock. Max hotspot temp after gaming is usually 98C; max GPU temp is consistently 16 or 17C lower, usually 81C.


----------



## newls1

noticed my memory clocks are not down clocking. Is this normal?


----------



## knightriot

Here's my best result, for daily use and gaming:

Clock in ACO:


----------



## LtMatt

newls1 said:


> noticed my memory clocks are not down clocking. Is this normal?


Depending on the display you are using, yes, it's normal. Typically displays at 75 Hz/144 Hz, and perhaps some others, have a vertical blank timing that requires a high memory clock frequency. It's expected and normal if this is you.

You can work around it by lowering the refresh rate to 120 Hz, or by playing with CRU to adjust the timings so the vertical blank falls within the golden zone that allows the memory to downclock.
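For anyone experimenting with CRU, the length of the vertical blank can be computed directly from the mode timings. A hedged sketch (the example timing numbers are typical made-up values, not any specific monitor's, and the "golden zone" threshold itself is driver-dependent):

```python
def vblank_ms(v_active: int, v_total: int, refresh_hz: float) -> float:
    """Duration of the vertical blanking interval in milliseconds:
    blank lines * time per line, where one full frame of v_total lines
    takes 1/refresh_hz seconds."""
    line_time_s = 1.0 / (refresh_hz * v_total)
    return (v_total - v_active) * line_time_s * 1000.0

# Illustrative 1440p timings: 144 Hz with a short blank vs. a roomier 120 Hz mode.
print(round(vblank_ms(1440, 1481, 144.0), 3))  # 0.192
print(round(vblank_ms(1440, 1525, 120.0), 3))  # 0.464
```

The idea is that the memory controller needs the blank window to retime the clock; stretching `v_total` (more blank lines) or lowering the refresh rate lengthens that window.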


----------



## newls1

OK, I thought I wouldn't screw with a more dialed-in OC, but I can't stand it anymore! No matter if I set the slider for my OC to 2500/2600/2700 MHz in the CP, the GPU won't go over 24xx-ish speeds and seems to be capped around 300ish watts max power usage. So I'm guessing I need to learn how to use the MPT program. I don't want to flash the BIOS and all that jazz; I just want to simply adjust my OC so when I go too far, I can go back. Every article I read talks about flashing the BIOS, etc. Someone offer any assistance for me please, I'm 1/2 ******ed!


----------



## damric

Just curious: how hard was it for you guys to find available cards? I'm trying to get one of the 6800s or XTs and no luck for a while. Any tricks?


----------



## newls1

I just got lucky when I walked into Microcenter; a 6900 XT was sitting right on the shelf...


----------



## MyShadow

LtMatt said:


> Not really, unless I am running at maximum voltage, with power limits removed with MPT on suicide runs.
> 
> Typically underload with a 6900 XT MBA I see 15-20c difference in Edge and Junction (Hotspot) temps.
> 
> What 6900 XT are you using?


ASRock reference/MBA board. It seems unusual, especially considering the undervolt, low ambients, and how quickly it reaches 110 degrees, so I thought maybe it was related to the thermal compound. But then I came across this - The hotspot of the Radeon RX 6800 (XT), hurdles in the thermal grease replacement and the correct assembly sequence | igor´sLAB

Figured maybe I'm just unlucky? The default clock for the card is 2470, though, so it's not the worst sample out there...


----------



## DirtyScrubz

newls1 said:


> OK, I thought I wouldn't screw with a more dialed-in OC, but I can't stand it anymore! No matter if I set the slider for my OC to 2500/2600/2700 MHz in the CP, the GPU won't go over 24xx-ish speeds and seems to be capped around 300ish watts max power usage. So I'm guessing I need to learn how to use the MPT program. I don't want to flash the BIOS and all that jazz; I just want to simply adjust my OC so when I go too far, I can go back. Every article I read talks about flashing the BIOS, etc. Someone offer any assistance for me please, I'm 1/2 ******ed!


It's in the thread. Download GPU-Z, save your BIOS, load it in MPT, make your changes and then write. Reboot and it will apply at the driver level; no flashing necessary.


----------



## DirtyScrubz

MyShadow said:


> Asrock reference / MBA board. Seems unusual especially considering the undervolt, low ambients and how quickly it reaches 110 degrees so I thought maybe it was related to the thermal compound... But then came across this - The hotspot of the Radeon RX 6800 (XT), hurdles in the thermal grease replacement and the correct assembly sequence | igor´sLAB
> 
> Figured maybe I'm just unlucky? Default clock for the card is 2470 though so it's not the worst sample out there...


The pad definitely makes good contact; it's just that the heatsink isn't good enough to handle much of an OC. It does what it's built for, and that's stock settings. If you want to go further, a waterblock is a must.


----------



## LtMatt

MyShadow said:


> Asrock reference / MBA board. Seems unusual especially considering the undervolt, low ambients and how quickly it reaches 110 degrees so I thought maybe it was related to the thermal compound... But then came across this - The hotspot of the Radeon RX 6800 (XT), hurdles in the thermal grease replacement and the correct assembly sequence | igor´sLAB
> 
> Figured maybe I'm just unlucky? Default clock for the card is 2470 though so it's not the worst sample out there...


Yes, looks like you got unlucky with a bad mount, IMO. Unfortunate, as the MBA designs are usually pretty consistent with the temp difference; my 6800 XT was basically identical.


----------



## RaEyE

newls1 said:


> OK, I thought I wouldn't mess with a more dialed-in OC, but I can't stand it anymore! No matter whether I set the OC slider to 2500/2600/2700 MHz in the control panel, the GPU won't go over ~2400 MHz and seems to be capped around 300 W max power usage. So I'm guessing I need to learn how to use the MPT program. I don't want to flash the BIOS and all that jazz; I just want to adjust my OC so that when I go too far, I can go back. Every article I read talks about flashing the BIOS, etc. Can someone offer me some assistance, please?


AMD has pretty much locked down the RX 6000 cards for OC.
The driver, for example, wouldn't allow clocks past 2500 MHz at launch; now the cap is 3000 MHz (a regular OC is unlikely to ever reach this).

Most cards limit on power/temperature pretty heavily.
But a 6900 XT not going past 2500 MHz is new to me.


Currently Igor's Lab is pretty heavily invested in RX 6000 OC, MPT and the Red BIOS Editor. You might look there.

The big Radeon RX 6800 (XT) overclocking and mod guide | Community
The complete Big Navi UV Guide: Undervolting and power saving with the MorePowerTool simply explained | Practice
RED BIOS EDITOR and MorePowerTool for Polaris, Navi and Big Navi

Unfortunately I'm mostly referring to German OC channels regarding the RX 6000 series, so unless you are fine with German, that's pretty much the extent of the specific links I can hand you from the back of my mind.

The usual sources, including this forum, are obviously fine as well for English help.


----------



## newls1

RaEyE said:


> AMD has pretty much locked down the RX 6000 cards for OC.
> The driver, for example, wouldn't allow clocks past 2500 MHz at launch; now the cap is 3000 MHz (a regular OC is unlikely to ever reach this).
> 
> Most cards limit on power/temperature pretty heavily.
> But a 6900 XT not going past 2500 MHz is new to me.
> 
> Currently Igor's Lab is pretty heavily invested in RX 6000 OC, MPT and the Red BIOS Editor. You might look there.
> 
> The big Radeon RX 6800 (XT) overclocking and mod guide | Community
> The complete Big Navi UV Guide: Undervolting and power saving with the MorePowerTool simply explained | Practice
> RED BIOS EDITOR and MorePowerTool for Polaris, Navi and Big Navi
> 
> Unfortunately I'm mostly referring to German OC channels regarding the RX 6000 series, so unless you are fine with German, that's pretty much the extent of the specific links I can hand you from the back of my mind.
> 
> The usual sources, including this forum, are obviously fine as well for English help.


When I get off shift tomorrow morning, I'm gonna try a different approach to this OC. Currently I'm at:
Max voltage (1.175, I think it was)
Min clock speed slider set to 2000
Max clock speed slider set to 2700
Memory slider maxed out @ 2150, with "fast timings" set
Power slider maxed @ +15%
Fans set aggressively to keep temps down.

What I keep seeing in the YouTube videos I've been watching is to DOWNCLOCK the GPU slightly and raise the min clock speed slider to keep the minimum speed as high as possible. So what I'm gonna try is this:

Voltage set to 1.105-1.125
Min clock slider set to 2500
Max clock slider set to 2750
Keep the rest of the settings the same as before and hope for the best. If this doesn't bring my game clocks up to a sustained 2500 MHz+, then I'll mess with MPT. What do you think about trying these new settings?
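
For anyone keeping track of these slider combos, the constraints can be sketched as a tiny profile checker. This is purely illustrative (the class and field names are made up, not part of any AMD tool); the only hard rule is that the min clock slider must stay below the max:

```python
# Illustrative sketch of a Wattman-style OC profile. Names are hypothetical,
# not part of AMD's software; thresholds follow this thread's conventions.
from dataclasses import dataclass

@dataclass
class OcProfile:
    voltage_v: float       # e.g. 1.105-1.125 for a mild undervolt
    min_clock_mhz: int     # floor the card will try to hold
    max_clock_mhz: int     # ceiling the card may boost to
    mem_clock_mhz: int
    power_slider_pct: int  # Wattman power limit offset, e.g. +15

    def warnings(self) -> list:
        issues = []
        if self.min_clock_mhz >= self.max_clock_mhz:
            issues.append("min clock must be below max clock")
        elif self.max_clock_mhz - self.min_clock_mhz < 100:
            issues.append("very tight min/max gap; the card may throttle hard "
                          "when power or temperature limits are hit")
        if self.voltage_v > 1.175:
            issues.append("voltage above the usual 1.175 V slider maximum")
        return issues

plan = OcProfile(voltage_v=1.115, min_clock_mhz=2500,
                 max_clock_mhz=2750, mem_clock_mhz=2150,
                 power_slider_pct=15)
print(plan.warnings())  # [] -> the plan above passes the basic checks
```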


----------



## RaEyE

Yeah, a lot of people are having problems with "too aggressive" settings.
The RX 6000 cards are rather "good" at self-regulation, which can conflict with, let's say, traditional OC approaches.
A softer OC might actually benefit your results, since what you configure is more like a target or guideline for the GPU.

Power draw, temperature and voltage are what dictate what you will actually get.
Some people have noticed (not new for AMD cards) that a moderate undervolt increases performance due to lower temperatures.

Unfortunately I can't say much about concrete values.
My own approach so far is: put it in the rig, test it, then watercool it.

I've been running a 6900 in my gaming rig for about two weeks now.
I got mine mid-December but have only just now had the time to actually play around.
Most of what I know about the RX 6000 series is from OC forums and some tinkering myself.
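
The "undervolt for more performance" effect described above follows from first-order GPU power scaling: dynamic power is roughly proportional to frequency times voltage squared, so under a fixed power cap, lower voltage buys sustainable clock. A toy sketch, with the constant calibrated to numbers reported earlier in this thread; this is an illustration of the trend, not AMD's actual DVFS logic:

```python
# Toy DVFS model: P ~ K * f * V^2, so a fixed power cap implies a maximum
# sustainable clock at a given voltage. K is calibrated so that 1.175 V
# under a ~300 W cap sustains ~2400 MHz, roughly matching the 6900 XT
# behaviour reported in this thread. Illustrative only.
K = 300 / (2400 * 1.175 ** 2)  # watts per (MHz * V^2)

def sustainable_clock_mhz(power_cap_w: float, voltage_v: float) -> float:
    """Clock a given power budget can sustain at a given voltage."""
    return power_cap_w / (K * voltage_v ** 2)

print(round(sustainable_clock_mhz(300, 1.175)))  # 2400 (by calibration)
print(round(sustainable_clock_mhz(300, 1.115)))  # ~2665: the undervolt pays for clock
```

Same 300 W budget, 60 mV less voltage, roughly 250 MHz more sustainable clock, which is in the ballpark of what posters here report after undervolting.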


----------



## newls1

RaEyE said:


> Yeah, a lot of people are having problems with "too aggressive" settings.
> The RX 6000 cards are rather "good" at self-regulation, which can conflict with, let's say, traditional OC approaches.
> A softer OC might actually benefit your results, since what you configure is more like a target or guideline for the GPU.
> 
> Power draw, temperature and voltage are what dictate what you will actually get.
> Some people have noticed (not new for AMD cards) that a moderate undervolt increases performance due to lower temperatures.
> 
> Unfortunately I can't say much about concrete values.
> My own approach so far is: put it in the rig, test it, then watercool it.
> 
> I've been running a 6900 in my gaming rig for about two weeks now.
> I got mine mid-December but have only just now had the time to actually play around.
> Most of what I know about the RX 6000 series is from OC forums and some tinkering myself.


This is the first GPU I CAN'T buy a full-cover waterblock for, because it doesn't use the AMD reference PCB. I've emailed a few companies like EK, and they all respond with the same typical reply: "If market demand justifies a block for this Asus TUF 6900, then we will move forward with a block for it." So until a block hits the market, I'm stuck with air cooling. TBH, temps really aren't terrible. Max temp on the GPU while benching Time Spy was 63C and the hotspot was mid 70s with fans @ 65%, so there's plenty of room for more cooling. I'll try my new approach tomorrow with undervolting, but I'm still confused about the "minimum speed" slider and where to set it; I've just never seen a setup like this. The various YouTube vids I've seen have that slider all over the place. One guy set it to 2700 min and the max slider to 2750 (all on air, mind you), decreased his voltage to 1.1xx, and sustained nearly flat 2700 MHz while playing SOTTR. I was like WOW!


----------



## RaEyE

Yes, that could indeed become difficult.
On one hand the cards are rather rare (obviously), and on top of that there are very few custom cards.

Since the reference design is actually pretty good, or at least performs rather well, a lot of people will adopt AMD's reference cards.
If they can get one, that is.

If water is a necessity for you and a TUF-design full-cover block is not available, there is also the option of using universal blocks.
Not nice looking, but they perform rather well.


----------



## ilmazzo

Bykski usually makes waterblocks for a lot of AIB variants; keep an eye on them.


----------



## newls1

RaEyE said:


> Yes, that could indeed become difficult.
> On one hand the cards are rather rare (obviously), and on top of that there are very few custom cards.
> 
> Since the reference design is actually pretty good, or at least performs rather well, a lot of people will adopt AMD's reference cards.
> If they can get one, that is.
> 
> If water is a necessity for you and a TUF-design full-cover block is not available, there is also the option of using universal blocks.
> Not nice looking, but they perform rather well.


Ive always watercooled everything, so hoping eventually a block comes out


----------



## Aussiejuggalo

Has anyone tried USB-C to DP on the reference cards? I've seen that it's supposed to support DP alt mode, but as usual no reviewer got off their ass to test it.

Might be getting a 6900 XT provided the USB-C port does support alt mode; a friend will be testing it next week if the cable arrives.


----------



## bluezone

The temperatures with the stock air cooler are not good; there's not good contact between the silicon and the thin copper plate at the bottom of the vapor chamber. The vapor chamber slightly flexes and the die seems to be slightly curved.
So I got bored and made up my own partial water block. It doesn't work too badly. There's still about a 24C difference between GPU and junction temperatures, but a max GPU temperature of 51C (junction 75C) isn't too bad. That's with an overclock of 2585 MHz.
Might have to invest in a full water block. Haven't tried MPT yet.


----------



## newls1

I know I've asked a few times about MPT; however, in a few hours I'll finally be off shift and want to play with this OC tool. I'm looking to achieve a steady 2600 MHz+ OC on my Asus TUF 6900 XT, so what settings (numbers) should I input to try to achieve this? I'm very confused about what numbers are appropriate. The card is air-cooled until I can buy a full-cover waterblock for this Asus card (not a reference PCB), but the air cooler seems really good. Thank you for any input.


----------



## MyShadow

LtMatt said:


> Yes looks like you got unlucky with a bad mount imo. Unfortunate as the MBA designs are usually pretty consistent with the temp difference, my 6800 XT was basically identical.


That's what I'm thinking too. Did you end up rectifying the issues with your 6800 XT? If so, do you think the existing thermal pad is reusable, or is it worth sourcing another?


----------



## LtMatt

bluezone said:


> The temperatures with the stock air cooler are not good, there's not good contact between the silicone and the thin copper block at the bottom of the vapor chamber. The vapor chamber slightly flexes and the die seems to be slightly curved.
> So I got bored and made up my own partial water block. Doesn't work too bad. There's still about a 24° difference between GPU and junction temperatures, but a Max GPU temperature of 51° (junction 75°) isn't too bad. That's with an overclock of 2585 Mhz.
> Might have to invest in a full water block. Haven't tried MPT yet.





MyShadow said:


> That's what I'm thinking too. Did you end up rectifying the issues with your 6800 XT? If so, do you think the existing thermal pad is reusable, or is it worth sourcing another?


Sorry, my post was not very clear.

The difference between my Edge and Junction temps is 12-20C at most on both my 6800 XT and my 6900 XT. That's with a clock speed of 2350 MHz MIN and 2450 MHz MAX set in Radeon Software at 1.031v. Typically the difference is 15C or less at these settings.

However, for some reason the 6800 XT memory ran around 5C hotter than my 6900 XT does. I never did figure out why.

I ended up selling the 6800 XT to someone else for what I paid for it once I was able to get a 6900 XT.

The point I tried to make was that I think you just got unlucky with your sample. The difference in Edge/Junction temps seems to be lowest on the MBA design (typically 15-20C) and higher on AIB designs, probably because they use paste rather than a thermal pad. That's just what I've seen so far looking around at reviews and YT videos.
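
The edge/junction heuristic being used here can be expressed as a small helper. The thresholds come from the anecdotes in this thread (a good MBA thermal-pad mount typically shows a 15-20C delta under load; far larger deltas suggest a bad mount), not from any AMD specification:

```python
def mount_quality(edge_c: float, junction_c: float) -> str:
    """Rough read on cooler contact from the edge-to-junction temperature delta.

    Thresholds follow this thread's anecdotes, not an official spec:
    a good MBA mount usually shows <= 20C delta under load; 30C+ points
    to a bad mount (or a heavily overvolted suicide run).
    """
    delta = junction_c - edge_c
    if delta <= 20:
        return "normal"
    if delta <= 30:
        return "borderline"
    return "suspect mount"

print(mount_quality(62, 77))   # 15C delta -> "normal"
print(mount_quality(62, 110))  # 48C delta -> "suspect mount"
```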


----------



## MyShadow

LtMatt said:


> Sorry, my post was not very clear.
> 
> The difference between my Edge and Junction temps is 12-20C at most on both my 6800 XT and my 6900 XT. That's with a clock speed of 2350 MHz MIN and 2450 MHz MAX set in Radeon Software at 1.031v. Typically the difference is 15C or less at these settings.
> 
> However, for some reason the 6800 XT memory ran around 5C hotter than my 6900 XT does. I never did figure out why.
> 
> I ended up selling the 6800 XT to someone else for what I paid for it once I was able to get a 6900 XT.
> 
> The point I tried to make was that I think you just got unlucky with your sample. The difference in Edge/Junction temps seems to be lowest on the MBA design (typically 15-20C) and higher on AIB designs, probably because they use paste rather than a thermal pad. That's just what I've seen so far looking around at reviews and YT videos.


Thanks for the input! I might see what I can find and pull it apart. I've undervolted as far as I can, and once it's been running for a minute or so the clock won't go over 2200 MHz regardless of what it's doing; wattage sits between 220 and 230 to hit that 110C on the hotspot.


----------



## newls1

So, a little update: using Wattman (no MPT adjustments yet) I changed my OC a little, as I posted above, applying a 50 mV reduction in voltage and increasing the min/max sliders as per the attached pic, and I'm now finally getting spikes into the 2.7 GHz range in Time Spy, with 2500-2600+ speeds throughout the demo. I raised my 3DMark score 500 points with this new OC, so I'm very happy here, and unless using MPT can make this OC more steady and consistent, I'm thinking about just leaving it here. I'm guessing I can't sustain 2.7 GHz because I'm hitting my power cap, so if I were to raise my allowed power usage to around 375 W, do you think that would make 2.6-2.7 GHz more steady? I'm attaching my 3DMark score and Wattman settings in case you see anything I should change... thanks.


----------



## LtMatt

MyShadow said:


> Thanks for the input! I might see what I can find and pull it apart. I've undervolted as far as I can, and once it's been running for a minute or so the clock won't go over 2200 MHz regardless of what it's doing; wattage sits between 220 and 230 to hit that 110C on the hotspot.


Just bear in mind that you may make it worse: once you take the cooler off, the thermal pad will almost certainly tear a little. That said, I'm not sure what you have to lose by trying, other than potentially the warranty, since it seems it will throttle with an undervolt anyway.

What is your case airflow like? Plenty of front intake and rear/top exhaust?

If you are sticking with the reference cooler you could ask for an RMA, if you don't mind waiting for a replacement.


----------



## ZealotKi11er

Case airflow is very important; the GPU is dumping 300 W of heat. One thing to note: the higher the (unknown to us) leakage of the GPU, the higher the delta. The same applies to voltage.


----------



## newls1

newls1 said:


> So, a little update: using Wattman (no MPT adjustments yet) I changed my OC a little, as I posted above, applying a 50 mV reduction in voltage and increasing the min/max sliders as per the attached pic, and I'm now finally getting spikes into the 2.7 GHz range in Time Spy, with 2500-2600+ speeds throughout the demo. I raised my 3DMark score 500 points with this new OC, so I'm very happy here, and unless using MPT can make this OC more steady and consistent, I'm thinking about just leaving it here. I'm guessing I can't sustain 2.7 GHz because I'm hitting my power cap, so if I were to raise my allowed power usage to around 375 W, do you think that would make 2.6-2.7 GHz more steady? I'm attaching my 3DMark score and Wattman settings in case you see anything I should change... thanks.
> 
> View attachment 2474442
> 
> 
> View attachment 2474443


So, to quote myself: I just played FC5 for the past 30 minutes (my favorite game ever so far) and game clocks were 2.5-2.65 GHz consistently... that's 250+ MHz above my prior OC settings, and performance was amazing. I'm stoked and really like the 6900 GPU. AMD has always been "different" compared to Nvidia, but I'm glad I walked into MC at the right time, when this lonely single 6900 XT was sitting on the shelf. So far it's been a great purchase.


----------



## LtMatt

newls1 said:


> So, to quote myself: I just played FC5 for the past 30 minutes (my favorite game ever so far) and game clocks were 2.5-2.65 GHz consistently... that's 250+ MHz above my prior OC settings, and performance was amazing. I'm stoked and really like the 6900 GPU. AMD has always been "different" compared to Nvidia, but I'm glad I walked into MC at the right time, when this lonely single 6900 XT was sitting on the shelf. So far it's been a great purchase.


That's one of my favourite games too. I play it using my silent gaming profile: 2350 MHz MIN and 2450 MHz MAX core clock (actual in-game core frequency is just under 2400 MHz) and 2012 MHz memory (actual in-game memory frequency is 2000 MHz+) at 1.012v-1.031v. Power draw is around 220W-250W peak at 5120x1440, all settings maxed out, HQ texture pack, and I enable resolution scaling at 1.3x.


----------



## bluezone

If you use MPT and write a 375 W power limit, just remember that you can add 15% to that with the power slider. That would be 431 W total. You might want to set it a bit lower than that.
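
That arithmetic generalizes: the Wattman power slider is a percentage on top of whatever base limit MPT writes, so the real ceiling is base × (100 + slider) / 100. A quick sanity check (plain arithmetic; no AMD API involved):

```python
def effective_power_w(mpt_limit_w: float, slider_pct: float) -> float:
    """Board power ceiling: the MPT base limit scaled by the Wattman power slider."""
    return mpt_limit_w * (100 + slider_pct) / 100

print(effective_power_w(375, 15))  # 431.25 W -> the 375 W example above
print(effective_power_w(320, 15))  # 368.0 W -> a 320 W base with the slider maxed
```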


----------



## MyShadow

LtMatt said:


> Just bear in mind that you may make it worse: once you take the cooler off, the thermal pad will almost certainly tear a little. That said, I'm not sure what you have to lose by trying, other than potentially the warranty, since it seems it will throttle with an undervolt anyway.
> 
> What is your case airflow like? Plenty of front intake and rear/top exhaust?
> 
> If you are sticking with the reference cooler you could ask for an RMA, if you don't mind waiting for a replacement.


Case airflow is excellent; I have a Silverstone FT02 and have yet to find a compelling reason to upgrade. I upgraded the 3 x 180mm fans to AP182s a few years back and have a Noctua 120mm for top exhaust.

I guess I'll leave it for the moment; it looks like a pain to attempt a re-mount, and the heat pads they use are hard to come by. Thanks for the input!


----------



## ZealotKi11er

What is the stock clock of the card in Radeon Settings? You could have a card that clocks very high under water but not very well on air.


----------



## majestynl

Finally installed the Dark Hero and the 5900X.
Quickly ran a test and it's looking promising.

I scored 15,514 in Fire Strike Ultra (AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10) - www.3dmark.com

GT2 runs easily at 2730-2750 MHz.
GT1 is still power limited: 2500-2650 MHz.

MPT: 320 W (+15%) = ~368 W

Going to tweak more upcoming days...

Some pictures... the tubing is temporary; I'm going to hard tubing (matte black) once I know everything is working flawlessly.


----------



## LtMatt

MyShadow said:


> Case airflow is excellent; I have a Silverstone FT02 and have yet to find a compelling reason to upgrade. I upgraded the 3 x 180mm fans to AP182s a few years back and have a Noctua 120mm for top exhaust.
> 
> I guess I'll leave it for the moment; it looks like a pain to attempt a re-mount, and the heat pads they use are hard to come by. Thanks for the input!


Just the 1 exhaust? Does the top rear/side of the case get very hot?


----------



## The EX1

MyShadow said:


> Anyone else seeing massive deltas between GPU and GPU hotspot temp when under load? like 40 degrees C?


I do and it makes me worried. Even with a custom curve my hotspot hits 110C while my Edge temp is in the low 60s. Anyone else? This is a reference card btw.


----------



## THUMPer1

Are the MBA 6900 xt's doing well at undervolting? I can't really find any info.


----------



## LtMatt

The EX1 said:


> I do and it makes me worried. Even with a custom curve my hotspot hits 110C while my Edge temp is in the low 60s. Anyone else? This is a reference card btw.


Wow that’s 20c more than mine. What is your case airflow like? Are you running stock voltage or something?


----------



## The EX1

LtMatt said:


> Wow that’s 20c more than mine. What is your case airflow like? Are you running stock voltage or something?


Open air test bench right now so it should be a best case scenario for it. I am currently redoing my loop in my case so my hardware is on this bench.

This is at all stock settings. A more aggressive fan curve will get edge temps in the 60s but the junction throttles at 110c still.


----------



## MyShadow

THUMPer1 said:


> Are the MBA 6900 xt's doing well at undervolting? I can't really find any info.


Mine is stable @ 1100mV; anything lower than that and I get crashes. It seems to be dependent on temps as well, though: 1100mV is only stable if the edge temps stay below 80C.


----------



## MyShadow

LtMatt said:


> Just the 1 exhaust? Does the top rear/side of the case get very hot?


SilverStone Fortress FT02: True Classics Never Go Out of Style 

I can feel plenty of hot air being pushed out, if that's what you mean? Air-cooled GPUs are the FT02's strength; many years ago, mining crypto, I was running 2 x R9 290s in it without a problem... Those 180mm fans really push the air around!


----------



## MyShadow

The EX1 said:


> Open air test bench right now so it should be a best case scenario for it. I am currently redoing my loop in my case so my hardware is on this bench.
> 
> This is at all stock settings. A more aggressive fan curve will get edge temps in the 60s but the junction throttles at 110c still.


If you're going to put it under water, then I'd be interested in knowing whether the thermal pad is mounted properly or whether it's luck of the draw with the GPU - sounds like similar behaviour to mine. If I throttle the clocks to 2200, crank the fans to 100% and undervolt to 1100mV, I can keep the junction @ ~105C.


----------



## MyShadow

ZealotKi11er said:


> What is the stock clock of the card in Radeon Settings? You could have a card that clocks very high under water but not very well on air.


2470 stock


----------



## The EX1

MyShadow said:


> If you're going to put it under water, then I'd be interested in knowing whether the thermal pad is mounted properly or whether it's luck of the draw with the GPU - sounds like similar behaviour to mine. If I throttle the clocks to 2200, crank the fans to 100% and undervolt to 1100mV, I can keep the junction @ ~105C.


I have the Alphacool Acetal Eisblock in my cart, but I haven't pulled the trigger. I love the look of this block and the subtle RGB Radeon branding on it.

My card has a stock clock of 2469. Does undervolting in the AMD software actually work? My voltage seems to stay between 1037 and 1075 mV with 1130 mV set.


----------



## The EX1

Fans - 2250 RPM
Edge - 62C
Junction - 104C
Voltage - 1110 mV
PL - minus 5%
Sustained core - 2323 MHz


----------



## ZealotKi11er

You probably need to lower Vmax with MPT.


----------



## LtMatt

The EX1 said:


> Open air test bench right now so it should be a best case scenario for it. I am currently redoing my loop in my case so my hardware is on this bench.
> 
> This is at all stock settings. A more aggressive fan curve will get edge temps in the 60s but the junction throttles at 110c still.


Yeah, sounds like yours has a terrible mount for whatever reason, same as MyShadow. Sympathies to you both; if it were me, I'd RMA unless planning to go water, since a good mount should see Junction only 15-20C worse than Edge when undervolted at a 2400-2500 MHz core clock.


----------



## LtMatt

MyShadow said:


> Asrock reference / MBA board. Seems unusual especially considering the undervolt, low ambients and how quickly it reaches 110 degrees so I thought maybe it was related to the thermal compound... But then came across this - The hotspot of the Radeon RX 6800 (XT), hurdles in the thermal grease replacement and the correct assembly sequence | igor´sLAB
> 
> Figured maybe I'm just unlucky? Default clock for the card is 2470 though so it's not the worst sample out there...





The EX1 said:


> Open air test bench right now so it should be a best case scenario for it. I am currently redoing my loop in my case so my hardware is on this bench.
> 
> This is at all stock settings. A more aggressive fan curve will get edge temps in the 60s but the junction throttles at 110c still.


Here is my 6900 XT MBA running Warzone, 1440P. Exact settings listed in the video. 

This is a best case scenario for the difference between Edge and Junction on a MBA with what appears to be decent contact. Warzone is not that demanding as can be seen by the OSD. Some other games can draw 40W more at times, but even so the difference only climbs to around 15c or so. It takes voltage being up near 1.100v to get a 20c difference. 

Call of Duty Warzone 1440P Benchmark | 5950X + 6900 XT - YouTube


----------



## OrionBG

Guys, does anybody know anything about the ASRock RX 6900 XT Phantom Gaming D? It's the only almost reasonably priced custom-design RX 6900 XT I can get my hands on...
I can get a reference-design card for about 1100 Euro locally, and they are even easy to find, but I've been reading here that the custom designs have a lot more potential for manipulation with MPT and so on...


----------



## chispy

OrionBG said:


> Guys, does anybody know anything about the ASRock RX 6900 XT Phantom Gaming D? It's the only almost reasonably priced custom-design RX 6900 XT I can get my hands on...
> I can get a reference-design card for about 1100 Euro locally, and they are even easy to find, but I've been reading here that the custom designs have a lot more potential for manipulation with MPT and so on...


I have one; here is a write-up I did on it. Excellent choice of card, go for it. It's a 3x8-pin, non-reference design - AMD RX 6900xt / 6800xt / 6800 Overclocking tweaks, tricks and mods.


----------



## chispy

Finally, an AMD BIOS flashing utility that works on every Radeon RX 6000 series card has been found. People are already flashing their 6800s and 6800 XTs to the 6900 XT BIOS with success. Now we wait for Igor's Lab's Red BIOS Editor to be updated, and voila, the cards can fly fully unlocked.

Download here - RDNA2 RX6000 Series Owners Thread, Tests, Mods, BIOS & Tweaks !


----------



## THUMPer1

6900 xt MBA arrives soon. I really just want to undervolt and keep temps in check. I actually want a 6800 xt, so I'll probably end up selling it anyway.


----------



## LtMatt

THUMPer1 said:


> 6900 xt MBA arrives soon. I really just want to undervolt and keep temps in check. I actually want a 6800 xt, so I'll probably end up selling it anyway.


Keep the 6900 XT, full fat GPUs ftw.


----------



## THUMPer1

LtMatt said:


> Keep the 6900 XT, full fat GPUs ftw.


I know...., but i think it's overkill for my use. haha


----------



## THUMPer1

Asus GPU Tweak... probably nothing special though.

ASUS GPU Tweak III Utility Hits Beta For Overclocking Your GeForce RTX And Radeon RX Cards (hothardware.com): "The software is now getting an update with new features and UI to make an 'extremely intuitive and capable utility.'"


----------



## Gutfux

*Very 'meh' performance with my new 6900xt *

Hey guys, so I managed to get my hands on a 6900xt last week. Fitted into my loop with an EK Quantum Vector block, upgraded my PSU to an RM850x and all working very nicely.

Performance seems... kind of meh? I don't think something is quite right. Frames aren't as high as other people's benchmarks, and in some games I've hardly seen an improvement over my old 1080 Ti (namely Warzone).

I have tried default Wattman settings, turning off boost/enhanced sync/other AMD features, and even wiping my drivers with DDU in safe mode (twice).

*Specs:*

Ryzen 5 3600 with EK Velocity @ 4.2GHz (1.4v)
MSI MPG X570 Gaming Plus
Patriot Viper DDR4 16GB (2x 8GB) @ 3600Mhz 18-19-19-39
6900XT with EK Quantum Vector @ 2750Mhz core & 2150Mhz memory (1125mV)
Corsair RM850x
Custom loop with 360 & 280 rads
Pixio PX7 Prime @ 1440p 165Hz

My temps are fine (around 60C on CPU & GPU during gaming), and I am seeing no CPU or RAM bottlenecks while gaming. I have a FreeSync Premium monitor with a decent quality cable.

Warzone only averages around 100fps, which is frankly poor. I have tried turning settings up and down, but there's no real difference in FPS; I have seen people get a stable 160+ with max settings.

Anything I am missing here?


----------



## ZealotKi11er

Something definitely wrong there. 1080 ti = 5700 xt and this should be 2x that.


----------



## Gutfux

ZealotKi11er said:


> Something definitely wrong there. 1080 ti = 5700 xt and this should be 2x that.


Yeah, I think it must be an issue with Warzone, so I'm going to try reinstalling it.

I have tested Unigine Superposition 1080p Extreme and got 11,319, which seems consistent with the top leaderboard runs on 6900 XTs (the top of the leaderboard got 11,969 with an i9-10980XE).


----------



## newls1

I need help understanding this. LtMatt has been extremely helpful, but I think he's tired of me annoying him! So my issue is this: with the following adjustments in Wattman I get the following score:

Now my issue starts: if I adjust the min frequency slider to 2600 and leave the max slider where it's at, 2750... my graphics score drops 2000 points!! So I'm assuming I'm bouncing off either the amps or watts limit, maybe?? If I put the min slider back to 2500, the score comes back to normal again. So is this the time I finally need to use MPT and adjust a few settings? The card is air-cooled currently (because no one makes a block for this Asus TUF yet), but the heatsink/fans are actually really good, and with an adjusted fan curve temps stay in the 60s C with the hotspot around 78-80C... I noticed that when the min slider was set to 2600, average clocks did stay in the 26xx range, but the score was still lower! What would be my next step here to be able to leave my min slider at 2600-ish and still get a high score in Time Spy? Thanks.


----------



## ZealotKi11er

Try 2650/2750.


----------



## newls1

Still losing 1000 points... I'm pretty sure I need to adjust something in MPT, but I'm not sure what to adjust to overcome this.


----------



## ZealotKi11er

newls1 said:


> Still losing 1000 points... I'm pretty sure I need to adjust something in MPT, but I'm not sure what to adjust to overcome this.


What is your power limit settings?


----------



## newls1

I'll have to get back to you on that in a few... just left that PC.


----------



## No-one-no1

newls1 said:


> Now my issue starts: if I adjust the min frequency slider to 2600 and leave the max slider where it's at, 2750... my graphics score drops 2000 points!! So I'm assuming I'm bouncing off either the amps or watts limit, maybe?? If I put the min slider back to 2500, the score comes back to normal again. So is this the time I finally need to use MPT and adjust a few settings? The card is air-cooled currently (because no one makes a block for this Asus TUF yet), but the heatsink/fans are actually really good, and with an adjusted fan curve temps stay in the 60s C with the hotspot around 78-80C... I noticed that when the min slider was set to 2600, average clocks did stay in the 26xx range, but the score was still lower! What would be my next step here to be able to leave my min slider at 2600-ish and still get a high score in Time Spy? Thanks.


Min slider is too high.


----------



## newls1

ZealotKi11er said:


> What is your power limit settings?


Here are the stock settings for my Asus TUF 6900XT. What would be a good starting point to adjust to get into the 2650-2700 core range?


----------



## newls1

Update: I adjusted the
POWER LIMIT GPU to 325
and TDC to 375.

Holy damn, this card is very hard to cool on air! I had to take the side off the case to give it more airflow with fans at full speed, and 3DMark would crash because the hot spot was 110C and climbing, YIKES. I went into Wattman and dialed the power percentage back to 10% (was 15%); then max watts didn't exceed ~357 (before it was going to nearly 400), 3DMark passed, and I got my best score ever! MPT is amazing... now please, someone come out with a waterblock for the Asus TUF card PLEASE!!!


----------



## LtMatt

newls1 said:


> Update: I adjusted the
> POWER LIMIT GPU to 325
> and TDC to 375.
> 
> Holy damn, this card is very hard to cool on air! I had to take the side off the case to give it more airflow with fans at full speed, and 3DMark would crash because the hot spot was 110C and climbing, YIKES. I went into Wattman and dialed the power percentage back to 10% (was 15%); then max watts didn't exceed ~357 (before it was going to nearly 400), 3DMark passed, and I got my best score ever! MPT is amazing... now please, someone come out with a waterblock for the Asus TUF card PLEASE!!!
> 
> View attachment 2474876


Nice graphics score, that. This is the best I managed with the 6900 XT MBA on air. Might have to run it again soon and see if I can improve it a little. AMD Radeon 6900 XT video card benchmark result - AMD Ryzen 9 5950X, ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)


----------



## RaEyE

newls1 said:


> still losing 1000 points.. im pretty sure i need to adjust something in mpt, but not sure what to adjust to over come this.


Hi,
currently I can't help much, since I'm not at my rig, and sadly won't be for the foreseeable future.
And that although I just completed the loop on my own 6900... well, real life first.

Regardless, your voltage is comparably high.
One of the few things I figured out about my own setup is that a slight undervolt can raise your score, but that's not always the case.
(Radeon overclocking has become somewhat fickle since RDNA. My old 7870 (1GHz Edition, back when 1GHz was still something special ^^) was a real overclocking beast.)
Try lowering your voltage in ~5mV steps (not sure right now what steps Wattman accepts) and find your lowest stable voltage.
It should be around 1010mV on most cards. Some say they can go as low as 1000mV, but not really stable.
When you've found that spot, raise your voltage slightly (1 or 2 steps), then do some benching.

In general I can recommend the tips Igor has prepared for RX 6000 overclocking. This time around Igor really zeroed in on those RX 6000 cards ^^
An overclocking guide for the RX 6000 cards using an RX 6800 XT can be found here: LINK
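That step-down-then-back-off routine, as a sketch (here `is_stable` just stands in for your own benchmark/stress loop — it's not a real API, and the numbers are only examples):

```python
def find_daily_voltage(start_mv: int, step_mv: int, is_stable) -> int:
    """Step the voltage down until the card falls over, then back off two steps."""
    v = start_mv
    while v - step_mv >= 0 and is_stable(v - step_mv):
        v -= step_mv  # still stable one step lower, keep going
    return v + 2 * step_mv  # safety margin: raise 2 steps above the floor

# Pretend this particular card happens to be stable down to 1010 mV
print(find_daily_voltage(1150, 5, lambda mv: mv >= 1010))  # 1020
```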


----------



## ilmazzo

Gutfux said:


> Yeah, I think it must be an issue with Warzone so going to try reinstalling it.
> 
> I have tested Unigine Superposition 1080p extreme and I got 11,319 - seems consistent with the top leaderboards who run 6900xt's (top of the leaderboard got 11,969 with an i9-10980XE)


Are you sure that 2150 on the RAM is helping performance instead of holding you back, or worse?

60C on a full-cover WB is quite high... since it is the EK block, I would check that it is making proper contact with the GPU die.


----------



## newls1

RaEyE said:


> Hi,
> currently I can't help much, since I'm not at my rig, and sadly won't be for the foreseeable future.
> And that although I just completed the loop on my own 6900... well, real life first.
> 
> Regardless, your voltage is comparably high.
> One of the few things I figured out about my own setup is that a slight undervolt can raise your score, but that's not always the case.
> (Radeon overclocking has become somewhat fickle since RDNA. My old 7870 (1GHz Edition, back when 1GHz was still something special ^^) was a real overclocking beast.)
> Try lowering your voltage in ~5mV steps (not sure right now what steps Wattman accepts) and find your lowest stable voltage.
> It should be around 1010mV on most cards. Some say they can go as low as 1000mV, but not really stable.
> When you've found that spot, raise your voltage slightly (1 or 2 steps), then do some benching.
> 
> In general I can recommend the tips Igor has prepared for RX 6000 overclocking. This time around Igor really zeroed in on those RX 6000 cards ^^
> An overclocking guide for the RX 6000 cards using an RX 6800 XT can be found here: LINK


Thank you for the reply. I used MPT and increased my watts/amps limits and the card is scoring now, but temps are making it unstable: once the hot spot hits 102C, the card seems to give up and crashes to desktop. So I had to use some common sense and realized I'm running 375W (the new adjustment I made in MPT) PLUS the 15% power limit... watts were going 400+, hence my cooling issue. I backed down to a 5% power limit, and temps are back in check with a minor undervolt to 1120mV. The hot spot sees a max of 94C in Time Spy. I just can't wait for a damn waterblock to come out for this card; she will absolutely fly with one. I'm guessing this card will come to life like my RVII did once on a FCWB.


----------



## RaEyE

newls1 said:


> I'm guessing this card will come to life like my RVII did once on a FCWB.


I wish you the best of luck on that 
Hopefully I will too get to play around a bit with my card in 2 weeks or so.


----------



## newls1

RaEyE said:


> I wish you the best of luck on that
> Hopefully I will too get to play around a bit with my card in 2 weeks or so.


EKWB's website gives hope that a block might come out for this card (Asus TUF 6900XT). Crossing my fingers.


----------



## Gutfux

ilmazzo said:


> are you sure that the 2150 on ram is helping on perfomance instead of holding you back or worse?
> 
> 60c on a fullcover wb is quite high........ since it is the EK block I would check that it is fitting the gpu die properly.....


Not sure, I can try going down to 2100, but I'm not sure what is optimal for this card in terms of RAM.

Yeah, my temps are even higher now, and junction temps are high too, so not sure what's going on. It's using the original backplate, but that shouldn't make a difference. Might be worth a re-fit.


----------



## DirtyScrubz

Gutfux said:


> *Very 'meh' performance with my new 6900xt *
> 
> Hey guys, so I managed to get my hands on a 6900xt last week. Fitted into my loop with an EK Quantum Vector block, upgraded my PSU to an RM850x and all working very nicely.
> 
> Performance seems.. kind of meh? I don't think something is quite right. Frames aren't as high as other peoples benchmarks and some games I have hardly seen an improvement over my old 1080ti (namely Warzone).
> 
> I have tried default Wattman settings, turning off boost/enhanced sync/other AMD features and even wiping my drivers with DDU in safe mode (twice).
> 
> *Specs:*
> 
> Ryzen 5 3600 with EK Velocity @ 4.2GHz (1.4v)
> MSI MPG X570 Gaming Plus
> Patriot Viper DDR4 16GB (2x 8GB) @ 3600Mhz 18-19-19-39
> 6900XT with EK Quantum Vector @ 2750Mhz core & 2150Mhz memory (1125mV)
> Corsair RM850x
> Custom loop with 360 & 280 rads
> Pixio PX7 Prime @ 1440p 165Hz
> 
> View attachment 2474801
> View attachment 2474802
> 
> 
> My temps are fine (around 60c on CPU & GPU during gaming) and I am getting no CPU or RAM bottlenecks when gaming. I have a FreeSync premium monitor with decent quality cable.
> 
> Warzone only averages around 100fps, which is frankly ****. I have tried turning settings up/down but no real difference in FPS, I have seen people get a stable 160+ with max settings.
> 
> Anything I am missing here?


Stating the obvious here but make sure vsync is turned off.


----------



## cg4200

Hey, I have been away from AMD since the 5700XT for a while. I could write the SPPT with MPT so easily before. I saw the German guy's video but don't know German, lol. The video makes it look easy, just making sure:
1. I see: run the command prompt as admin and unlock the ROM.
What next? Load the BIOS saved from GPU-Z into MPT, make my changes, and save instead of write, from what I read?
Then you flash that same modded BIOS, correct?
Thanks.
My 6900XT does not boost as high as some I have seen.
Also, I am using the 12.2 drivers, the newest, but it seems my undervolt voltage only stays where I set it if I don't change core clocks. If I change the core past 2509, the voltage jumps back up to stock, even when moving the min and max sliders.
What is the best driver so far?


----------



## RaEyE

cg4200 said:


> Hey, I have been away from AMD since the 5700XT for a while. I could write the SPPT with MPT so easily before. I saw the German guy's video but don't know German, lol. The video makes it look easy, just making sure


The article to the video of "the german guy" - LINK


----------



## OrionBG

chispy said:


> I have one , here is a write up i did on it , excellent choice of card , go for it  .It is a 3x8 power pin non-reference design - AMD RX 6900xt / 6800xt / 6800 Overclocking tweaks , tricks and mods.


It is a really nice card, but unfortunately there is currently no water block for it, and EK doesn't have any plans for one (I guess Alphacool also won't make one). I'll most probably go with a Red Devil... they are beginning to show up locally...
With a little bit of luck, by the end of next week I should get one... hopefully...


----------



## The EX1

Gutfux said:


> *Very 'meh' performance with my new 6900xt *
> 
> Hey guys, so I managed to get my hands on a 6900xt last week. Fitted into my loop with an EK Quantum Vector block, upgraded my PSU to an RM850x and all working very nicely.
> 
> Performance seems.. kind of meh? I don't think something is quite right. Frames aren't as high as other peoples benchmarks and some games I have hardly seen an improvement over my old 1080ti (namely Warzone).
> 
> I have tried default Wattman settings, turning off boost/enhanced sync/other AMD features and even wiping my drivers with DDU in safe mode (twice).
> 
> *Specs:*
> 
> Ryzen 5 3600 with EK Velocity @ 4.2GHz (1.4v)
> MSI MPG X570 Gaming Plus
> Patriot Viper DDR4 16GB (2x 8GB) @ 3600Mhz 18-19-19-39
> 6900XT with EK Quantum Vector @ 2750Mhz core & 2150Mhz memory (1125mV)
> Corsair RM850x
> Custom loop with 360 & 280 rads
> Pixio PX7 Prime @ 1440p 165Hz
> 
> View attachment 2474801
> View attachment 2474802
> 
> 
> My temps are fine (around 60c on CPU & GPU during gaming) and I am getting no CPU or RAM bottlenecks when gaming. I have a FreeSync premium monitor with decent quality cable.
> 
> Warzone only averages around 100fps, which is frankly ****. I have tried turning settings up/down but no real difference in FPS, I have seen people get a stable 160+ with max settings.
> 
> Anything I am missing here?


Looks like EK needs to work on that light diffusion. Seeing those LED hotspots would drive me nuts.

What is your usage at?


----------



## kazukun

AGESA1.2.0.0 BIOS F33a








X570AORUSXTREME





www.mediafire.com


----------



## newls1

why is there a bios file in this thread for an AMD motherboard??


----------



## kazukun

I posted this by mistake.


----------



## LtMatt

kazukun said:


> I posted this by mistake.


Leave it up and let it serve as a lesson to do better in life and at forum etiquette.


----------



## Bart

Go figure, I had put cash down on an EVGA 3080 a while ago, with no sign or word on arrival. I switched my money to a 6900XT and it landed in 3 days, LOL!









This thing is THICK!! Good lord I cannot wait to get this sucker properly blocked.


----------



## HeLeX63

I am expecting my Alphacool 6900XT Red Devil Limited OC water block in 2 weeks, and I will post OC results and temps. On air I am overclocked, sustaining 2620 to 2680MHz on the core with 355W power draw on the GPU.

This will be paired with a 5900X with a monoblock as well, plus 480x60mm and 560x30mm radiators.


----------



## ZealotKi11er

Bart said:


> Go figure, I had put cash down on an EVGA 3080 a while ago, with no sign or word on arrival. I switched my money to a 6900XT and it landed in 3 days, LOL!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> This thing is THICK!! Good lord I cannot wait to get this sucker properly blocked.


Which store?
The higher the card is priced over MSRP, the faster you get it. My 3080 TUF non-OC, $950 CAD + tax, took 65 days. My friend got a high-end EVGA model, $1150 CAD + tax, after 1 week.


----------



## Bart

ZealotKi11er said:


> Which store?
> The higher the card is priced over MSRP, the faster you get it. My 3080 TUF non-OC, $950 CAD + tax, took 65 days. My friend got a high-end EVGA model, $1150 CAD + tax, after 1 week.


Canada Computers. Took less than a week from the time I put cash on it till arrival, maybe I just got lucky. But I had cash down on that EVGA card for several weeks with no word.

EDIT: initial testing is interesting. I'll need time to adjust to these higher frequencies, LOL! But so far it seems happy at 2400MHz; I'm just letting it auto-OC for now until I get a water block, then things get serious. But paired with a 5950X, with -20 on all cores in the Curve Optimizer, I'm approaching nearly 19,000 points in Time Spy, on an all-AMD rig with no Intel or Nvidia. Giggity. 🙃


----------



## ZealotKi11er

Bart said:


> Canada Computers. Took less than a week from the time I put cash on it till arrival, maybe I just got lucky. But I had cash down on that EVGA card for several weeks with no word.


They added 6000 series cards quite late and many people did not get in queue.


----------



## Bart

ZealotKi11er said:


> They added 6000 series cards quite late and many people did not get in queue.


Plus I'm guessing most people aren't nuts enough to drop $1600 CAD on a card either; it doesn't make sense from a performance perspective. But some of us are nuts.


----------



## ZealotKi11er

Bart said:


> Plus I'm guessing most people aren't nuts enough to drop $1600 CAD on a card either; it doesn't make sense from a performance perspective. But some of us are nuts.


At the current market? It's 2080 Ti pricing, not really that bad.


----------



## BeeDeeEff

Can anyone give some guidance or advice on how to get MorePowerTool's PPT editing feature working for my reference Sapphire 6900XT? Or is it BIOS/ROM mods only, meaning I need to extract my GPU BIOS first to load up?

I've DDU'd from safe mode several times, and the options are greyed out on both drivers 20.12.1 and 21.1.1, running it as administrator, both with the shortcut option and by directly right-clicking the .exe.


----------



## Forsaken1

BeeDeeEff said:


> View attachment 2475405
> 
> 
> Can anyone give some guidance or advice on how to get MorePowerTool's PPT editing feature working for my reference Sapphire 6900XT? Or is it BIOS/ROM mods only, meaning I need to extract my GPU BIOS first to load up?
> 
> I've DDU'd from safe mode several times, and the options are greyed out on both drivers 20.12.1 and 21.1.1, running it as administrator, both with the shortcut option and by directly right-clicking the .exe.


Look at dirty’s post 344.


----------



## kazukun

BeeDeeEff said:


> View attachment 2475405
> 
> 
> Can anyone give some guidance or advice on how to get MorePowerTool PPT editing feature working for my reference Sapphire 6900XT? or is it bios/rom mods only meaning I need to extract my gpu bios first to load up?
> 
> I've DDU'd from safe mode several times, and the options are greyed out on both drivers 20.12.1 + 21.1.1, runing it as administrator, both with the shortcut option and directly rightclicking the .exe


----------



## BeeDeeEff

Forsaken1 said:


> Look at dirty’s post 344.


Thanks for the direction, that was the step I was missing: getting the BIOS from GPU-Z to load in. That got me on the right track, and I already have the GPU pulling north of 330 watts. Learning the quirks of RDNA2 now.


----------



## kwikgta

Installed the new AsRock Phantom Gaming 6900xt today. I love it!


----------



## nyk20z3

Working on getting a Strix 6900XT. Does anyone else in this thread own one?


----------



## newls1

I have the "TUF" version and it's been solid. Just need a FCWB for the damn thing.


----------



## Bart

newls1 said:


> I have the "TUF" version and it's been solid. Just need a FCWB for the damn thing.


Same!


----------



## HeLeX63

newls1 said:


> I have the "TUF" version and it's been solid. Just need a FCWB for the damn thing.


I know Alphacool makes a lot of the 6800/6900XT water blocks; even my Red Devil Limited Edition has one. Have you checked with them, or even EK?


----------



## newls1

HeLeX63 said:


> I know Alphacool makes a lot of the 6800/6900XT water blocks; even my Red Devil Limited Edition has one. Have you checked with them, or even EK?


Yes, I have... nothing yet. I've been in email contact with both companies.


----------



## TurboJelly

Is there any comparison of the non-reference 6900XT cards available somewhere on the net? All the reviews I can find only cover the reference card. Are any cards clearly better than the others?

I am considering upgrading my 1080 to something more powerful, and the 6900XT seems like the best deal to me in Europe. Nvidia is nowhere to be seen (and I actually wanted to go team red this time around); the 6800XT starts at 1.1k EUR and the 6900XT at 1.2-1.3k, so I'd rather pay a few euros extra to get the full die.


----------



## ZealotKi11er

TurboJelly said:


> Is there any comparison of the non-reference 6900XT cards available somewhere on the net? All the reviews I can find only cover the reference card. Are any cards clearly better than the others?
> 
> I am considering upgrading my 1080 to something more powerful, and the 6900XT seems like the best deal to me in Europe. Nvidia is nowhere to be seen (and I actually wanted to go team red this time around); the 6800XT starts at 1.1k EUR and the 6900XT at 1.2-1.3k, so I'd rather pay a few euros extra to get the full die.


Generally speaking, AIB cards are better because they are bigger, and most of them are pretty good this generation. Sapphire, XFX, and PowerColor are all good AMD AIBs; ASUS is also good.
Unlike Nvidia, which capped its GPUs' power limits, with AMD cards it comes down to the silicon lottery to get the best GPU.


----------



## Bart

newls1 said:


> Yes, I have... nothing yet. I've been in email contact with both companies.


Heatkillers are due, but not soon. EK blocks are available, but I don't use their GPU blocks for any reason. I'm content to wait.


----------



## HeLeX63

Bart said:


> Heatkillers are due, but not soon. EK blocks are available, but I don't use their GPU blocks for any reason. I'm content to wait.


You are referring to the reference design; we are talking about AIB water blocks. EKWB has none available for any AIB card.


----------



## newls1

Bart said:


> Heatkillers are due, but not soon. EK blocks are available, but I don't use their GPU blocks for any reason. I'm content to wait.


reading is fundamental


----------



## No-one-no1

Has anyone tested overvolting a reference model on the newest driver update?
Or any other overvolting success?


----------



## ZealotKi11er

No-one-no1 said:


> Has anyone tested overvolting a reference model on the newest driver update?
> Or any other overvolting success?


Is there an issue with undervolting? The Wattman voltage slider is not really undervolting. Probably best to just use MPT, limit max voltage to whatever value you want, and then use Wattman to bring the core clock back up.


----------



## No-one-no1

ZealotKi11er said:


> Is there an issue with undervolting? The Wattman voltage slider is not really undervolting. Probably best to just use MPT, limit max voltage to whatever value you want, and then use Wattman to bring the core clock back up.


Overvolting != undervolting


----------



## ZealotKi11er

No-one-no1 said:


> Overvolting != undervolting


I just lower it from 1.15 to 1.05. MPT is a dumb name; it can be used to lower values too.


----------



## HeLeX63

ZealotKi11er said:


> Is there an issue with undervolting? The Wattman voltage slider is not really undervolting. Probably best to just use MPT, limit max voltage to whatever value you want, and then use Wattman to bring the core clock back up.


Core frequency and power seem to be tied directly to core voltage. You could use MPT to hard-lock the voltage at, say, 1100mV, but to do it normally via software you would need to lower the frequency to yield a lower Vcore.


----------



## ZealotKi11er

HeLeX63 said:


> Core frequency and power seem to be tied directly to core voltage. You could use MPT to hard-lock the voltage at, say, 1100mV, but to do it normally via software you would need to lower the frequency to yield a lower Vcore.


Yes, it's a balance of power/voltage/frequency. Voltage and frequency are linked: if you give it unlimited power, you are going to be voltage limited. What you can do is keep power at 255W and set the voltage to 1.05V with MPT. That will lower the max frequency. You can then go back to Wattman and increase the slider. As long as you can maintain 1.05V, you should be able to hit the max frequency minus ~50MHz.
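The intuition behind this is first-order CMOS dynamic power scaling, P ≈ C·V²·f: at a fixed power budget, achievable frequency goes roughly as 1/V², so an undervolt buys clock headroom. A back-of-envelope sketch (not an exact RDNA2 model — real cards also have static leakage and other losses):

```python
def freq_headroom(v_old: float, v_new: float) -> float:
    """Relative frequency headroom at constant power, assuming P ~ V^2 * f."""
    return (v_old / v_new) ** 2

# Dropping 1.15 V -> 1.05 V at the same 255 W budget frees roughly 20% more clock
print(round(freq_headroom(1.15, 1.05), 2))  # 1.2
```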


----------



## LtMatt

ZealotKi11er said:


> Yes, it's a balance of power/voltage/frequency. Voltage and frequency are linked: if you give it unlimited power, you are going to be voltage limited. What you can do is keep power at 255W and set the voltage to 1.05V with MPT. That will lower the max frequency. You can then go back to Wattman and increase the slider. As long as you can maintain 1.05V, you should be able to hit the max frequency minus ~50MHz.


Could you share the exact settings you use in Radeon Software Tuning and MPT?


----------



## The EX1

Well, I was able to buy one of the LE Red Devil cards this morning. Think it is worth selling my AMD reference card?


----------



## bkrownd

I have joined the club with the Sapphire Nitro card, through the Newegg Shuffle. It's like joining a snooty yacht club - I feel a bit naughty about it. I'm a little worried about the power consumption, just because I live in the tropics and heating the room with a GPU is not an ideal situation. (I wonder if I can duct the exhaust outside...?) But I just wanted to get a Nitro and be done with this months-long GPU search, and I "won" the shuffle lotto, so now I'm able to move on with my life. Sometime this year I may want to get a 21:9 display to really make use of the new horsepower.
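Quick math on the room-heating worry: essentially all the board power ends up as heat in the room, and watts convert to BTU/hr by a constant factor of ~3.412 (the wattage figure below is just an example, not this card's exact draw):

```python
def watts_to_btu_per_hr(watts: float) -> float:
    """Every watt dissipated is ~3.412 BTU/hr of heat dumped into the room."""
    return watts * 3.412

# A 300 W GPU is roughly a 1000 BTU/hr space heater
print(round(watts_to_btu_per_hr(300)))  # 1024
```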


----------



## The EX1

bkrownd said:


> I have joined the club with the Sapphire Nitro card, through the Newegg Shuffle. It's like joining a snooty yacht club - I feel a bit naughty about it. I'm a little worried about the power consumption, just because I live in the tropics and heating the room with a GPU is not an ideal situation. (I wonder if I can duct the exhaust outside...?) But I just wanted to get a Nitro and be done with this months-long GPU search, and I "won" the shuffle lotto, so now I'm able to move on with my life. Sometime this year I may want to get a 21:9 display to really make use of the new horsepower.


Congrats on the Nitro! I wouldn’t worry about the power consumption. These cards like an undervolt and can perform really well still on a power saving profile with some easy tweaking.


----------



## RaEyE

The EX1 said:


> Congrats on the Nitro! I wouldn’t worry about the power consumption. These cards like an undervolt and can perform really well still on a power saving profile with some easy tweaking.


Congrats on the Nitro!

Indeed, undervolting is still very much worthwhile with the new Radeon 6000 cards.
Here is something to read on that topic:

The complete Big Navi UV Guide: Undervolting and power saving with the MorePowerTool simply explained | Practice
AMD RX 6800 Overclocking and Undervolting – from over 240 watts to under 180 watts, it's all there

Have fun with your new acquisition.


----------



## No-one-no1

To get higher speeds than about 2700, we need more voltage, not less (or very low temps).
The VRMs should be more than capable of delivering the power. We just need the driver/BIOS checks circumvented so we can apply voltages higher than the 1175mV factory limit with MPT.


----------



## newls1

Holy crap, I might have finally found something to make me super happy. On TechPowerUp's main page they announced that Alphacool is finally coming out with a FCWB for the Asus TUF 6800XT card... My question is this: I have the 6900XT version... can someone confirm it's the same PCB? I would think it almost certainly is. Anyone have any knowledge about this?


----------



## OrionBG

Hey guys, I'm finally the proud owner of a PowerColor RX6900XT Red Devil.
I have an issue though... I have a 55" LG B9 OLED TV that I have used as my main monitor for almost a year. I was excited to finally get the ability to go to 4K@120Hz over HDMI 2.1, but for some reason I still only see 60Hz as a maximum... I have the 12-bit color option (which was not there with the old GPU on HDMI 2.0), but I don't have any option to switch to 120Hz anywhere in Windows.
The previous GPU was an Nvidia RTX 2080 Ti, and I DDU'd the Nvidia drivers before installing the AMD ones.
Ideas?


----------



## ilmazzo

If you downgrade to 8- or 10-bit, maybe you get the 120Hz option?


----------



## ilmazzo

newls1 said:


> Holy crap, I might have finally found something to make me super happy. On TechPowerUp's main page they announced that Alphacool is finally coming out with a FCWB for the Asus TUF 6800XT card... My question is this: I have the 6900XT version... can someone confirm it's the same PCB? I would think it almost certainly is. Anyone have any knowledge about this?
> 
> View attachment 2476247


Reference cards can swap full-cover blocks with each other; I'm not sure regarding the AIB Asus model though... you can compare some detailed PCB shots from some site, or be the one who sacrifices himself for the community.


----------



## ZealotKi11er

OrionBG said:


> Hey guys, I'm finally the proud owner of a PowerColor RX6900XT Red Devil.
> I have an issue though... I have a 55" LG B9 OLED TV that I have used as my main monitor for almost a year. I was excited to finally get the ability to go to 4K@120Hz over HDMI 2.1, but for some reason I still only see 60Hz as a maximum... I have the 12-bit color option (which was not there with the old GPU on HDMI 2.0), but I don't have any option to switch to 120Hz anywhere in Windows.
> The previous GPU was an Nvidia RTX 2080 Ti, and I DDU'd the Nvidia drivers before installing the AMD ones.
> Ideas?


Max is 4K 120 10-bit.
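Rough math on why (assuming the common 4K blanking totals of 4400×2250 and HDMI 2.1's ~42.67 Gb/s usable payload after FRL 16b/18b coding — ballpark figures, not a spec quote):

```python
def hdmi_data_rate_gbps(h_total: int, v_total: int, hz: int, bits_per_channel: int) -> float:
    """Uncompressed RGB data rate including blanking, in Gb/s (3 color channels)."""
    return h_total * v_total * hz * bits_per_channel * 3 / 1e9

FRL_PAYLOAD_GBPS = 48 * 16 / 18  # ~42.67 Gb/s usable on HDMI 2.1

ten_bit = hdmi_data_rate_gbps(4400, 2250, 120, 10)     # ~35.6 Gb/s -> fits
twelve_bit = hdmi_data_rate_gbps(4400, 2250, 120, 12)  # ~42.8 Gb/s -> just over, needs DSC
print(ten_bit < FRL_PAYLOAD_GBPS, twelve_bit > FRL_PAYLOAD_GBPS)  # True True
```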


----------



## newls1

I will certainly buy the 6800XT block and try... worst case is I'll have to sell it if it doesn't work. No biggy.


----------



## The EX1

newls1 said:


> I will certainly buy the 6800XT block and try... worst case is I'll have to sell it if it doesn't work. No biggy.


There are some slight differences between the 3080 and 3090 TUF cards, so the same could be true here. It should be easy to tell with some PCB photos.


----------



## OrionBG

ZealotKi11er said:


> Max is 4K 120 10-bit.


Switching to 10-bit gives me "No Signal" for some reason, and the PC shortly restarts on its own and boot-loops several times...
Going to 8-bit works, but still no more than 60Hz... I'm astounded by this situation...


----------



## Forsaken1

newls1 said:


> I will certainly buy the 6800XT block and try... worst case is I'll have to sell it if it doesn't work. No biggy.


Do not get your hopes up for great scaling with a WB. Temps are nice - idle at or near ambient - but clocks do not scale well. You need sub-zero temps, and that does not look great from what I've seen. Still better than standard ambient, though.
Every new generation of CPU/GPU, OC scaling goes down the toilet.


----------



## newls1

Forsaken1 said:


> Do not get your hopes up for great scaling with a WB. Temps are nice - idle at or near ambient - but clocks do not scale well. You need sub-zero temps, and that does not look great from what I've seen. Still better than standard ambient, though.
> Every new generation of CPU/GPU, OC scaling goes down the toilet.


I will get my hopes up. I can just barely cool 347 watts right now and sustain ~2700MHz on air. A FCWB will allow me to increase my power limit back to 15% (currently at 7%) on top of the 325W MPT adjustment. My temps should certainly stay in check (mostly the hotspot temp, as that is what is limiting me), clocks should sustain a steady 2700+, and it will let me, like I said above, finally run the full 375 watts. Time will tell, but I'm certainly crossing my fingers! 🤞


----------



## The EX1

OrionBG said:


> Switching to 10-bit gives me "No Signal" for some reason, and the PC shortly restarts on its own and boot-loops several times...
> Going to 8-bit works, but still no more than 60Hz... I'm astounded by this situation...


Is that DP cable 1.4 and DSC certified? I had to buy a different DP cable than what my monitor came with to support the bandwidth required.


----------



## OrionBG

The EX1 said:


> Is that DP cable 1.4 and DSC certified? I had to buy a different DP cable than what my monitor came with to support the bandwidth required.


It is an HDMI 2.1 Cable. The TV does not have any DP inputs.


----------



## cg4200

OrionBG said:


> It is an HDMI 2.1 Cable. The TV does not have any DP inputs.


I have an LG 65 C9, and there's an option in Windows under Display settings, then down in Advanced display settings, where you should see 60Hz; change it and hit Apply. If you can't see the option, make sure your HDMI 2.1 cable is good, then use DDU again and try a reinstall.


----------



## OrionBG

cg4200 said:


> I have an LG 65 C9, and there's an option in Windows under Display settings, then down in Advanced display settings, where you should see 60Hz; change it and hit Apply. If you can't see the option, make sure your HDMI 2.1 cable is good, then use DDU again and try a reinstall.


60Hz I can see; 100Hz and 120Hz are the ones I'm missing. Maybe it is time for a Windows reinstall... I just hope it is not some firmware issue with the TV...


----------



## coelacanth

I just bought a CX to use with the 6900XT. I hope it works at 120Hz. I'll find out this weekend.


----------



## ZealotKi11er

coelacanth said:


> I just bought a CX to use with the 6900XT. I hope it works at 120Hz. I'll find out this weekend.


It works with C9.


----------



## OrionBG

I finally fixed it! It was most probably some driver issue, I guess, as DDU'ing the GPU driver and installing it again fixed it. I'm now running at 4K@120Hz, 10-bit color, RGB 4:4:4, PC Standard (Full RGB).
The funny part is that because the card is too big to currently fit in my PC (water cooling reservoirs...), I'm using an old Lian Li riser cable that came with my 2-year-old PC-O11WRX case, but it works happily at PCIe 4.0 specs...
The even funnier part is that last night I put in my old HDMI cable that has never even heard of the HDMI 2.1 spec, but it works fine at the above settings... I'm afraid to connect the new one again...
The only thing left to do is enable FreeSync support with CRU...

Update:
Enabling GPU Scaling brings the screen back to a max of 60Hz... Disable GPU Scaling and, "Oh! Miracle..." back to 120Hz...

Update 2:
Actually nvm... It appears this issue has been around for more than a decade and is "normal" behavior...

Update 3:
It appears that, at least on my configuration, editing the monitor EDID to add FreeSync support results in a loss of video signal. The system then needs to be rebooted into Safe Mode and the EDID information reset to default to get a signal again.

Basically 4K@120Hz works fine (unless you enable GPU Scaling), but no FreeSync (at least on my LG B9). The 2020 LG models (BX, CX, GX...) supposedly support FreeSync with no issues.


----------



## HeLeX63

OrionBG said:


> Switching to 10-bit gives me "No Signal" for some reason, and the PC shortly restarts on its own and boot-loops several times...
> Going to 8-bit works, but still no more than 60Hz... I'm astounded by this situation...


You actually need a proper new HDMI 2.1 cable capable of 4K 120Hz...


----------



## jfrob75

I recently upgraded to a reference 6900 XT by PowerColor. I just installed the EK water block. What a difference it makes in the GPU junction and hot spot temps. On the reference air cooler I would see hot spot temps of 112 deg C. On the water block the hot spot temps barely get into the 60s. I am also using the latest MPT. I am able to change the power setting from 255 to 350, TDC to 350, and TDC SOC to 65 without any obvious issues. However, if I increase the maximum voltage GFX to 1200mV, it results in TimeSpy not being able to run the first graphics test; it causes a timeout to occur. If I increase the max memory clock under the Overdrive Limits tab, it does not allow me to adjust the max GPU clock above 500MHz. So, is there something I am doing wrong or missing, or is it a driver bug?


----------



## ZealotKi11er

jfrob75 said:


> I recently upgraded to reference 6900 XT by PowerColor. I just installed the EK water block. What a difference it makes in the GPU junction and hot spot temps. On the reference air cooler I would see hot spot temps of 112 deg C. On the water block the hot spot temps barely get into the 60's. I am also using the latest MPT. I am able to change the power setting from 255 to 350, TDC to 350 and TDC SOC to 65 without any obvious issues. However, if I increase the maximum voltage GFX to 1200mv it results timespy not being able to run the first graphics test, it causes a timeout to occur. If I increase the max memory clock, under the Overdrive Limits tab, it does not allow me to adjust the max GPU clock above 500MHz. So, is there something I am doing wrong or missing or a driver bug?


Only edit the power limits. Voltage and clocks you can only adjust downward.


----------



## lukart

kwikgta said:


> Installed the new AsRock Phantom Gaming 6900xt today. I love it!
> View attachment 2475541


That's awesome, I was looking for that card. What temperatures are you getting?


----------



## OrionBG

HeLeX63 said:


> You actually need a proper new HDMI 2.1 cable capable of 4k 120hz...


Yep, I purchased one even before the card 
Everything runs OK now (with the exception of FreeSync), so I'm happy. Doing tests and stuff...


----------



## HeLeX63

OrionBG said:


> Yep I purchased one even before the card
> Everything runs OK now (with the exception of Freesync) so I'm happy. Doing tests and stuff...


Good to hear you got it running. What was the issue? I have an LG C9 OLED, and it recognised 120Hz straight away just by plugging it in...


----------






## OrionBG

HeLeX63 said:


> Good to hear you got it running. What was the issue? I have an LG C9 OLED, and it recognized 120hz straight away by plugging it in...


I had GPU Scaling enabled, and it appears that this limits the refresh rate to 60Hz. Disabling the feature shows the 100Hz and 120Hz options in Display settings.


----------



## RaEyE

OrionBG said:


> I have enabled GPU Scaling and it appears that this is limiting the refresh rate to 60Hz. Disabling the feature shows 100Hz and 120Hz options in Display settings.


The AMD FAQ says that enabling GPU scaling forces native resolution and refresh rate.
LG does not mention the actual display refresh rate of your device, only its engine refresh rate of 120Hz.
It might be possible the actual OLED panel of your device is a 60Hz display virtually running at 120Hz. TVs are known for "virtual refresh rates" and such.

I won't rule out that this might also be a bug between the Radeon Adrenalin software and your display manufacturer's firmware, but the reporting of display characteristics should be handled by standardized protocols... so it seems unlikely.

What you might try (if you haven't already) is to enable GPU Scaling, then try to force 120Hz on your display via Windows.
AMD FAQ - Enabling and Configuring GPU Scaling
AMD FAQ - Adjusting Display Brightness, Resolution and Refresh Rate


----------



## OrionBG

RaEyE said:


> The AMD Faq says that enabling GPU scaling forces native resolution and refresh rate.
> LG does not mention the actual display refresh rate of your device. Only it's engine refresh rate of 120Hz.
> It might be possible the actual OLED display of your device is a 60Hz display virtually running with 120Hz. TV's are known for "virtual refresh rates" and such.
> 
> I won't rule out, this also might be a bug that occures inbetween the Radeon Adrenalin Software and your display manufacturers firmware, but the reporting of display characteristics should be handled by standardized protocols ... soooo it seems unlikely.
> 
> What you might try (if you not already have) is to enable GPU Scaling, then via Windows try to force 120Hz on your display.
> AMD Faq - Enabling and Configuring GPU Scaling
> AMD Faq - Adjusting Display Brightness, Resolution and Refresh Rate


According to what I've read, enabling GPU Scaling forces 60Hz independent of what the monitor supports (100Hz, 120Hz, 144Hz, 165Hz, 240Hz...).
When I enable GPU Scaling from the driver settings, all refresh rate options above 60Hz just disappear...
Anyway, I disabled GPU Scaling and everything is fine. It is disabled by default, after all.
Also, the LG OLED panels in 2019 and newer TVs are native 120Hz panels.
I've used Nvidia cards for many years and I'm still adjusting to AMD's way of doing things.


----------



## Bart

Any of you guys having "blackout" issues? I just tried to update to the latest drivers and the install crashed, now my monitor keeps going dark within seconds of booting. If I unplug/replug the DP cable, the screen comes back for about 10 seconds before it goes black again. This _was_ behaving just fine, what the heck!


----------



## coelacanth

Bart said:


> Any of you guys having "blackout" issues? I just tried to update to the latest drivers and the install crashed, now my monitor keeps going dark within seconds of booting. If I unplug/replug the DP cable, the screen comes back for about 10 seconds before it goes black again. This _was_ behaving just fine, what the heck!


Haven't had that problem. I always uninstall old drivers, then use DDU in safe mode, and then reinstall fresh drivers.


----------



## Bart

coelacanth said:


> Haven't had that problem. I always uninstall old drivers, then use DDU in safe mode, and then reinstall fresh drivers.


I made it go away _so far_ (knock on wood) by setting my Aorus Master's primary PCIe lane to Gen3 instead of leaving it at "auto" and running Gen4.


----------



## HyperC

Anybody order the Red Devil waterblock from PPCs? It says pre-order, coming the 3rd or 4th week of January. Did they sell out, or are they still waiting for the first shipment?

Forget it, I got lucky AF. They got the stock today. Posting pics hopefully this weekend, depending on Newegg shipping.


----------



## ZealotKi11er

Are there any suggestions for a maximum TimeSpy score? How much of a role do platform, CPU, and RAM play?


----------



## HyperC

Pretty sure only for the CPU part.


----------



## ZealotKi11er

I got all the way up to 22.2k but I feel I can go higher.


----------



## HyperC

Very nice. I think mine comes Monday, and now with all this snow, who knows... So, overclocking these cards, do I have to use Wattman or can I still use Afterburner? And for MPT, do I just load, edit, write, save? It's been a very long time since I used AMD.


----------



## ilmazzo

MSI AB is good for common OC, but you can't use it to increase the card's power budget limit.

Voltage is still unmanageable by any software atm, but it would only be useful for LC cards on steroids anyway.


----------



## ZealotKi11er

HyperC said:


> very nice, Think mine comes monday and now with all this snow who knows... So overclocking these cards do I have to use wattman or can i just still use afterburner. And for MPT do I just load, edit , write, save? been a very long time using amd


You want to unlock power and keep it cool.


----------



## Kstarestrella

Good evening guys. I'm new here and I have some questions about watercooling my ASRock AMD Radeon RX 6900 XT Phantom. 

I recently ordered a waterblock from AliExpress. However, I'm just learning that the reference blocks don't work on the Phantom. 

Do you guys know of a waterblock that may be compatible with the ASRock 6900XT Phantom? 

I'll probably have to sell the waterblock I ordered, since it's of no use to me. 

Please advise. 

Thank you.


----------



## HyperC

I was looking to buy an ASRock card also... I could not find any info about anyone making a water block for it. You might have to email around asking. GL


----------



## OrionBG

Kstarestrella said:


> Good evening guys. I'm new here and I have some questions about watercooling my ASRock ASRock AMD Radeon RX 6900 XT Phantom.
> 
> I recently ordered an Waterblock from Aliexpress. However, I'm just learning that the reference blocks don't work on the Phantom.
> 
> Do you guys know of a waterblock that may be compatible with the ASRock 6900XT phantom?
> 
> I'll probably have to sell the waterblock I ordered since it's no use to me.
> 
> Please advise.
> 
> Thank you.


Initially, I was also going to purchase the ASRock Phantom D, but both EK and Alphacool told me they will NOT be making blocks for that model.
Since this is not a very popular brand (for GPUs at least), I doubt anyone will make a block for it... at least not anytime soon...
I got myself a PowerColor Red Devil 6900 XT. It will have blocks from at least Alphacool and most probably EK too.


----------



## OrionBG

On another note, can someone please give me any steps for successfully overclocking these RX 6900 XT cards? Whatever I do with MPT, the GPU gets locked to 5xxMHz.
I can overclock an Nvidia card in my sleep, but these RX cards are quite alien to me...
I have read Igor's Lab articles, but things don't actually happen as described. 
For instance, I can set a higher RAM clock limit and it works! I got up to 2300MHz, but the GPU begins artifacting quite a lot at that point. That is just the GDDR6 being unable to handle it.
Unfortunately, the GPU clock locks to 5xxMHz at that point... Changing power limits... same. Changing max voltage... same... (all changes made with MPT)


----------



## HeLeX63

OrionBG said:


> On another note, can someone please give any steps for successful overclocking of those RX 6900xt cards? Whatever I do with MPT, the GPU gets locked to 5xxMHz
> I can overclock an Nvidia card in my sleep but those RX cards are quite alien to me...
> I have read Igor's lab articles but things don't actually happen like described
> For instance, I can put a higher RAM clock limit and it works! I got up to 2300MHz but the GPU begins artifacting quite a lot at that point. That Is just the GDDR6 being unable to handle it.
> Unfortunately, the GPU clocks locks up to 5xxMHz at that point... Changing power limits... same, changing... max voltage... same... (all changes made with MPT)


How are you even able to get the RAM higher than 2150MHz?


----------



## HeLeX63

OrionBG said:


> On another note, can someone please give any steps for successful overclocking of those RX 6900xt cards? Whatever I do with MPT, the GPU gets locked to 5xxMHz
> I can overclock an Nvidia card in my sleep but those RX cards are quite alien to me...
> I have read Igor's lab articles but things don't actually happen like described
> For instance, I can put a higher RAM clock limit and it works! I got up to 2300MHz but the GPU begins artifacting quite a lot at that point. That Is just the GDDR6 being unable to handle it.
> Unfortunately, the GPU clocks locks up to 5xxMHz at that point... Changing power limits... same, changing... max voltage... same... (all changes made with MPT)


Yes, that is correct. Setting the memory even 1MHz higher than 2150MHz will cause the GPU clock to lock at 500MHz; same issue here. This is the infamous AMD artificial lock on these cards, which is yet to be broken by some tech guru.


----------



## OrionBG

HeLeX63 said:


> Yes that is correct. Putting the memory even 1MHz higher than 2150MHz, will cause GPU clock to lock at 500MHz, same issue here. This is the infamous AMD artificial lock on these cards, which are yet to be broken by some tech guru.


I wonder where those limits are imposed... in the BIOS itself or the drivers. For instance, if a BIOS is modified with MPT and then flashed to the card directly, will the new data be considered the new limit, or will the card lock itself to 5xxMHz again?
Actually, are there any RX 6900 XT GPUs that have higher TDP limits than the stock ones?

Update:
I looked in TechPowerUp's VGA BIOS Collection database, and although there are very few RX 6900 XT cards, I see one strange thing:

Power Limit 
Total: 255 W 
GPU: 320 A 
SOC: 55 A

All of them have the same limits for GPU and SOC, but the Total varies depending on the card. Also, why is the sum of GPU and SOC always a lot higher than the Total value? I don't get it...
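
A note on the units, which explains the apparent mismatch: Total is board power in watts, while the GPU and SOC lines are TDC (current) limits in amperes, so the three numbers can't be summed directly. Here is a rough sketch of how a current limit translates to watts, using assumed rail voltages (~0.95V GFX, ~1.0V SOC; illustrative values, not measurements):

```python
# TDC (current) limits as listed in the BIOS, in amperes
gpu_tdc_a = 320
soc_tdc_a = 55

# Assumed load voltages, for illustration only
gfx_volts = 0.95
soc_volts = 1.00

# P = I * V: the power each current limit would permit at these voltages
gpu_watts = gpu_tdc_a * gfx_volts  # roughly 304 W
soc_watts = soc_tdc_a * soc_volts  # roughly 55 W
print(gpu_watts, soc_watts)
```

Seen this way, the 255W total board power limit is normally the one that bites first, which is why the per-rail current limits look oversized next to it.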


----------



## HeLeX63

OrionBG said:


> I wonder where those limits are imposed... In the BIOS itself or the drivers. For instance, if a BIOS is modified with MPT and then flashed to the card directly, will the new data be considered the new limit, or will the card lock itself to 5xxMHz again.
> Actually, are there any RX 6900XT GPUs that have higher TDP limits than the stock ones?
> 
> Update:
> Looked in TechPowerUp's VGA BIOS Collection database and although there are very few RX 6900XT cards, I see one strange thing:
> 
> Power Limit
> Total: 255 W
> GPU: 320 A
> SOC: 55 A
> 
> All of them have the same limits for GPU and SOC but the Total varies depending on the card. Also, why the sum of GPU and SOC is always a lot higher than the Total value? I don't get it...


Flashing is not necessary. You can unlock power easily using MPT, but that's all it can do right now. Frequency limits, although you can change them, are still hard-locked somehow.


----------



## OrionBG

HeLeX63 said:


> Flashing not necessary. You can unlock power easy using MPT, but thats all it can do right now. Frequency limits, although you can change them, are still hard locked somehow.


Which exact setting in MPT do I need to adjust for the power limit? And what are the (theoretical) maximum values to play with on the Red Devil?


----------



## WilliamLeGod

Just want your insight on this: as of now, AMD 6xxx doesn't support VRR on the LG C9 at 4K120 4:4:4 10/12-bit HDR, right?


----------



## HeLeX63

OrionBG said:


> Which is the exact setting in MPT that I need to addjust for the power limit? And what are the (theoretically) maximum values to play with on the Red Devil?


Just change GPU Power Limit (W) in MPT. Remember that if you increase it by an extra 15% in Radeon Software, that will be 15% on top of the new value. For example, mine is set to 309W, for a theoretical max of about 355W.
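
In other words, the slider multiplies whatever base you set in MPT. A quick sketch of the arithmetic (Python, illustrative values):

```python
def effective_power_limit(mpt_watts: float, slider_percent: float) -> float:
    """Radeon Software's power slider is applied on top of the MPT base limit."""
    return mpt_watts * (1 + slider_percent / 100)

# A 309W MPT limit with the slider at +15% allows roughly 355W
print(round(effective_power_limit(309, 15)))
```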


----------



## OrionBG

HeLeX63 said:


> Just change GPU Power Limit (W) in MPT. Remember that if increasing it by an extra 15% in the Radeon Software, it will be 15% additional of the new value. For example, mine is set to 309W, for a theoretical max of 355W.


I see you have the same card as me (almost... mine is not the Limited Edition). I guess you are still running it air-cooled? What OC settings have you found to work best with it?


----------



## HeLeX63

OrionBG said:


> I see you have the same card as me (almost... mine is not the Limited Edition) I guess you are still running it air-cooled? What OC settings have you found to work best with it?


Actually, I just received my Alphacool waterblock for it. I will be installing it next week. On air, with the 355W limit, I get a sustained frequency of 2650MHz to 2680MHz on the core in-game. This can drop to 2550MHz in very intense scenes.

You can visit my YouTube channel for some game tests I did, and look at frequency and power usage:


----------



## NDS322

My RX 6900 XT memory clock always sticks at 2000MHz with the AMD 21.2.1 beta driver. 

Has anyone else encountered a problem like this?


----------



## OrionBG

HeLeX63 said:


> Actually just received by Alphacool Waterblock for it. Will be installing it next week. On air, with 355W limit, I get sustained frequency at 2650MHz to 2680MHz on the core in-game. This can drop on very intense scenes to 2550MHz.
> 
> You can visit my youtube channel of some game tests I did, and look at frequency and power usage:


But... But... But... How???  How did you get that block? Alphacool says those will supposedly be available at the end of the month...
Anyway, thanks for the video, I'll check it out.
BTW, I just did some tests. With the limit set to 340W via MPT (and I also decided to be generous  and gave another 5% in the driver, for a max of 356W, at least that was the max I saw), I got interesting results, with max clock set to 2600MHz. If I run 3DMark Fire Strike, the wattage very rarely goes above 300W (some scenes in GPU Test 2, and that's it). I was even thinking that something was wrong and the values I set in MPT were not working. Then I tried FurMark... wow... 356W on the dot, with the temp going crazy in less than a minute... It seems to be working  Water it is...


----------



## OrionBG

NDS322 said:


> My RX 6900XT Memory clock always stick at 2000 MHz with AMD 21.2.1 Beta Driver
> 
> Does anyone encounter a problem like this?


Nope, mine goes to the max of 2150MHz (2138-2142MHz actual) when I select manual OC.


----------



## cg4200

I am waiting for the waterblock for my 6900 XT.
I noticed that using GPU-Z, the voltage will adjust and undervolting works fine. But if you change the core clock at all from 2509, the voltage jumps up and does what it wants. Put it back to 2509 and it can run 1mv, and the game clock stays around 2440.
Has anyone just changed the core clock to, say, 2609, flashed back, and seen if the undervolt still works? I would be curious if you can change the core and flash back without locking the card to 500MHz. Thanks


----------



## LtMatt

My best Firestrike Extreme score, 14th in the Hall of Fame atm.

5950X PBO AIO
6900 XT Merc 319 Air Cooled
21.2.1

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)


----------



## doomsdaybg

> My RX 6900XT Memory clock always stick at 2000 MHz with AMD 21.2.1 Beta Driver


Same for me. It happens only when the monitor refresh rate is set to 144Hz; if you set it to 120Hz, it will drop to idle. It's a long-standing driver bug.


----------



## LtMatt

doomsdaybg said:


> Same to me, happens only at monitor refresh rate is set to 144hz, if yo uset to 120 will drop to idle. Its a bug from drivers from long time.


It's not a bug; it's due to the vertical blanking timing of your display.

Using 60/120Hz alters the timing to fall in the sweet zone, which enables the memory downclock.

There are even some 75/144Hz displays that have a vertical blanking timing that allows the memory downclock, but they are few and far between.

Most 60/120Hz displays fall in the sweet zone, but my LG CX OLED (4K 120Hz) is not one of them, so my memory also stays at maximum clock.

I'm not worried though, as it does not impact the longevity of the GPU in any way, and zero RPM works just fine despite the slightly higher idle temps.
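
For anyone curious what the "sweet zone" actually is: the memory can only retrain to a lower clock during the vertical blanking interval, and higher refresh rates shrink that window. A rough sketch of the blanking duration, assuming generic 4K timings of 2250 total lines with 2160 active (illustrative numbers, not any specific display's EDID):

```python
def vblank_us(refresh_hz: float, total_lines: int = 2250,
              active_lines: int = 2160) -> float:
    """Duration of the vertical blanking interval, in microseconds."""
    frame_time_us = 1_000_000 / refresh_hz
    blank_lines = total_lines - active_lines
    return frame_time_us * blank_lines / total_lines

# The blanking window shrinks as the refresh rate rises (~667us at 60Hz)
for hz in (60, 120, 144):
    print(hz, round(vblank_us(hz), 1))
```

With these assumed timings, the window at 144Hz is well under half of the 60Hz one, which is consistent with the memory refusing to downclock on many 144Hz modes.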


----------



## WilliamLeGod

Anyone know if the 6900 XT works on the LG C9 with VRR support at 4K120 4:4:4 10/12-bit HDR on?


----------



## LtMatt

WilliamLeGod said:


> Any1 knows if 6900xt works on Lg C9 with Vrr support at 4k120 4:4:4 10/12b hdr on?


Not sure about the C9, but the CX series works just fine at 4K120 4:4:4 10 bit HDR with FreeSync support.


----------



## ZealotKi11er

WilliamLeGod said:


> Any1 knows if 6900xt works on Lg C9 with Vrr support at 4k120 4:4:4 10/12b hdr on?


By default no. C9 does not support FreeSync. You have to hack it.


----------



## HyperC

I got the block for my Devil today; just need my card now. Looking like Monday or Tuesday. 

@LtMatt is that PC outside at 47C? Air cooled?


----------



## newls1

HyperC said:


> I got my block for my devil today just need my card now looking like monday or tuesday
> 
> @LtMatt is the pc outside 47c? air cooled?


I'm damn jealous! Us Asus TUF card owners have to wait about another month. These cards should reallllllly take off on water.


----------



## HeLeX63

OrionBG said:


> But... But... But... How???  How did you get that block? Alphacool says those will be available at the end of the month supposedly...
> Anyway, Thanks for the video, I'll check it out.
> BTW I just did some tests. With the limit set to 340W by MPT and I also decided to be generous  and gave another 5% in the driver for a max of 356W (at least this was the max I saw), I got interesting results. Also Max clock to 2600MHz. If I run 3D Mark Firestrike, the wattage will very rarely go above 300W, Some scenes in GPU Test 2, and that's it. I was even thinking that something is wrong and the values I set in MPT are not working. Then I tried Furmark... wow... 356W on the dot with temp going crazy for less than a minute... It seems to be working  Water it is...


I ordered a very long time ago, in January actually. Lucky to secure one of the early batches.

I would honestly lower the values in MPT. You set it to 340W; if you accidentally load a profile in Radeon allowing 15% higher, that is 391W, which is insanely high. Just as a security measure, I'd lower the MPT value and then let Radeon Software set your end target, just in case you add too much power by accident and blow a capacitor.


----------



## OrionBG

HeLeX63 said:


> I ordered a very long time ago, in January actually. Lucky to secure one of the early batches.
> 
> I would honestly lower the values in MPT. You set it to 340W. If you accidentally load a profile in Radeon allowing 15% higher, that is 391W which is insanely high. Just a security measure, but id lower the value, and then allow Radeon software to secure your end target goal, just incase you add too much power by accident and blow a capacitor.


Aren't those Red Devil cards supposed to be built for overclocking? My old 1080 Ti cards (reference design) were hitting 650W (per card, 2 in SLI) with a special BIOS. My 1200W PSU was shutting down during 3DMark.  
These 3x 8-pin power connector cards should be good for at least 500W with proper cooling...


----------



## HeLeX63

OrionBG said:


> Aren't those Red Devil cards supposed to be built for overclocking? My old 1080Ti cards (reference design) were hitting 650W (per card, 2 in SLI) with a special BIOS. My 1200W PSU was shutting down during 3DMark
> Those 3*8pin power connector cards should be good for at least 500W with proper cooling...


Correct, with proper cooling, yes. And by proper, I'd say watercooling. I don't go above 350W on air. I'll install my Alphacool WB today and see what difference it makes to temps and clocks.


----------



## OrionBG

HeLeX63 said:


> Correct, with proper cooling yes. And proper, Id say watercooling. I dont go above 350W on air. Ill install my Alphacool WB today, and see what difference it makes to temps and clocks.


Eagerly waiting for results!


----------



## WilliamLeGod

ZealotKi11er said:


> By default no. C9 does not support FreeSync. You have to hack it.


How can you hack it? Keep in mind I mentioned 4K120 4:4:4 10-bit HDR on, so CRU doesn't work.


----------



## HeLeX63

OrionBG said:


> Eagerly waiting for results!


Done and installed. I have a sustained frequency of 2740MHz. Power is unlocked to 360W, GPU temps hover between 42 and 48C, and junction temp sits at around 60 to 70C at face value. I have managed around 2780MHz max core frequency in Radeon Software. I notice a bit of artifacting at 2800MHz, with sustained frequency at around 2750MHz, but I still need to do more testing. Sustained frequencies hover between 2700MHz and 2740MHz. These cards still get very warm, even with a 480 and a 560 rad, when pushed upwards of 360W.


----------



## HyperC

Are you guys just changing the watts with MPT?


----------



## bluezone

WilliamLeGod said:


> Any1 knows if 6900xt works on Lg C9 with Vrr support at 4k120 4:4:4 10/12b hdr on?


I've got my C9 working with CRU. Mind you, I found the key to making it work was 4K 119Hz 4:4:4 10-bit. I just set the variable refresh rate range to 40 to 119; 120Hz ends up being a black screen.


----------



## LtMatt

HyperC said:


> are you guys just changing the watts? with MPT


I change watts and max SOC voltage to 1.018V, which results in 1.000V under load. I generally don't bother touching TDC unless I am going for best 3DMark scores.


----------



## WilliamLeGod

bluezone said:


> I've got my C9 working with CRU. Mind you I found the key to make it work was for 4K 119 4:4:4 10-bit. I just set the variable refresh rate to 40 to 119. 120hz ends up being a black screen.


HDR is on also right?


----------



## newls1

LtMatt said:


> I change Watts and Max SOC voltage to 1.018v which results in 1.000v under load. I generally don't bother touching TDC unless I am going for 3DMark best scores.


Could you post a pic of your MPT settings?


----------



## NeonFlak

NM, my MPT was corrupt. Had to redownload.


----------



## LtMatt

newls1 said:


> could you post a pic of your mpt settings?


All default, except I use a 300W power limit in MPT and max SOC voltage at 1.018V. Everything else is at 6900 XT default. These are my 24/7 settings. For benching I put it up to 400W. I don't touch the power limit in Radeon Software.


----------



## HyperC

HeLeX63 said:


> Done and installed. I have a sustained frequency of 2740MHz. Power is unlocked to 360W, GPU temps hover between 42 to 48C and Junction temp at around 60 to 70C at face value. I have managed around 2780MHz max core frequency in Radeon software. I notice a bit of artifacting at 2800MHz, with sustained frequency at around 2750MHz. But will still need to do more testing. Sustained frequencies hover between 2700Mhz to 2740MHz. These cards still get very warm, even with a 480 and 560 rad when pushed upwards of 360W.


I just noticed our PCs are almost the same. I do have 3x 480s and 1x 280; not sure it will help much in temps, but it should be fun to compare.


----------



## OrionBG

HeLeX63 said:


> Done and installed. I have a sustained frequency of 2740MHz. Power is unlocked to 360W, GPU temps hover between 42 to 48C and Junction temp at around 60 to 70C at face value. I have managed around 2780MHz max core frequency in Radeon software. I notice a bit of artifacting at 2800MHz, with sustained frequency at around 2750MHz. But will still need to do more testing. Sustained frequencies hover between 2700Mhz to 2740MHz. These cards still get very warm, even with a 480 and 560 rad when pushed upwards of 360W.


Thanks for the info!
I have one 480 and 2x 360, all 60mm-thick radiators (in push/pull), only for the GPU loop, so I don't think I'll have any issues dissipating the heat.


----------



## HyperC

OrionBG said:


> Thanks for the info!
> I do have one 480 and 2x 360 all 60mm thick radiators (in push/pull) only for the GPU loop so I don't think I'll have any issues dissipating the heat


LMAO, what case do you have, and what is the loop size for the CPU? My Core X9 is maxed out and doesn't play well with push/pull or thick rads top-mounted.


----------



## HeLeX63

HyperC said:


> I just noticed our PC's are almost the same, I do have 3x480s and 1x280 not sure it will help much in temps but should be fun to compare


I'm actually still running an 8700K on a Z370 board; I am awaiting a 5900X. I have the motherboard and monoblock ready to go once my pre-order fills in a few months.

Even a 560 and a 480 rad is overkill. You will see that you hit limits with the heat transfer from the GPU to the block's copper plate. If I push my card to 400W, I see GPU temps easily hit 50C (depending on ambient, of course). Remember the diminishing returns.
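
The diminishing returns can be sketched with a simple series thermal-resistance model: die temperature is ambient plus power times the sum of the block's die-to-water resistance and the loop's water-to-air resistance. Extra radiator area only shrinks the second term (all resistance figures below are made up for illustration):

```python
def die_temp_c(power_w: float, ambient_c: float,
               r_block_c_per_w: float, r_loop_c_per_w: float) -> float:
    """Steady-state die temperature for a series thermal-resistance model."""
    return ambient_c + power_w * (r_block_c_per_w + r_loop_c_per_w)

# Doubling the radiators roughly halves the loop resistance, but the
# fixed die-to-block term dominates, so the gain is small.
print(die_temp_c(400, 25, r_block_c_per_w=0.05, r_loop_c_per_w=0.02))
print(die_temp_c(400, 25, r_block_c_per_w=0.05, r_loop_c_per_w=0.01))
```

At 400W, halving the loop resistance in this toy model only buys about 4C, because most of the temperature rise happens between the die and the water.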


----------



## HeLeX63

OrionBG said:


> Thanks for the info!
> I do have one 480 and 2x 360 all 60mm thick radiators (in push/pull) only for the GPU loop so I don't think I'll have any issues dissipating the heat


Whoa, that is overkill. You don't have a CPU block or monoblock at least? I'd be getting one if I were you; may as well use some of that radiator space and put it to good use.


----------



## OrionBG

HyperC said:


> LMAO what case do you have and what is the loop size for the cpu? my corex9 is maxed out and doesn't play well with push/pull or thick rads top mounted


My case is the Raijintek Eris EVO. There are currently 3x 480 and 2x 360, all 60mm-thick rads, and all of them are in push/pull... I have a total of 38 12cm fans in this monstrosity...



HeLeX63 said:


> Whoa that is overkill, you dont have a CPU or Monoblock at least ? Id be getting one if I were you, may as well use some of that radiator space and put it to good use


The GPU loop is a leftover from my previous two RTX 2080 Ti GPUs. The Threadripper 3960X is cooled by 2x 480 rads on a separate loop.


----------



## HeLeX63

OrionBG said:


> My case is the Raijintek Eris EVO. There are currently 3x 480 and 2x 360 all 60mm thick rads and all of them are in push/pull... I have a total of 38 12cm fans in this monstrosity...
> 
> 
> The GPU loop is a leftover from my previous two RTX 2080Ti GPUs. The Threadripper 3960x is cooled by 2x 480 rads on a separate loop.


You can fit that many rads, and in push/pull? Got any photos of your build? I'm curious how it looks.


----------



## OrionBG

HeLeX63 said:


> You can fit that many rads and in push pull ? Got any photos of your build, im curious how it looks


It is quite a mess right now due to the removal of the two 2080 Ti-s, but I'll fix it when I get the GPU block. One 480 top, one right, one left, and the 2x 360 are on the back. Unfortunately, it is about 70 kg and I can't easily move it to take a photo of them. 

BTW, if you are wondering why the GPU is not inside the case... it is too long to fit with the air cooler...


----------



## HyperC

HeLeX63 said:


> I actually still am running a [email protected] and Z370 board, I am awaiting 5900X. Have motherboard, monoblock ready to go once my pre order fills in a few months.
> 
> Even a 560 and 480 rad is overkill. You will see that you hit limits with the transfer of the copper plate to the gpu block. If I push my card to 400W, I would see GPU temps easily hit 50C (depending on ambient of course). Remember the diminishing returns.


 Jesus, even worse, I had my 8700K on a Gaming 7 before this.


OrionBG said:


> It is quite a mess right not due to the removal of the 2 2080Ti-s but I'll fix it when I get the GPU block. One 480 top, one right, one left, and the 2x 360 are on the back. Unfortunately, it is about 70 kg and I can't easily move it to make a photo of them.
> View attachment 2477807
> 
> BTW, if you are wondering why the GPU is not inside the case... It is too long to fit with the air cooler...


 That looks sexy, I'd do it.


----------



## coelacanth

Got 120Hz 10-bit 4:4:4 working on the LG CX. I had to first update the firmware, then set the input to PC mode.

The advanced picture settings on the TV were greyed out until I enabled HDMI Link Assurance in the Radeon driver. That took a while to figure out.


----------



## chispy

I have heard there was already some software able to flash the BIOS for 6xxx series GPUs in Windows. Does anyone know if anyone has been able to modify their own BIOS and flash it back? I know the guys at Igor's Lab were testing some stuff, but I have not been following the work in progress.


----------



## ZealotKi11er

chispy said:


> I have heard there was some software able to flash bios already for 6xxx series gpus in windows. Does anyone knows if anyone has been able to modify its own bios and flash it back ? I know the guys at igor's lab were testing some stuff but i have not been following the work in progress.


What are you trying to gain with a vBIOS flash / modified vBIOS?


----------



## HyperC

Daddy is home and ready for surgery. First stock run; still a little air in the loop.


----------



## SLAY3R8888

Hey guys, new member here. Anytime I'm overclocking anything on my PC I usually end up in here reading discussions and learning. I finally made an account because I have a few questions that I can't find specific answers to anywhere. An overview of my card and settings, and then I will get to my point.

I have an XFX Speedster Merc 319 6900 XT. I have been overclocking and gaming with it since the day I bought it, roughly a month ago. I run it on air, in a very good flowing case with a mesh front and 7 case fans total. This version of the 6900 XT cools very, very well on air, and I have been running it with a fan curve that ramps up to 100% at 65c core temp in the Adrenalin software. The card is always below 60c core temp and 80c junction when running benchmarks such as TimeSpy, or when playing various games at 1440p Ultra. In most gaming scenarios my temps are quite low, around 45c core and 55c junction. This is with the following overclock settings, which I have been gaming on for a few weeks: 2500 - 2600MHz (min - max), 1163mV (set with MorePowerTool), 2150MHz VRAM w/ fast timings, and 330w +15% = 380w power limit. This has been completely 100% stable for gaming, and mostly stable for TimeSpy -- what I mean by mostly is that if I run TimeSpy 5 times in a row, there's a chance that once or twice it will stop the test and say "an error occurred", and sometimes I will get a crash report from the Adrenalin software. The other times it will successfully complete the test, usually giving me around a 20,500 graphics score. The big balancing act I have found with this card is trying to set the power limit so that I am not bouncing off of it. TimeSpy seems to stress this card as much as or more than any gaming scenario, so I have been using this benchmark for most of my OC tinkering.

This card seems to want more and more power -- I have tried undervolting it like others have, and it becomes unstable in TimeSpy pretty quickly when I bring the voltage down to around 1100 - 1130mV, requiring me to drop the clock speeds lower. The temps haven't been even remotely high, so I started raising the power limit. I have seen people in here say that you probably don't want to give the card more than 350w on air, but my question is: why exactly? If the temps are under control and not even getting into the 80s for junction, is there a reason why I can't be pushing the envelope on wattage? Are capacitors more resistant to blowing when the card is at, say, 55c junction vs 75 - 80c junction? My temps on air are not getting remotely close to thermal throttling. This card has a 14+2 phase power design, but I just don't know how much further I can reasonably push the power limit without begging to blow a capacitor. I suppose none of us do, because this card hasn't been out for long. But I am asking for your opinions on this. Thermally, this card could easily handle more power than the 380w power limit I am already at, and always stay below 85c junction. What are your thoughts on the max power limit I should be trying to run, and why? The reason I am using the power limit I am at is that at 1163mV and the 2500 - 2600MHz range, my card is already hitting the 380w power limit in portions of TimeSpy. I am tempted to raise it further than 330 + 15%. The low junction temps are practically begging me to do this. I can attempt undervolting some more and bring the power limit back down to 350 or 360 total, if you guys think I am an insane person for having 380w as my power limit. However, in my previous attempts to undervolt, it only led to me having to drop the clock speeds to get the card "TimeSpy stable" at said lower voltage.

P.S. -- I saw LTMatt saying he limits his SOC voltage to 1.018v. Can someone tell me if I should be doing this, and what is the point of doing it? 

Any other ideas? Should I stop trying to push for game clocks higher than ~ 2550MHz and just drop it down by 50-100 MHz, drop the power limit to 350w total, drop the voltage to ~ 1140mV and call it a day? 

The card is behaving like it wants to go higher, thermally it wants more power. I am conflicted about the situation and I need your opinions.

Thanks so much for reading my long post, and I look forward to hearing suggestions from you nerds.
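Side note for anyone checking the math: the power-limit figures quoted above are a base wattage plus the percentage slider on top. A minimal sketch of that arithmetic (the 330 W base and +15% slider are the numbers from this post; the function name is just for illustration):

```python
# Effective board power limit: the MPT base wattage plus the Adrenalin
# percentage slider applied on top of it.
def effective_power_limit(base_watts: float, slider_percent: float) -> float:
    """Return the total power limit after applying the +% slider."""
    return base_watts * (1 + slider_percent / 100)

# The figures from the post: 330 W base with the slider maxed at +15%.
print(f"{effective_power_limit(330, 15):.1f} W")  # 379.5 W, i.e. the ~380 W quoted
```

The same arithmetic gives the 348w + 15% ≈ 400w limit tried later in the thread.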


----------



## HyperC

Hmm, I don't know what I am doing wrong with MPT, but the wattage does not change. I saved the BIOS from GPU-Z, then loaded it with MPT, changed the wattage and TDC, and clicked write; still stock 281 watts... Also, what is the minimum GPU clock slider for videos?


----------



## OrionBG

HyperC said:


> Hmm I don't know what I am doing wrong with MPT but wattage does not change . I saved the bios from gpuz then loaded it with MPT changed the wattage and tdc and clicked write still stock 281 watts.. Also what is the minimum gpu clock slider for videos?


Did you reboot? You need to reboot every time you write something with MPT.


----------



## OrionBG

SLAY3R8888 said:


> Hey guys, new member here. Anytime I'm overclocking anything on my PC I usually end up in here reading discussions and learning...
> ...


According to this video from Actually Hardcore Overclocking, the VRM is barely working at 300 A and 1.2 V. The video is for the RX 6800 XT Merc 319, but I'm 99% sure the PCB is the same and the VRM is at least the same spec. That means a 14-phase VRM at 70 A per phase... P = U × I.
Basically, the max theoretical power that this VRM can provide is 14 × 70 A × 1.2 V = 1176 W. That is the max that can go through the VRM... theoretically (according to the VRM specifications).
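To sanity-check that figure, the arithmetic can be written out (a back-of-the-envelope sketch only; the 14 phases, 70 A power stages, and 1.2 V are the rated numbers from the video, not measured values):

```python
# Theoretical ceiling of the Merc 319 core VRM, from the rated specs:
# 14 phases x 70 A per power stage, at roughly 1.2 V core voltage.
phases = 14
amps_per_phase = 70          # A, rated current per power stage
core_voltage = 1.2           # V

total_current = phases * amps_per_phase       # 980 A combined
max_power = total_current * core_voltage      # P = U * I

print(f"{total_current} A -> {max_power:.0f} W theoretical limit")
```

In practice thermals and the driver power limit cap the card far below that ceiling, which is why the VRM looks so under-stressed at ~300 A.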


----------



## HyperC

OrionBG said:


> Did you reboot? You need to reboot every time you write something with MPT.


Nope, didn't know that part lol, thanks, I'll check in a couple hours. I love AMD, rebooting is so much fun... Oh, on first power up I tried to enable SAM and it seems to have disabled CSM. I rebooted back into the BIOS, enabled CSM, saved, and then black screen?


----------



## dagget3450

OrionBG said:


> It is quite a mess right now due to the removal of the 2 2080Ti-s but I'll fix it when I get the GPU block. One 480 top, one right, one left, and the 2x 360 are on the back. Unfortunately, it is about 70 kg and I can't easily move it to make a photo of them.
> View attachment 2477807
> 
> BTW, if you are wondering why the GPU is not inside the case... It is too long to fit with the air cooler...


Hey, I have to ask. The PCIe extender cables I see everywhere are PCIe 3.0. The GPU is 4.0, so with Ryzen 3xxx/5xxx you're looking at a 4.0 PCIe slot to the card. Do you know for a fact the PCIe extender will work at 4.0 speeds?

Genuinely curious, as I'd like to do a custom build using a PCIe extender cable (riser), but the last time I tried was many years back and they were buggy.


----------



## OrionBG

dagget3450 said:


> Hey, i have to ask. The PCIE extender cables i see anywhere are PCIE 3.0, The gpu is 4.0 so with ryzen 3xxx/5xxx your looking at 4.0 pcie slot to card. Do you know for a fact the PCIE extender will work @ 4.0 speeds?
> 
> Genuinely curious as i'd like to do a custom build using a pcie extender cable(riser) but last time i tried was many years back and they were buggy.


I was also quite surprised that this old riser I had from a 3-year-old Lian Li PC-O11 WRX (the model before the Dynamic) works perfectly at PCIe 4.0.
GPU-Z is reporting PCIe 4.0 at 16x and no issues at all.


----------



## OrionBG

HyperC said:


> Nope didn't know that part lol, thanks ill check in a couple hours, I love amd and rebooting so much fun.. oh first power up I tried to enable sam and seems to disable csm rebooted back into bios enabled csm saved then black screen ?


For SAM to work, you need to disable CSM! This is actually the first thing I do on a new system: disable CSM... UEFI for the win.


----------



## chispy

ZealotKi11er said:


> What are you trying to gain with vBIOS flash/ modified vBIOS?


Why do you always ask so many questions instead of trying to help??? The last 2 or 3 questions you asked, I answered for you, and you never even said thank you ...

Anyways, if anyone knows how to do what I asked, and whether it is possible, I will be grateful.

I'll be nice to you today:

@ZealotKi11er - your answer is here = AMD RX 6900xt / 6800xt / 6800 Overclocking tweaks, tricks and mods. (Yes, I will be running this card on liquid nitrogen once I find out how to mod the BIOS. I already did hardware volt mods with an EVC2 and ran sub-zero on a water chiller @ -21c and a single-stage phase-change unit with dual compressors @ -52c for pre-testing. Obviously, the next test will be mounting my GPU LN2 pot on it and running LN2 for competitive benchmarking at HWBOT for my team, Overclock.net - yes, this forum, my house.)


----------



## HyperC

Starting to get the hang of the AMD drivers now; it all looked like Chinese to me at first. Never knew AMD likes doing green screens. Do I need to up my power limit more, or stop testing at the 2800MHz slider? It seems to go green at 2640MHz in TimeSpy.


----------



## ZealotKi11er

chispy said:


> Why do you always ask so many questions instead of trying to help ??? The last 2 or 3 questions you asked i did answer for you and you never even say thank you  ...
> 
> anyways , if anyone knows how to do what i ask and if possible i will be grateful.
> 
> 
> I'll be nice to you today:
> 
> @ZealotKi11er - your answer is here = AMD RX 6900xt / 6800xt / 6800 Overclocking tweaks , tricks and mods. ( yes , i will be running this card on liquid Nitrogen once i find out how to mod the bios , i already did hardware volt mods with evc2 and ran sub-sero on water chiller @ -21c and single Stage Phase dual compressors @ -52c , for pre-testing , obviously next test will be mounting my gpu Ln2 pot on it and run LN2 to it for competitive benchmarking at hwbot for my Team Overclock.net , yes this forum , my house. )


I have been told VBIOS changes don't unlock anything for RX6xxx.


----------



## ZealotKi11er

HyperC said:


> Starting to get the hang of AMD drivers now it all looked chinese to me, never knew AMD likes doing green screens  do i need to up my power limit more or stop testing at 2800mhz slider? seems to go green at 2640mhz timespy


The green screen usually happens when the FCLK goes bad.


----------



## HyperC

So raise the SOC from 55 A to 60?


----------



## ZealotKi11er

HyperC said:


> so raise the SOC from 55a to 60?


I set mine to 75. What display do you have? If it's 1080p and you still get a green screen, it's probably something else.


----------



## HyperC

Atm I have two: the main is 1440p 165Hz, the other 1080p 144Hz. Can the voltages be edited? BTW, how did you come across this issue? My google-fu must be broken.


----------



## ZealotKi11er

HyperC said:


> atm I have 2 main is 1440p 165hz other 1080p 144hz, can the voltages be edited ? btw how did you come across this issue my google-fu must be broken


Does this happen at stock settings?


----------



## HyperC

Before using MPT I only ran it once and never messed with the sliders at all. Tried 75 SOC, crashed; upped the wattage, no crash but a lower score.


----------



## SLAY3R8888

OrionBG said:


> According to this video from Actually Hardcore Overclocking, the VRM is doing almost nothing at 300A at 1.2V. The video is for the RX 6800XT Merc 319 but I'm 99% sure the PCB is the same and the VRM is at least the same spec. That means 14 phase VRM at 70A each phase... P=U.I
> Basically, the max theoretical power that this VRM can provide (14x70A).1,2V = 1176W that is the max that can go through the VRM... Theoretically (according to the VRM specifications)


I watched this video when he made it. Thanks for pointing that out for me. In your opinion, does this theoretically mean that is the point at which the VRMs would be ruined? Obviously it would be on fire way before then with my thermal solution, but yeah 🤣. So is it reasonable to say that I could attempt pushing the power limit up a bit more as long as thermals aren't getting close to around 100c junction? I know overclocking involves some guessing and risk, just wondering if you would say this is a reasonable rule of thumb to be following. My guess is that I can go up to roughly 400 - 420w total power limit, at which point I would be using the max voltage of 1175mV to drive clock speeds up with stability, and thermals would get warm enough for me to want to quit going further. I'm just waiting for the card to give me a sign that it's had enough, and that hasn't happened yet.


----------



## HyperC

Okay, getting better. My card seems to want more wattage, only soft crashing on the 2nd TimeSpy test. Guess I'll pump in 380 watts.


----------



## SLAY3R8888

HyperC said:


> Nope didn't know that part lol, thanks ill check in a couple hours, I love amd and rebooting so much fun.. oh first power up I tried to enable sam and seems to disable csm rebooted back into bios enabled csm saved then black screen ?


I can't help with your issues enabling SAM, as I followed a guide I found online and didn't have any issues. But I wanted to tell you an easy way to validate your changes in MPT to make sure they applied, or to reference them later. Just save your BIOS from GPU-Z again after doing "Write SPPT" and rebooting the PC, and name it something different than your stock BIOS. Then you can load that one in MPT to see / change your current settings, instead of starting over with loading the stock BIOS every time. Idk how this works exactly, because it's just a registry mod, but I think pulling the BIOS again after you do a "Delete SPPT" will show the settings back at the stock configuration.


----------



## SLAY3R8888

HyperC said:


> OKay getting better my card seems to want more wattage only soft crashing 2nd test timespy guess ill pump in 380 watts


That's where I'm at now, 380w, but it still hits the limit sometimes at 1162mV. I'm stuck at 2500 - 2600MHz for now, but thermally it wants more power, I think. I took a break from OC'ing to game on it for a few weeks; about to push it more.


----------



## HyperC

SLAY3R8888 said:


> I can't help with your issues enabling SAM as I followed a guide I found online and didn't have any issue. But I wanted to tell you an easy way to validate your changes in MPT to make sure they applied, or to reference them later. Just save your Bios from GPU-Z again after doing "Write SPPT" and rebooting PC, and name it something different than your stock Bios. Then you can load that one in MPT to see / change your current settings, instead of starting over with loading the stock Bios every time. Idk how this works exactly because it's just a registry mod, but I think pulling the Bios again after you do a "Delete SPPT" will show the settings back at stock configuration.


I got SAM working; for me, I had to convert my Windows drive to GPT. Try this if you haven't yet: "Still using BIOS? It's time to switch to UEFI — here's how on Windows 10".

TimeSpy test 2 doesn't seem to like anything over 2650; guess I need more vcore for that... On the first test I can probably hit around 2750 - 2800... So why does AMD force a 100MHz min-to-max gap, and is there a way around it?


----------



## SLAY3R8888

HyperC said:


> I got SAM working for me had to convert my windows drive to GPT try this if you haven't yet Still using BIOS? It's time to switch to UEFI — here's how on Windows 10.
> 
> Timespy test 2 doesn't like seems anything over 2650 guess i need more vcore for that...first test I can probably hit around 2750 2800  ..... So why does amd force 100mhz min to max is there away around it?


Thanks for the info, but my drive is already GPT, which explains why I had no issue enabling SAM to begin with.

The 100MHz clock spread is a new feature for these GPUs, I think; I didn't have the 100MHz spread with my 5700 XT. As far as I know there is no way around it. It's meant to give the card a clock range it can move around within to "help with stability". In my opinion, it sets the target frequency at the halfway point between the min and max just like before, but the set spread gives the consumer a more realistic expectation of what clock speeds to expect. Because even with my 5700 XT, it would never hold the set clock speed; it would typically sit right around 50MHz below it.

Right now I'm at 2500 - 2600MHz, 1163mV, 380w total power limit, and it will hold basically around ~2550 for the whole test, with roughly a ~20,500 graphics score. If I ever get a soft TimeSpy crash, aka "an error occurred", and possibly a driver timeout message from the Adrenalin software, it usually happens during test 2. In games like Battlefield 5 or COD at 1440p Ultra, the clock speed will either hold steady above 2500MHz, or it will sometimes drop down to 2480 - 2490 (I noticed this in BF5). I am honestly not sure if this is due to the scene not loading up the card all the way, or because it's too graphically intensive and the card feels like it needs to downclock. I think it has to do with the game engine not getting full utilization from the GPU, but that's just a guess. I don't think it HAS to stay within that clock range if the card is not getting enough demand placed on it, but that's just another theory of mine based on my experience with AMD cards and the Adrenalin software.

I have also noticed that the card never uses even close to as much wattage during any gaming scenario as compared to TimeSpy. So my current theory is that, if you make the OC "TimeSpy stable", it's most likely gaming stable for sure.

I am also using the Steam version of 3DMark TimeSpy, I have read that the benchmark is more stable if you use the standalone version of 3DMark that's not connected to Steam. I was having a hard time finding a proper download for the standalone version, so I hadn't tried that yet. This could provide some more OC headroom potentially in TimeSpy.

I think I have pretty much reached the limit of my card, but next I plan to attempt raising the power limit to 345w + 15% (total 397w) and see if it gives me any more stability at 1163mV with higher clock speeds. If not, then I will give it the full 1175mV and see what that does. I would be happy if I could get it to 2550 - 2650, or 2600 - 2700, and be able to make it through a TimeSpy run without issues. If I can make this happen, then we will see if it even improves my graphics score any. If not, then instability is indicated and I will drop back down to my current 2500 - 2600 setting for gaming. The card is a beast regardless, and it is completely destroying every title it touches at 1440p Ultra in my experience.
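If the halfway-point theory above holds, the numbers line up like this (purely an illustration of that guess, not confirmed driver behaviour; the function name is hypothetical):

```python
# Illustration of the theory above: the driver aims for roughly the
# midpoint of the set min/max clock range, and observed clocks sit near it.
def theorized_target_mhz(min_mhz: int, max_mhz: int) -> float:
    """Midpoint of the set clock range, per the guess in the post."""
    return (min_mhz + max_mhz) / 2

print(theorized_target_mhz(2500, 2600))  # 2550.0, matching the ~2550 MHz held in TimeSpy
```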


----------



## SLAY3R8888

OrionBG said:


> According to this video from Actually Hardcore Overclocking, the VRM is doing almost nothing at 300A at 1.2V. The video is for the RX 6800XT Merc 319 but I'm 99% sure the PCB is the same and the VRM is at least the same spec. That means 14 phase VRM at 70A each phase... P=U.I
> Basically, the max theoretical power that this VRM can provide (14x70A).1,2V = 1176W that is the max that can go through the VRM... Theoretically (according to the VRM specifications)



How did you get the drop down arrow below your posts with your PC Specs? The "Rigs" section is limited to 100 characters so I know that can't be it, and I tried putting mine in the "About Me" section but that doesn't seem to add the drop down below my name.


----------



## LtMatt

SLAY3R8888 said:


> Thanks for the info, but my drive is already GPT, which explains why I had no issue enabling SAM to begin with.
> 
> The 100mhz clock spread is a new feature for these GPU's I think, I didn't have the 100mhz spread with my 5700xt. As far as I know there is no way around it, it's meant to help give the card a clock spread it can move around within to "help with stability". In my opinion, I think it sets the target frequency at the halfway point between the min and max just like before, but the "set spread" gives the consumer a more realistic expectation of what clock speeds to expect. Because even with my 5700xt, it would never hold the set clock speed, it would be right around 50mhz below it typically.
> 
> Right now I'm at 2500 - 2600mhz, 1163mV, 380w total power limit, and it will hold basically around ~ 2550 for the whole test. ~20,500 graphics score roughly. If I ever get a soft Timespy crash aka "an error occured", and possibly a driver timeout message from Adrenaline software, it usually will happen during test 2. In games like Battlefield 5 or COD at 1440p Ultra, the clock speed will either hold steady above 2500mhz, or it will sometimes drop down to 2480 - 2490 (I noticed this in BF5). I am honestly not sure if this is due to the scene not loading up the card all the way, or if it's because it's too graphically intensive and the card feels like it needs to downclock. I think it has to do with the game engine not getting full utilization from the GPU, but that's just a guess. I don't think it HAS to stay within that clock range if the card is not getting enough demand placed on it, but that's just another theory of mine based on my experience with AMD cards and Adrenaline software.
> 
> I have also noticed that the card never uses even close to as much wattage during any gaming scenario as compared to TimeSpy. So my current theory is that, if you make the OC "TimeSpy stable", it's most likely gaming stable for sure.
> 
> I am also using the Steam version of 3DMark TimeSpy, I have read that the benchmark is more stable if you use the standalone version of 3DMark that's not connected to Steam. I was having a hard time finding a proper download for the standalone version, so I hadn't tried that yet. This could provide some more OC headroom potentially in TimeSpy.
> 
> I think I have pretty much reached the limit of my card, but next I plan to attempt raising the power limit to 345w + 15% (total 397w), and see if it gives me any more stability at 1163mV with higher clock speeds. If not then I will give it the full 1175mv and see what that does. I would be happy if I could get it to 2550 - 2650, or 2600 - 2700, and be able to make it through a TimeSpy run without issues. If I can make this happen then we will see if it even improves my graphics score any. If not, then instability is indicated and I will drop back down to my current 2500 - 2600 setting for gaming. The card is a beast regardless and it is completely destroying every title it touches in 1440p Ultra from my experience


Great post and agree.


----------



## SLAY3R8888

LtMatt said:


> Great post and agree.


LtMatt --- thanks... This is a funny coincidence, because I was doing some research and I am actually reading a post of yours in another forum as we speak (I assume you are the same person, lol). It was from a month ago; I saw a profile where you had lowered the SOC and raised the TDC. I have only been playing with Power Limit GPU (W) and maximum voltage. Has your position on these settings changed any? Do I need to be lowering SOC and raising TDC? If you don't mind explaining why I should do this, I would appreciate it. My initial reaction is that lowering the SOC would just open the window for more instability. Is there a problem with its stock value?

I saw your post in this forum that you are now using 1018mV for SOC.

To be honest I don't even know exactly what the SOC value does for a GPU, but I assume it has to do with memory stability.

As far as TDC goes, I am not sure if I need to be raising it. I am at a 380w total power limit and 1163mV atm, and the card will not make it through a TimeSpy run without throwing an error if I bump the frequency up from 2500 - 2600 to 2550 - 2650. It does the benchmark just fine at the lower of those two settings. I was about to rewrite the SPPT with a 400w total power limit and try raising to the full voltage of 1175mV, but I wonder if I need to be considering changing the SOC and TDC values from stock to something else. I will use these higher settings for gaming if they result in higher scores / performance and the thermals are reasonable. Your input / suggestions on this would be much appreciated. Thank you.

Update - I raised my power limit to 348w + 15% = 400w. I raised the voltage to 1175mV. Tried running TimeSpy at 2550 - 2650 MHz and it gave me "an error occurred" halfway through the test. Thermals were fine; the highest I ever saw the junction temperature get was 81c for a brief moment. Other than that instance, the junction temp was in the mid to low 70's for most of the test. I saw the card reach 398w for a brief moment, but for most of the test it was below the 400w limit and not hitting it. The card held right at 2600 MHz the whole time until the error happened.

It seems like my chip just doesn't want to be overclocked past the 2500 - 2600 set range, unless you guys have any other ideas for me to try. I haven't messed with TDC yet because I don't understand how it applies to GPU's. For example, with Ryzen CPU's it is just a factory rating of how many amps it can be supplied, but a manual overclock just surpasses the limit so it doesn't matter. I also haven't messed with SOC voltage as I don't see how lowering it would help stability any. I am open to adjusting either of these if anyone can explain why.

I will go back to my old stable OC of 380w total power limit, 1163mV, and 2500 - 2600 MHz for now.

I'm not really disappointed by any means, the card is performing great. Just was wondering if it could be pushed any higher, as the temps are super low and reasonable. I'm used to pushing hardware until it's near being thermally limited. If not then that's fine. If I hear any ideas from you guys then I will give them a shot. Should I leave it alone or try something else?

Another update - For this OC session I ended up settling with 2500 - 2600 MHz, 1175mV, 380w total power limit, and 2150MHz Vram with fast timings. With the higher voltage, the card does hit the 380w power limit a few times during the run, but these settings seemed to provide the best stability and most consistent TimeSpy results, without having to consume 400w to be stable. I usually get somewhere around 20,500 graphics score, give or take a hundred. Temps are super cool, around 70 - 80c junction, with an aggressive fan curve that hits 100% at 55c. The Merc319 really has quiet fans even when they are running at full blast. I can get the card to run at a higher frequency, but the extra power required and inconsistent stability wasn't worth the performance increase, which was within margin of error anyway. I think the most impressive thing about this XFX card is how overboard the air cooling solution is. It could handle so much more heat than it actually needs to. I guess it should be that way, it is a HUGE card!

It's easy to get trapped into seeing other people's clock speeds online and thinking your card should be able to hit them. Every chip is different, and it seems I got an average one at best. Either way it's performing great, and it is running 200 MHz above its rated boost clock of 2340. It will be interesting to see how performance changes as AMD ages these cards like a fine wine - they always do! If anyone has any other suggestions of tweaks or MPT settings to try, I'm all ears. Until then, I'm leaving it like this.

Attached is a screenshot of my last TimeSpy result for today, and a couple pictures of my PC.


----------



## HyperC

Nice. I can't get higher than a 20,895 GPU score; either my GPU needs more vcore for test 2 or the AMD drivers are just saying NO SIR, not today. Once my clocks jump over 2640 it's done; TDC and watts do nothing more... I love how AMD has these drivers locked: I can't overclock the RAM over 2150, but the core speed won't boost until I set 2150 again... So does the EVC2 work? Or did I answer my own question, and it's the lovely drivers?


----------



## LtMatt

SLAY3R8888 said:


> LtMatt --- thanks... This is a funny coincidence because I was doing some research and I am actually reading a post of yours in another forum as we speak. (I assume you are the same person, lol) It was from a month ago, I saw a profile where you had lowered the SOC and raised the TDC. I have only been playing with Power Limit GPU (W), and maximum voltage. Has your position on these settings changed any? Do I need to be lowering SOC and raising TDC? If you don't mind explaining why I should do this, I would appreciate it. My initial reaction is that lowering SOC would just open the window for more instability. Is there a problem with it's stock value?
> 
> I saw your post in this forum that you are now using 1018mV for SOC.
> 
> To be honest I don't even know exactly what the SOC value does for a GPU, but I assume it has to do with memory stability.
> 
> As far as TDC goes, I am not sure if I need to be raising it. I am at 380w total power limit and 1163mV atm, and the card will not make it through a TimeSpy run without throwing an error if I bump the frequency up from 2500 - 2600, to 2550 - 2650. It does the benchmark just fine at the lower of those two settings. I was about to rewrite SPPT with a 400w total power limit, and try raising to the full voltage of 1175mV, but I wonder if I need to be considering changing the SOC and TDC values from stock to something else. I will use these higher settings for gaming if it results in higher scores / perfomance and the thermals are reasonable. Your input / suggestions on this would be much appreciated. Thank you.
> 
> Update - I raised my power limit to 348w + 15% = 400w. I raised voltage to 1175mV. Tried running TimeSpy at 2550 - 2650 MHz and it gave me "an error occured" half way through the test. Thermals were fine, the highest I ever saw the junction temperature get was 81c for a brief moment. Other than that instance, junction temp was in the mid to low 70's for most of the test. I saw the card reach 398w for a brief moment, but for most of the test it was below the 400w limit and not hitting it. Card held right at 2600 MHz the whole time until the error happened.
> 
> It seems like my chip just doesn't want to be overclocked past the 2500 - 2600 set range, unless you guys have any other ideas for me to try. I haven't messed with TDC yet because I don't understand how it applies to GPU's. For example, with Ryzen CPU's it is just a factory rating of how many amps it can be supplied, but a manual overclock just surpasses the limit so it doesn't matter. I also haven't messed with SOC voltage as I don't see how lowering it would help stability any. I am open to adjusting either of these if anyone can explain why.
> 
> I will go back to my old stable OC of 380w total power limit, 1163mV, and 2500 - 2600 MHz for now.
> 
> I'm not really disappointed by any means, the card is performing great. Just was wondering if it could be pushed any higher, as the temps are super low and reasonable. I'm used to pushing hardware until it's near being thermally limited. If not then that's fine. If I hear any ideas from you guys then I will give them a shot. Should I leave it alone or try something else?
> 
> Another update - For this OC session I ended up settling with 2500 - 2600 MHz, 1175mV, 380w total power limit, and 2150MHz Vram with fast timings. With the higher voltage, the card does hit the 380w power limit a few times during the run, but these settings seemed to provide the best stability and most consistent TimeSpy results, without having to consume 400w to be stable. I usually get somewhere around 20,500 graphics score, give or take a hundred. Temps are super cool, around 70 - 80c junction, with an aggressive fan curve that hits 100% at 55c. The Merc319 really has quiet fans even when they are running at full blast. I can get the card to run at a higher frequency, but the extra power required and inconsistent stability wasn't worth the performance increase, which was within margin of error anyway. I think the most impressive thing about this XFX card is how overboard the air cooling solution is. It could handle so much more heat than it actually needs to. I guess it should be that way, it is a HUGE card!
> 
> It's easy to get trapped into seeing other people's clock speeds online and thinking your card should be able to hit them. Every chip is different, and it seems I got an average one at best. Either way it's performing great, and is running at 200 MHz above it's rated boost clock of 2340. Will be interesting to see how performance changes as AMD ages these cards like a fine wine - they always do! If anyone has any other suggestions of tweaks or MPT settings to try, I'm all ears. Until then, I'm leaving it like this.
> 
> Attached is a screenshot of my last TimeSpy result for today, and a couple pictures of my PC.
> View attachment 2478011
> View attachment 2478012
> View attachment 2478013


Amazing looking setup you have. I'm on my phone atm; I will respond to this tomorrow morning when I am at the PC.


----------



## OrionBG

SLAY3R8888 said:


> How did you get the drop down arrow below your posts with your PC Specs? The "Rigs" section is limited to 100 characters so I know that can't be it, and I tried putting mine in the "About Me" section but that doesn't seem to add the drop down below my name.


Regarding your temp question: yes, I think keeping the Junction temp below 100C is a good idea.
As for my sig, it was made before the forum moved to the new software and was migrated over. I have not touched it since the migration.


----------



## SLAY3R8888

HyperC said:


> Nice i can't get higher then 20,895 gpu score, either my gpu needs more vcore for test 2 or the amd drivers are just saying NO SIR not today once my clocks jump over 2640 its done TDC and watts do nothing more.. I love now amd have these drivers locked i cant overclock ram over 2150 but the core speed won't boost until i set 2150 again... So does the EVC2 work? or did i answer my own question its the lovely drivers


My card is scoring and clocking almost exactly the same as yours. I got around a 20,800 score before, but it was dumb luck that I made it through the benchmark, and I couldn't replicate the results consistently at all. Could be the drivers; it sure feels that way, since I haven't been able to hard-crash my PC at all while OC'ing this card.. it's always a soft "error" crash. My "stable" score is around 20,400 - 20,600.

Maybe this is AMD nerfing the software after the entire PC community roasted them for "unstable drivers" for a year straight...

I researched what an EVC2 does after reading about it in this forum, and I don't understand the benefits of using one. It's a monitoring device? The card already has tons of sensors on it, so yeah, idk what it's for. I think it can also repair bricked cards.


----------



## HyperC

Yeah, and it should be able to change the voltage; I'm just not so sure it will help AMD users as much as Nvidia users.


----------



## SLAY3R8888

I just installed the standalone version of 3DMark (instead of the Steam one), and I think I am seeing more stability in TimeSpy; I haven't gotten any errors so far after pushing clocks up. Will update with results.


----------



## LtMatt

SLAY3R8888 said:


> LtMatt --- thanks... This is a funny coincidence because I was doing some research and I am actually reading a post of yours in another forum as we speak. (I assume you are the same person, lol) It was from a month ago, I saw a profile where you had lowered the SOC and raised the TDC. I have only been playing with Power Limit GPU (W), and maximum voltage. Has your position on these settings changed any? Do I need to be lowering SOC and raising TDC? If you don't mind explaining why I should do this, I would appreciate it. My initial reaction is that lowering SOC would just open the window for more instability. Is there a problem with it's stock value?
> 
> I saw your post in this forum that you are now using 1018mV for SOC.
> 
> To be honest I don't even know exactly what the SOC value does for a GPU, but I assume it has to do with memory stability.
> 
> As far as TDC goes, I am not sure if I need to be raising it. I am at 380w total power limit and 1163mV atm, and the card will not make it through a TimeSpy run without throwing an error if I bump the frequency up from 2500 - 2600, to 2550 - 2650. It does the benchmark just fine at the lower of those two settings. I was about to rewrite SPPT with a 400w total power limit, and try raising to the full voltage of 1175mV, but I wonder if I need to be considering changing the SOC and TDC values from stock to something else. I will use these higher settings for gaming if it results in higher scores / perfomance and the thermals are reasonable. Your input / suggestions on this would be much appreciated. Thank you.
> 
> Update - I raised my power limit to 348w + 15% = 400w. I raised voltage to 1175mV. Tried running TimeSpy at 2550 - 2650 MHz and it gave me "an error occured" half way through the test. Thermals were fine, the highest I ever saw the junction temperature get was 81c for a brief moment. Other than that instance, junction temp was in the mid to low 70's for most of the test. I saw the card reach 398w for a brief moment, but for most of the test it was below the 400w limit and not hitting it. Card held right at 2600 MHz the whole time until the error happened.
> 
> It seems like my chip just doesn't want to be overclocked past the 2500 - 2600 set range, unless you guys have any other ideas for me to try. I haven't messed with TDC yet because I don't understand how it applies to GPU's. For example, with Ryzen CPU's it is just a factory rating of how many amps it can be supplied, but a manual overclock just surpasses the limit so it doesn't matter. I also haven't messed with SOC voltage as I don't see how lowering it would help stability any. I am open to adjusting either of these if anyone can explain why.
> 
> I will go back to my old stable OC of 380w total power limit, 1163mV, and 2500 - 2600 MHz for now.
> 
> I'm not really disappointed by any means, the card is performing great. Just was wondering if it could be pushed any higher, as the temps are super low and reasonable. I'm used to pushing hardware until it's near being thermally limited. If not then that's fine. If I hear any ideas from you guys then I will give them a shot. Should I leave it alone or try something else?
> 
> Another update - For this OC session I ended up settling with 2500 - 2600 MHz, 1175mV, 380w total power limit, and 2150MHz Vram with fast timings. With the higher voltage, the card does hit the 380w power limit a few times during the run, but these settings seemed to provide the best stability and most consistent TimeSpy results, without having to consume 400w to be stable. I usually get somewhere around 20,500 graphics score, give or take a hundred. Temps are super cool, around 70 - 80c junction, with an aggressive fan curve that hits 100% at 55c. The Merc319 really has quiet fans even when they are running at full blast. I can get the card to run at a higher frequency, but the extra power required and inconsistent stability wasn't worth the performance increase, which was within margin of error anyway. I think the most impressive thing about this XFX card is how overboard the air cooling solution is. It could handle so much more heat than it actually needs to. I guess it should be that way, it is a HUGE card!
> 
> It's easy to get trapped into seeing other people's clock speeds online and thinking your card should be able to hit them. Every chip is different, and it seems I got an average one at best. Either way it's performing great, and is running at 200 MHz above it's rated boost clock of 2340. Will be interesting to see how performance changes as AMD ages these cards like a fine wine - they always do! If anyone has any other suggestions of tweaks or MPT settings to try, I'm all ears. Until then, I'm leaving it like this.
> 
> Attached is a screenshot of my last TimeSpy result for today, and a couple pictures of my PC.
> View attachment 2478011
> View attachment 2478012
> View attachment 2478013


Yes that is me. 

I don't think TDC has much effect, tbh, as 320A is already pretty high. Nonetheless, for short benching runs I sometimes bump it up to 350, but I'm not sure it is necessary.

I used to adjust Max SOC voltage as I was initially using the MBA, which benefited in terms of power and thermals from the Watts saved by dropping the voltage from 1.150v to 1.018v. It didn't affect stability as long as you didn't go below 1.000v, I found. It does not seem to make much difference on the Merc, but I still lower it anyway to shave off a few Watts.

What is your stock core clock as shown in Radeon Software? Mine is 2489Mhz. My sample is not great, but I can go a bit higher than you. I was able to complete a Timespy run at 2700Mhz, at 1.175v, with the power limit set to 400W. It was barely stable, and when I ran it again a few days later I had to drop to 2690Mhz. I think lowering temperatures and ambient can help, if you can manage it.

I have aircon in my man cave, and I had it cooling the room to 17c at the time. I was freezing my bollocks off in there, but needs must for a better score, lol.

The thing I wanted to ask you about was the temperatures and the noise of your Merc. I feel like mine sounds like a jet engine at 100%, and I was really surprised when you said how quiet yours is at 100%. I guess noise is subjective, or perhaps mine has dodgy fan bearings or something. I like a quiet PC and have all my fans running at low RPM, so for me the Merc starts getting noticeable once you go over 1500RPM, which is about 35%-ish. Once you go past 50% it's in loud territory. And as I said, at 100% it's like the sound of a reference blower card at full blast, imo.

Regarding temperatures, what is the difference between your Edge and Junction temperatures after a Timespy run at 380W? Also, what about when you just run a game for 30 mins that uses, say, 300W? I see around a 15-20c difference when gaming, and around 20-30c when running a synthetic with high power draw (350W+). I did re-paste my Merc with MX-4, and that got me a slight improvement of a couple of C over what it was originally, and moved the edge and junction temps a couple of C closer together. Not a huge difference though; the stock paste was decent but was starting to dry a little.

Here is a Timespy run: 380W set in MPT, 2500-2600Mhz set in Radeon Software, max voltage 1.112v, fans ramping to 100% at 55c. I saw a peak of 94c in Junction temperature, and I am wondering if that is a little high based on your feedback.

Here's a pic of my Merc and my now sold MBA.
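For anyone wanting to compare edge-to-junction deltas more systematically than eyeballing the OSD, a quick script over logged samples does the job. This is only an illustrative sketch: the sample values below are made up to match the sort of numbers discussed in this thread, and any real log parsing would depend on your monitoring tool's export format.

```python
# Illustrative sketch: summarise the edge-to-junction temperature delta
# from logged samples. The input lists are assumed to be paired readings
# taken at the same timestamps (e.g. exported from a monitoring tool).
import statistics

def junction_delta_stats(edge_c, junction_c):
    """Mean and worst-case junction-minus-edge delta across paired samples."""
    deltas = [j - e for e, j in zip(edge_c, junction_c)]
    return {"mean": statistics.mean(deltas), "max": max(deltas)}

# Made-up figures in line with the gaming-load numbers quoted in the thread.
stats = junction_delta_stats(edge_c=[55, 56, 58, 60],
                             junction_c=[72, 74, 77, 80])
print(stats)  # → {'mean': 18.5, 'max': 20}
```

A steadily growing max relative to the mean would hint at a paste or mounting issue rather than simple load scaling.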


----------



## SLAY3R8888

LtMatt said:


> Yes that is me.
> 
> I don't think TDC has much effect tbh as 320A is already pretty high. Nonetheless for short benching runs i sometimes bump it up to 350, but I'm not sure it is necessary.
> 
> I used to adjust Max SOC voltage as i was initially using the MBA and that benefited in terms of power and thermals from the Watts saved by lower voltage from 1.150v > 1.018v. Didn't effect stability as long as you didn't go below 1.000v i found. It does not seem to make much difference on the Merc, but i still lower it anyway to reduce power a few Watts.
> 
> What is your stock core clock as shown in Radeon Software? Mine is 2489Mhz. My sample is not great, but I can go a bit higher than you. I was able to complete a Timespy run at at 2700Mhz, at 1.175v, with power limit set to 400W. It was barely stable and when i ran it again a few days later i had to drop to 2690Mhz. I think temperatures and ambient can help if you can lower them.
> 
> I have aircon in my man cave and i had that cooling the room to 17c at the time i was freezing my bollocks off in there, but needs must for a better score Lol.
> 
> The thing i wanted to ask you about was the temperatures and the noise of your Merc. I feel like mine sounds like a jet engine at 100%, and i was really surprised when you said how quiet yours is at 100%. I guess noise is subjective, or perhaps mine has dodgy fan bearings or something. I like a quiet PC and have all my fans running at low RPM, so for me the Merc starts getting noticeable once you go over 1500RPM which is about 35%- ish. Once you go past 50% its in the loud territory. And as i said, at 100% it's like the sound of a reference blower card at full blast imo.
> 
> Regarding temperatures, what is the difference between your Edge and Junction temperatures after a Timespy run at 380W? Also, what about when you just run a game for 30 mins that uses sat 300W? I see around 15-20c when gaming difference, and around 20-30C when running a synthetic with high power draw 350W+. I did re-paste my Merc with MX4 and that got me a slight improvement over what it was originally like by a couple of C and moved edge and junction temps closer together by a couple of C. Not a huge difference though, stock paste was decent but was starting to dry a little.
> 
> Here is a Timespy run, 380W set in MPT, 2500-2600Mhz set in Radeon Software, max voltage 1.112v, fans at 100% fan speed at 55c and i saw a peak of 94c in Junction temperature. I am wondering if that is a little high based on your feedback.
> View attachment 2478127
> 
> 
> Here's a pic of my Merc and my now sold MBA.
> View attachment 2478126


Great looking build you have there! Looks perfect! I wish I was a better photographer, I feel my system looks so much better in person than in any photo I take. The Merc is beautiful, isn't it?

1. I might try messing with TDC at some point, but it doesn't seem to be imposing any barriers for me, even if I get the card to draw 400w.

2. I might try this sometime, thanks for the info on SOC voltage. Appreciate the explanation.

3. My stock core clock in Radeon Software is 2509 MHz. I don't think my temps are an issue, as I am running the aggressive fan curve, and I have 5 intake fans and 2 exhaust. 3 of the intake fans are really good Vardar EVO RGBs, and all case fans run at 100% (2,000 rpm) all the time. In TimeSpy, even when running 1175mV and a 400w power limit, my junction temp is in the low 70's for most of the test and reaches 81 - 82c at one point for a second or two. So I would consider those temps very acceptable and a non-issue when it comes to overclocking stability / clock speeds, but I could be wrong.

4. My setup is actually in my garage, I have an insulated garage door that helps with keeping temps closer to room temperature, and I regulate the temperatures with space heaters around me during the winter. During the summer I have tower fans as well as the ability to open a door to the house to let air conditioned air cool my area. This is the only spot in my house where I won't disturb the wife and kids, and I can game and make noise while they are sleeping, etc. So it's the most ideal spot for my man cave in this house.

5. Honestly, I bet your fan bearings are just fine. It does sound like a jet engine at 100%, but I think I am on the opposite end of the noise tolerance spectrum from you. First of all, I have an air purifier and space heaters running, all within a few feet of where I sit, and they make a lot of white noise. I also run my Noctua NH-U12A fans and my case fans all at a full speed of 2,000 rpm. So I think these things really drown out the sound of the GPU fans. I can still hear the GPU above everything else, but it's not annoying or loud to me. Also, the only noise comparison I have is to my first ever GPU, my Gigabyte OC 5700xt, which had smaller fans and was louder than this card at 100% speed, IMO. However, the temps got higher on that card; I could easily hit 90 - 100c in TimeSpy when running my max OC on it. That's why I'm so impressed with the cooling on this Merc; I think it really gets along well with my overabundance of case fans pushing fresh air in and out constantly. I knew I needed so many case fans for overclocking because I wasn't planning to use any water cooling. Bottom line, I am one of those weirdos that likes hearing the roar of the GPU fans ramp up when I launch a game, lol, but I still can't hear it with my headset on (or at least it doesn't bother me?)

6. My temp differential is around 20c towards the end of Test 2 in TimeSpy; the hottest temps I see are 62c edge and 82c junction. That's when running 1175mV and a 391w total power limit. For most of the test my temps are more like 55c edge and 72c junction on average. For gaming, I honestly don't think I've played a game that uses even 300w. For example, in BF5 with the framerate uncapped at 1440p Ultra, I get around 180 - 220 fps and my temps are like 45c edge and 55c junction. In other games there might be a 15c differential instead of 10c, but yeah, the temps are super low and not really an issue, so I don't pay much attention to them when gaming. I use a CapFrameX + RivaTuner overlay that shows my temps in all games except COD -- I can't use the overlay with COD because it doesn't work correctly and flickers in my YT recordings. But even when checking my temps randomly in COD and other games, for example if I have Radeon Software open on my second monitor, my temps are always lower than what I see in a TimeSpy run. Once I see they are acceptable, I stop paying attention to them.

7. If I were in your situation, I wouldn't be concerned with your junction reaching 92c at all. I ran my 5700xt at 1206mV and 2150MHz with junctions hitting high 90's and even 100c at times, and overclocking stability was just fine, and it didn't seem to degrade for a whole year of heavy gaming on the OC, until I eventually upgraded to the Merc. As long as you're not constantly over 100c and flirting with 110c junction, I wouldn't worry about it. That's just me though.

So I noticed something interesting. Anytime I have a "0" score run where it throws an error and doesn't finish, if I look at the TimeSpy graphs for those runs, there are incorrect GPU / Vram clock speeds reported: the clocks show as super low for about 20 seconds, then the graph's clock speeds go back up to normal, and 30 seconds later is when the error occurs and the test is stopped. I know these low clock speeds are incorrect because I watched the clock speeds in Adrenalin during the test, and they were up where they should be the entire time. See the two attached examples of this, one during Test 1 and one during Test 2. On ALL of my tests that completed with no error, the graph shows the clock speed right where it should be for the whole test, with no weird behavior like this. This makes me think there is some kind of communication error happening between TimeSpy and Adrenalin, and it's causing the run to be stopped. It's very hit or miss whether I can get a run to pass at 2550 - 2650; I would say about 50% of the time it will pass without error. It's really random and doesn't act like normal instability would. This leads me to believe that I have more left in my card after some updates to Adrenalin or TimeSpy. Or it could just be instability causing the TimeSpy clock speed reporting to go bonkers every time.

I didn't have any success undervolting this card below 1150mV, but that was before I started playing with MPT and raising power limit. The "Auto Undervolt" feature recommends 1150mV for my card. What does yours say? I am going to try undervolting again, but with a higher power limit this time, and see if I can get any stable results doing this. I am suspecting though that my card just likes the extra voltage. That's how my 5700xt was as well. I will post an update in a while.
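The "0-score" clock-reporting quirk described above is easy to check for programmatically if you export a clock trace alongside the run. Here is a minimal sketch; the dip threshold, sample period, and the idea of feeding it a plain list of MHz samples are all my own assumptions, not anything 3DMark actually exposes.

```python
# Hedged sketch: flag sustained dips in a logged GPU clock trace, i.e. runs
# of consecutive samples far below the set minimum clock that last longer
# than a normal scene transition would.

def find_clock_dips(samples_mhz, set_min_mhz, sample_period_s=1.0,
                    tolerance=0.5, min_duration_s=10.0):
    """Return (start_index, length) for each run where the reported clock
    stays below tolerance * set_min_mhz for at least min_duration_s."""
    threshold = tolerance * set_min_mhz
    min_len = int(min_duration_s / sample_period_s)
    dips, run_start = [], None
    for i, mhz in enumerate(samples_mhz):
        if mhz < threshold:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_len:
                dips.append((run_start, i - run_start))
            run_start = None
    if run_start is not None and len(samples_mhz) - run_start >= min_len:
        dips.append((run_start, len(samples_mhz) - run_start))
    return dips

# Synthetic trace: steady ~2600 MHz with a 20-sample dip, like the ~20 s
# of bogus low clocks described in the post above.
trace = [2600] * 60 + [400] * 20 + [2600] * 60
print(find_clock_dips(trace, set_min_mhz=2500))  # → [(60, 20)]
```

If the dips only ever appear in runs that errored out, that supports the reporting-glitch theory over real downclocking.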


----------



## HyperC

How are you guys locking the vcore? Mine always changes; I can move the slider and apply it 20 times, and vcore still boosts to max.


----------



## SLAY3R8888

HyperC said:


> How are you guys locking the vcore mine always changes i can move the slider and apply it 20 times and vcore still boost max


Limiting with MPT to the voltage you want. That's the only way.


----------



## SLAY3R8888

I made a post before HyperC's, replying to LtMatt, but it became invisible after I edited it to correct my grammar. Hopefully it shows up again.

Edit: It's visible now.


----------



## SLAY3R8888

Update: I have attempted both undervolting and lowering the SOC voltage; both make the card unstable in TimeSpy if I lower them much at all. The best stable result with my card is still around a 20,500 graphics score in TimeSpy. This is with a 2500 - 2600 MHz set frequency, 1175mV, a 385w total power limit, 2150 MHz Vram w/ fast timings, and a fan curve hitting 100% at 55c. I can get basically the same score if I lower the voltage slightly to 1162mV with MPT, but it is more stable if I set the voltage to 1175mV. 2550 - 2650 MHz is hit or miss as to whether it will actually make it through a TimeSpy run. I think I am done messing with it for now. I mean it this time!! Lol. Time to go play some games.
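Side note on the fan-curve settings that keep coming up in this thread (ramping to 100% by 55c junction): curves like the ones in Radeon Software are just piecewise-linear interpolation between breakpoints. A rough sketch of that behaviour; the breakpoints below are purely illustrative, not anyone's actual curve.

```python
# Hedged sketch of a piecewise-linear fan curve. Breakpoints are
# (temperature_c, fan_percent) pairs; values between breakpoints are
# linearly interpolated, and values outside the range are clamped.

def fan_percent(temp_c, points=((30, 30), (45, 60), (55, 100))):
    """Interpolate fan % for a given temperature along the breakpoints."""
    if temp_c <= points[0][0]:
        return points[0][1]
    for (t0, p0), (t1, p1) in zip(points, points[1:]):
        if temp_c <= t1:
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)
    return points[-1][1]  # clamp above the last breakpoint

print(fan_percent(55))  # → 100.0 (full speed at 55c, as described)
print(fan_percent(50))  # → 80.0 (halfway between the 45c and 55c points)
```

Pulling the last breakpoint's temperature down is what makes a curve "aggressive": the ramp gets steeper, so the fans hit 100% sooner.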


----------



## ZealotKi11er

SLAY3R8888 said:


> Update: I have attempted both undervolting and lowering SOC voltage, both of these make the card unstable in TimeSpy if I lower these values much at all. Still the best stable result with my card is around 20,500 graphics score in TimeSpy. This is with 2500 - 2600 MHz set frequency, 1175mV, 385w total power limit, 2150 MHz Vram w/ fast timings, and fan curve hitting 100% at 55c. I can get basically the same score if I lower the voltage slightly to 1162mV with MPT, but it is more stable if I set the voltage to 1175mV. 2550 - 2650 MHz is a hit or miss whether it will actually make it through a TimeSpy run. I think I am done messing with it for now. I mean it this time!! Lol. Time to go play some games


My card does the same thing with TimeSpy. 2500/2600, 20,500 score.


----------



## SLAY3R8888

ZealotKi11er said:


> My card does the same thing with TimeSpy. 2500/2600, 20,500 score.


Looks like my card is also stable in the Time Spy Stress Test at 2500/2600 and 1175mV. I'm glad the stress test doesn't use TimeSpy Test 2, or else this would probably be a lot harder to pass... lol


----------



## geriatricpollywog

Techpowerup took down this review until Feb 18 because they accidentally posted it early, but the first page is on archive.org and includes pricing and specs.

MSI Radeon RX 6900 XT Gaming X Trio Review (web.archive.org)

The Gaming X Trio from MSI is the first RX 6900 XT custom design that we review. It comes with a large overclock out of the box and the cooler is much better than what the AMD RX 6900 XT reference design offers. We also saw substantially improved overclocking potential from MSI's new Radeon...


----------



## Ipak

Hey, joining the big navi club 

Was very fortunate to find an Asrock reference model at around €1.1k (most available 3080s cost more than that), so not bad, I think. 

Running "pleb" aluminium Fluid Gaming kit from EKWB, but at least it's affordable.

350W limit, 2700mhz boost target, max temps 65c edge / 95c junction (my previous Vega 64, also with an aluminium block, had edge temps around 60-65c too).

Time Spy Extreme
Time Spy

240+360 aluminium radiators + 3900X with enhanced PBO.

P.S. Old faithful Corsair AXi 760W power supply handling it like a champ.


----------



## HyperC

Still tweaking, but getting better.


----------



## LtMatt

SLAY3R8888 said:


> Great looking build you have there! Looks perfect! I wish I was a better photographer, I feel my system looks so much better in person than in any photo I take. The Merc is beautiful, isn't it?
> 
> 1. I might try messing with TDC at some point, but it doesn't seem to be imposing any barriers for me, even if I get the card to draw 400w.
> 
> 2. I might try this sometime, thanks for the info on SOC voltage. Appreciate the explanation.
> 
> 3. My stock core clock in Radeon Software is 2509 MHz. I am not thinking my temps are an issue, as I am running the aggressive fan cure, and I have 5 intake fans and 2 exhaust. 3 of the intake fans are really good Vardar EVO RGB, and all case fans run at 100% (2,000 rpm) all the time. In TimeSpy, even when running 1175mV and 400w power limit, my junction temp is in the low 70's for most of the test and reaches 81 - 82c at one point in the test for a second or two. So I would consider those temps to be very acceptable and a non-issue when it comes to overclocking stability / clock speeds, but I could be wrong.
> 
> 4. My setup is actually in my garage, I have an insulated garage door that helps with keeping temps closer to room temperature, and I regulate the temperatures with space heaters around me during the winter. During the summer I have tower fans as well as the ability to open a door to the house to let air conditioned air cool my area. This is the only spot in my house where I won't disturb the wife and kids, and I can game and make noise while they are sleeping, etc. So it's the most ideal spot for my man cave in this house.
> 
> 5. Honestly I bet your fan bearings are just fine. It does sound like a jet engine at 100%, but I think I am on the opposite end of the noise tolerance spectum from you. First of all, I have an air purifier and space heaters running, all within a few feet of where I sit. They make a lot of white noise. I also run my Noctua NH-U12A fans and my case fans all at full speed of 2,000 rpm. So therefore I think these things really drown out the sound of the GPU fans. I can still hear the GPU above everything else, but it's not annoying or loud to me. Also, the only noise level comparison I have is to my first ever GPU, my Gigabyte OC 5700xt, which had smaller fans and was louder than this card at 100% speed IMO. However the temps got higher on that card, I could easily hit 90 - 100c in TimeSpy when running my max OC on it. That's why I'm so impressed with the cooling on this Merc, I think it really gets along well with my over abundance of case fans pushing fresh air in and out constantly. I knew I needed so many case fans for overclocking because I wasn't planning to use any water cooling. Bottom line, I am one of those wierdos that likes hearing the roar of the GPU fans ramp up when I launch a game, lol, but I still can't hear it with my headset on (or at least it doesn't bother me?)
> 
> 6. My temp differential is around 20c towards the end of test 2 in TimeSpy, hottest temps I see are 62c edge and 82c junction. That's when running 1175mV and 391w total power limit. For most of the test my temps are more like 55c edge and 72c junction on average. For gaming, I honestly don't think I've played a game that uses even 300w. For example, in BF5 with framerate uncapped, 1440p Ultra, I get around 180 - 220 fps and my temps are like 45c edge and 55c junction. In other games there might be a 15c differential instead of 10c, but yeah the temps are super low and not an issue really, so I don't pay much attention to them generally when gaming. I use a CapFrameX + RivaTuner overlay that shows my temps in all games except COD -- I can't use the overlay with COD because it doesn't work correctly and it flickers in my YT recordings. But even when checking my temps randomly in COD and other games, for example if I have Radeon Software open on my second monitor, my temps are always lower than what I see in a TimeSpy run. When I see they are accepable, I stop paying attention to them.
> 
> 7. If I were in your situation, I wouldn't be concerned with your junction reaching 92c at all. I ran my 5700xt at 1206mV and 2150MHz with junctions hitting high 90's and even 100c at times, and overclocking stability was just fine, and it didn't seem to degrade for a whole year of heavy gaming on the OC, until I eventually upgraded to the Merc. As long as you're not constantly over 100c and flirting with 110c junction, I wouldn't worry about it. That's just me though.
> 
> So I noticed something interesting. Anytime I have a "0" score run where it throws an error and doesn't finish the run, If I look at any of my TimeSpy graphs for those runs, there is incorrect GPU / Vram clock speed reported in TimeSpy, where the clocks are reported as super low for about 20 seconds, then the graph's clock speeds go back up to normal, and then 30 seconds later is when the error occurs and the test is stopped. I know these low clock speeds are incorrect because I watched the clock speeds in Adrenaline during the test, and they were up where they should be the entire time. See the two attached examples of this. One of them being during Test 1 and one of them during Test 2. On ALL of my tests that completed with no error, the graph shows the clock speed to be right where it should be for the whole test. No weird behavior like this. This makes me think there is some kind of communication error between TimeSpy and Adrenaline happening, and it's causing the run to be stopped. It's very hit or miss if I can get a run to pass at 2550 - 2650, I would say about 50% of the time it will pass without error. It's really random and doesn't act like normal instability would. This leads me to believe that I have more left in my card after some updates to Adrenaline or TimeSpy happen. Or it could just be instability causing the TimeSpy clock speed reporting to go bonkers every time.
> 
> I didn't have any success undervolting this card below 1150mV, but that was before I started playing with MPT and raising power limit. The "Auto Undervolt" feature recommends 1150mV for my card. What does yours say? I am going to try undervolting again, but with a higher power limit this time, and see if I can get any stable results doing this. I am suspecting though that my card just likes the extra voltage. That's how my 5700xt was as well. I will post an update in a while.
> View attachment 2478139
> View attachment 2478140


Cheers for the feedback.

That man cave sounds ideal. I have to shut my door if I need to be quiet, and perhaps wear headphones, lol.

Yeah, I'm not too worried about temps. This Merc was actually a B-grade item, so it had either been used before or was a display model in the store.

I ended up purchasing a brand new Merc yesterday as they became available with full warranty, and this new one has similar temps etc to yours, +/- the difference in our ambient temps and case airflow. I suspect my ambient might often be higher than yours if you are in a garage. I have a thick wad of loft insulation above me, so my man cave stores heat very easily; that's one of the reasons I got air con installed for the summer heat, even though in the UK we barely get much sun.

Good spot on the Timespy quirks. I have noticed that when I am undervolting in 3DMark, if I run the fan speeds at 100% I can crash, whereas sometimes with a more conservative fan profile I can be stable at the same initial voltage. It must be related to the Junction temp: as that increases, the voltage climbs in increments, assuming you are not already at max voltage or temperature.

For my daily 24/7 clocks I like to set the power limit to 300W in MPT and leave the power limit at 0% in Radeon Software. I run 2470/2570 core and 2112Mhz memory. This means that 99% of the time you'll have a minimum 2500Mhz in-game core clock; I like a nice round number, it helps the OSD OCD, lol.

1.075v starting voltage, which climbs to 1.100v (verified via the HWINFO64 OSD, as the voltage listed in GPU Tuning is never accurate) once the Junction temp gets into the mid 90s, with a max fan speed of 32%, which is very quiet. Most games will be at or just below the 300W limit; some will bounce off the power limit, and the core clock and voltage will drop a little until the power usage drops. I find this a nice balance between better-than-stock performance and keeping power usage and temps down, with a lower-than-stock fan speed %. I think stock is 35% or 40%.

I ran auto undervolt and it suggested 1150mv too. It did on all my cards; I'm not sure there is too much thought behind it, lol. Auto Memory OC suggests 2150Mhz. However, on my other two 6900 XTs, in some games I saw a performance drop-off much above 2112Mhz in Radeon Software. Weirdly though, synthetics still show a performance increase. I recommend testing the games you play to see at what point error correction kicks in. I've yet to do it on my new Merc, but I will test it today now that I have the day off work, to get some solid testing and gameplay in.
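For reference, the relationship between the MPT power-limit value and the Radeon Software slider that keeps coming up in this thread (e.g. 348w + 15% = ~400w) is just a percentage scale on the MPT base. A hedged sketch of that arithmetic; the exact firmware behaviour may differ from this simple model.

```python
# Minimal sketch of the effective board power limit: MPT sets the base
# value, and the Radeon Software slider scales it by +/- some percent.

def effective_power_limit(mpt_watts, slider_percent=0):
    """Effective limit in Watts given the MPT base and slider percentage."""
    return mpt_watts * (1 + slider_percent / 100)

print(effective_power_limit(348, 15))  # → 400.2, matching the ~400w cited earlier
print(effective_power_limit(300, 0))   # → 300.0, the 24/7 setting above
```

This is why leaving the slider at 0% and putting the whole target in MPT gives the same ceiling with one fewer knob to forget.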


----------



## SLAY3R8888

LtMatt said:


> Cheers for the feedback.
> 
> That man cave sounds ideal, I have to shut my door if i need to be quiet and perhaps wear headphones Lol.
> 
> Yeah I'm not too worried about temps. This Merc was actually a B Grade item so had either been used before or was a display model in the store.
> 
> I ended up purchasing a brand new Merc yesterday as they became available with full warranty, and this new one has similar temps etc to yours. +/- the difference in our ambient temp/and case airflow. I suspect my ambient might often be higher than yours if you are in a garage. I have a thick wad of loft insulation above me so my man cave can store heat very easily, one of the reasons i got air con installed for the summer heat, even though in the UK we barely have much sun.
> 
> Good spot on Timespy quirks. I have noticed that when i am undervolting in 3DMark, if i run the fan speeds at 100% i can crash, whereas sometimes if i run a more conservative fan profile i can be stable at the same initial voltage. Must be related to the Junction temp, as that increases voltage climbs in increments, assuming you are not already at max voltage or temperature.
> 
> For my daily 24/7 clocks i like to set the power limit to +300W in MPT and leave power limit at 0% in Radeon Software. I run 2470/2570 Core and 2112Mhz memory. This means that for 99% of the time, you'll have 2500Mhz minimum in game core clock, i like a nice round number helps the OSD OCD Lol.
> 
> 1.075v starting voltage, which climbs to 1.100v (verified via HWINFO64 OSD as the voltage listed in GPU Tuning is never accurate) once Junction temp gets into the mid 90s, max fan speed of 32% which is very quiet. Most games will be at or just below the 300W limit, some will bounce off the power limit and core clock and voltage will drop a little until the power usage drops but i find this a nice balance between better than stock performance, whilst keeping power usage and temps down, with a lower than stock fan speed %. I think stock is 35%, or 40%.
> 
> I ran auto undervolt and it suggested 1150mv too. Did on all my cards, not sure there is too much thought behind it Lol. Auto Memory OC suggests 2150Mhz. However on my other two 6900 XTs, I saw a performance drop off in some games much above 2112Mhz in Radeon Software. Weirdly though, synthetics still show a performance increase. I recommend testing the games you play to see at what point error correction kicks in. I've yet to do it on my new Merc, but will test it today now I have the day off work to get some solid testing and gameplay in.


Thanks for the info. You're using a lot less power than me in general. I have tried undervolting to the mV that you're using, and my card is not even stable at 2450 - 2550 at those voltages. I can get it to make it through a run at 2400 - 2500 with around 1140 - 1150mV, but the score is significantly lower. I think both of your cards are different from my card. My card wants all the voltage. I keep the power limit at 385W for gaming, which I will never hit, but I've stress tested it with Furmark and the thermals can handle pulling that much power constantly if needed. In Timespy I only hit that power limit like once for the whole test, so it's a perfect power limit for 1175mV, and I don't have to keep switching it when I change my activities from benchmarking to gaming. When gaming I rarely use over 300W. I'm sure I might if I played a super demanding title.

As far as error correction goes, let me make sure I'm following what you're saying. Are you saying that error correction "kicking in" means there is slight instability causing a decrease in game performance, because there are errors that need correcting? I'm pretty sure that's what it means.

I have watched my gaming performance a lot after I overclock, I will usually do some synthetic benching and then take a break from OC'ing for a while and just play a bunch of games on the OC.

With cards as fast as ours, even in 1440p Ultra it's hard to get the GPU loaded up all the way in some games that I enjoy. I have a 5800x and it's great, but there are still times where the GPU utilization is inconsistent, either due to the CPU time being a hair slower than the GPU time, or the game engine just not being totally optimized.

Be careful when you draw conclusions about performance increases or decreases during gameplay; any processes or Windows services that randomly start running in the background, or just small changes in the gaming environment, can change FPS unpredictably. For example, in BF5 I get 180 - 200 fps on average in multiplayer at 1440p Ultra, but at random times when there are a ton of players near me and a lot happening at once, my FPS can temporarily drop to around 150 - 160fps for a few seconds. It also depends on which map I am playing. If I'm not in a full lobby, or I'm in an area of the map where not much action is happening, I can see 230+ fps at times. It's just a result of how the game and netcode interact with the CPU. Some multiplayer games don't utilize the CPU in an optimal way, and it can slow down everything at times, even if the CPU is overclocked perfectly. Another great example of this is COD Warzone. I get around 180 - 200fps at 1440p High Textures in that game, but there are times when the fps will drop to like 140. Every time that happens I notice my "CPU Time" has gone up a couple ms above the "GPU Time". It's just the way the game is coded. Everyone complains about fps drops in WZ, even people with 3090s and 10 / 12 core CPUs. With such high variance in fps in certain games like that, I try to rule out performance anomalies like that if possible and not read too much into it.
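The CPU Time vs GPU Time comparison above can be sketched as a tiny script. The frame-time pairs and the 0.5ms margin are made up for illustration, and `classify_frames` is my own name; in practice you would log the two times from your OSD or a tool like CapFrameX.

```python
# If the CPU frame time exceeds the GPU frame time, the GPU is waiting on the
# CPU and FPS drops regardless of the GPU overclock. This labels each logged
# sample so you can see which component is the bottleneck over a session.

def classify_frames(samples, margin_ms=0.5):
    """Label each (cpu_ms, gpu_ms) sample as 'cpu-bound', 'gpu-bound' or 'balanced'."""
    labels = []
    for cpu_ms, gpu_ms in samples:
        if cpu_ms > gpu_ms + margin_ms:
            labels.append("cpu-bound")
        elif gpu_ms > cpu_ms + margin_ms:
            labels.append("gpu-bound")
        else:
            labels.append("balanced")
    return labels

# Made-up frame times (ms) from a busy multiplayer moment:
samples = [(4.5, 5.2), (7.1, 5.0), (5.0, 5.1), (6.3, 4.9)]
print(classify_frames(samples))  # -> ['gpu-bound', 'cpu-bound', 'balanced', 'cpu-bound']
```

A run of "cpu-bound" labels during a big firefight would match the Warzone behaviour described above: the GPU overclock is fine, the game just can't feed it fast enough.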

In my opinion, synthetics are 99% of the time the most consistent environment to validate performance improvement. Unless you just see some obvious decreased performance in games that's not explained by any other variables.

Congrats on the brand new Merc. They say the only thing better than a Merc319 is two of them 😏

Today I uploaded my first video to my YT gaming channel since I got my 6900xt, if anyone wants to check it out. I post random gameplays when I'm feeling up to doing some video editing: 




If you skip around to different parts of the video, you'll see that my fps can vary quite a bit in this game. But there are a lot of players and a lot of dynamic situations. In general it's a really good, smooth experience either way.


----------



## danny9428

Hi, finally got my 5950X build up and did a bit of retesting on my card.
Unfortunately, even with higher clocks and shoving more power in with a 400W board limit, I'm still not getting the score I did on my old X99 before.









I scored 19 407 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 131072 MB, 64-bit Windows 10




www.3dmark.com





Also it seems my CPU is a dud, as it isn't stable with PBO and I'm only able to get it to run with manual OCs.
Though even so, CCD2 gives me quite a lot of headache and won't give way unless I offset it down by 150MHz from CCD1.
I only managed 4.6/4.45 on air without getting greeted by the good old 0x124 or slamming the chip into an inferno.

Now I'm kinda lost on how I even hit 21000 graphics with a 1660v3 :/


----------



## SLAY3R8888

danny9428 said:


> Hi, finally got my 5950X build up and do a bit of retest on my card.
> Unfortunately even with higher clock and shoving more power in at 400W board I'm still not getting the score I did on my old X99 before
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 19 407 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 131072 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Also it seems my CPU is a dud as it isn't stable at PBO and I'm only able to get it to run at manual ocs
> though even so CCD2 kinda gives me quite a lot of headache and it's not giving way unless I offset it down by 150mhz from CCD1
> I only managed a 4.6/4.45 on air without getting greeted by the good-old 0x124 or slamming the chip into inferno
> 
> Now I'm kinda lost on how I even hit 21000 graphics with a 1660v3 :/


Yeah, I'm using an all-core OC of 4.65GHz on my 5800X, which gets better Cinebench R20 scores and all-around better, more consistent performance than PBO. I tried messing with PBO2 + Curve Optimizer, and still the all-core OC has been the easiest way for me to get it to perform the best. I'm sure with the extra cores you have it's not as easy to do. I haven't hit a 21k graphics score either; I'm usually right around the same score as you, 20,500 give or take 100. I can get 20,700 if I run a higher GPU frequency that is not very stable and will only make it through the test 1 out of 5 runs, so I don't really count that. I wouldn't stress about not hitting 21k anymore; if I ever find a way to do it I will let you know. Around a 20k - 20.5k graphics score when OC'ed is pretty standard for this GPU from what I've seen from other users in here. I have noticed that any GPU temp variation can make my scores drop, for example if I don't run as aggressive a fan curve or the ambient temp is higher. You might just make sure you're getting the card as cool as you possibly can. Maybe a 3DMark update or a driver update will get us there at some point...


----------



## newls1

I'm in need of advice, please! Trying to tune my OC on this TUF 6900XT and understand AMD's logic behind this "Min and Max" slider bar usage. My benchmarks are doing just fine with the OC I've been using (2550 min / 2800 max, MPT @ 325W / 375W TDP, fans 100%). I thought I'd try raising the "min" slider to 2600, ran Timespy, and lost 1500ish points!! ***, put the slider back to 2550 and the score was normal again. Why is this? Am I supposed to keep a certain range between min and max?


----------



## newls1

HyperC said:


> still tweaking but getting better
> View attachment 2478422


what is your gpu OC?


----------



## HyperC

I believe min was 2500 and max was either 2675 or 2700, can't honestly remember. Funny thing is I noticed my boost went over my max setting; not sure if that's normal or a glitch. Really not a fan of the forced 100MHz min/max difference, but guess it could always be worse.


----------



## SLAY3R8888

newls1 said:


> Im in need of advice please! Trying to tune my OC on this TUF 6900XT and understand AMD's logic behind this "Min and Max" slider bar usage. My benchmarks are doing just fine with the OC ive been using ( 2550min / 2800max MPT @ 325w 375TDP fans 100%) i thought id try to raise the "min" slider to 2600 and ran timespy and lost 1500ish points!! ***, put the slider back to 2550 and score was normal again. Why is this? am i suppose to keep a certain range between min and max?


So this is my thought process:

1. No, I don't think your lack of MHz range is causing this. What would be more telling is to look closely at the GPU / CPU frequency graphs in Timespy from the two tests and see if you notice anything different between them. Maybe some interesting change in clock speeds for either would explain the score drop.

2. If your CPU is on PBO, maybe your CPU clock speeds varied enough to drop your score randomly. I'm on an all-core OC @ 4.65GHz and it seems to provide a bit more consistent results than PBO, especially if the CPU randomly gets loaded with a few processes in the background. I noticed PBO does a lot of rapid core swapping to keep high voltage off any one core for too long. I'm not sure how reliable this is for consistency.

3. It is possible that the increase in minimum GPU clock speed somehow made your card start bouncing off of your total power limit (W). Keep an eye on that and see if that's what's going on. 

4. It could be that you reached a point of instability when you raised the frequency, dropping your score. That would require you to raise the voltage or the power limit, or drop the frequency, to get stability back.

5. It could just be variation in temperatures, or just an anomaly. In both my old system and my new one, I've had this exact same thing happen before, and then I end up figuring out that it was just a random incident and it wasn't my settings causing the score drop. The other day my scores started going down a lot no matter what setting I changed; they were just inconsistent and not making any sense. A PC reboot, closing extra tasks in Windows, and raising my fan curve back up got my scores back to normal.

Keep trying random things, even if they might not make logical sense, and run multiple tests to verify results. Hope these ideas help. Good luck.
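The "run multiple tests to verify results" advice can be made concrete with a small helper. The two-sigma rule here is my own heuristic for separating run-to-run noise from a real regression, not anything 3DMark defines, and `is_real_regression` is a name I made up.

```python
# Given several baseline benchmark runs, flag a new score as a real regression
# only if it falls well outside the normal spread of the baseline runs.

from statistics import mean, stdev

def is_real_regression(baseline_scores, new_score, sigmas=2.0):
    """True if new_score is more than `sigmas` standard deviations below the baseline mean."""
    m, s = mean(baseline_scores), stdev(baseline_scores)
    return new_score < m - sigmas * s

# Made-up graphics scores from five back-to-back Timespy runs:
baseline = [20450, 20510, 20480, 20530, 20470]
print(is_real_regression(baseline, 20490))  # -> False (within normal variance)
print(is_real_regression(baseline, 19000))  # -> True  (worth investigating)
```

The point is the same as in item 5 above: a single low run proves nothing, so collect a few baseline runs first and only chase score drops that clear the noise floor.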


----------



## LtMatt

SLAY3R8888 said:


> Thanks for the info. You're using a lot less power than me in general. I have tried undervolting to the mV that you're using, and my card is not even stable at 2450 - 2550 at those voltages. I can get it to make it through a run at 2400 - 2500 with around 1140 - 1150mV, but the score is significantly lower. I think both of your cards are different than my card. My card wants all the voltage. I keep the power limit at 385w for gaming, which I never will hit, but I've stress tested it with Furmark and the thermals can handle pulling that much power constantly if needed. In TimeSpy I only hit that power limit like once for the whole test so its a perfect power limit for 1175mV, and I don't have to be switching it when I change my activities from benchmarking to gaming. Gaming I rarely ever use over 300w. I'm sure I might if I played a super demanding title.
> 
> As far as error correction. Let me make sure I'm following what you're saying. Are you saying that error correction "kicking in" is when there is slight instability causing a decrease in game performance, because there are errors needing corrected? I'm pretty sure that's what that means.
> 
> I have watched my gaming performance a lot after I overclock, I will usually do some synthetic benching and then take a break from OC'ing for a while and just play a bunch of games on the OC.
> 
> With cards as fast as ours, even in 1440p Ultra it's hard to get the GPU loaded up all the way in some games that I enjoy. I have a 5800x and it's great, but there are still times where the GPU utilization is inconsistent, either due to the CPU time being a hair slower than the GPU time, or the game engine just not being totally optimized.
> 
> Be careful when you draw conclusions about performance increase / decrease during gameplay, any processes or windows services that randomly start running in the background, or just small changes in the gaming environment can changes in FPS unpredictably. For example, in BF5 I get 180 - 200 fps on average in Multiplayer at 1440p Ultra, but there are random times when if there are a ton of players near me and a lot of things happening at once, my FPS can temporarily drop to around 150 - 160fps for a few seconds. Also depends on which map I am playing. If I'm not in a full lobby, or on an area of the map where not much action is happening, I can see 230+ fps at times. It's just a result of how the game and netcode interact with the CPU. Some multiplayer games don't utilize the CPU in an optimal way, and it can slow down everything at times, even if the CPU is overclocked perfectly. Another great example of this is COD Warzone. I get around 180 - 200fps at 1440p High Textures in that game, but there are times where the fps will drop to like 140. Every time that happens I notice my "CPU Time" has went up a couple ms above the "GPU Time". It's just the way the game is coded. Everyone complains about fps drops in WZ, even people with 3090's and 10 / 12 core CPUs. With such high variance in fps in certain games like that, I try to rule out performance anomalies like that if possible and not read too much into it.
> 
> In my opinion, synthetics are 99% of the time the most consistent environment to validate performance improvement. Unless you just see some obvious decreased performance in games that's not explained by any other variables.
> 
> Congrats on the brand new Merc. They say the only thing better than a Merc319 is two of them 😏
> 
> Today I uploaded my first video to my YT gaming channel since I got my 6900xt, if anyone wants to check it out. I post random gameplays when I'm feeling up to doing some video editing:
> 
> 
> 
> 
> If you skip around to different parts of the video, you'll see that my fps can vary quite a bit in this game. But there are a lot of players and a lot of dynamic situations. In general it's a really good, smooth experience either way.


I watched your video; I could do with someone teaching me to edit videos, mine are much less polished. You seem pretty good at fast-paced shooters too Lol.

I think I see why your temps are so low: your GPU utilization is 50-90% and it seems to spend most of the time closer to 50 than 90. I am running at 4K and my GPU utilization is always locked at 99% regardless of what game I play. Another thing I noticed is the temps get higher the higher the resolution you use. Guess those ROPs kick out more heat at 4K than at 1080p etc. I just moved from 5120x1440 to 4K 120Hz and I can see a difference in heat output from my various 6900 XTs.

Regarding the memory: I was just testing games using static areas where the FPS was fairly consistent and locked. I would then alt-tab in and out of the game, changing the memory clock. I found in a variety of games (I tested Far Cry 5, COD MW, and a few others, in single-player GPU-bound scenarios only) I would see FPS gains up to around 2012MHz. At some point after that, I would lose performance and the FPS would go back to where it was with the memory running at stock. So I would assume the memory error correction kicks in and dials back performance. It's weird how the synthetics always show gains up to 2150MHz. So I just leave the memory at 2112MHz, which results in 2100MHz locked in game and improves performance. 2150MHz = 2138MHz in game, so I'm only missing out on 38MHz at worst and can't see that offering much more perf anyway.


----------



## SLAY3R8888

LtMatt said:


> I watched your video, i could do with someone teaching me to edit videos. Mine are much less polished. You seem pretty good at fast paced shooters too Lol.
> 
> I think i see why your temps are so low, your GPU utilization is 50-90% and it seems to spend most of the time closer to 50 than 90. I am running at 4K and my GPU utilization is always locked at 99% regardless of what game i play. Another thing i noticed is the temps get higher the higher resolution you use. Guess those ROPS kick out more heat at 4K than 1080P+ etc. I just moved from 5120x1440 to 4K120HZ and i can see a difference in heat output by my various 6900 XTs.
> 
> Regarding the memory. I was just testing games using static areas where FPS were fairly consistent and locked. I would then alt tab in out and of the game changing memory clock. I found in a variety of games (i tested Far Cry 5, COD MW, and a few others - in single player mode GPU bound scenarios only) i would see a FPS gains up to around 2012Mhz. At some point after that, i would lose performance and the FPS would go back to where it was when memory was running at stock. So i would assume the memory error correction kicks in and dials back performance. It's weird how the synthetics always show gains up to 2150Mhz. So i just leave the memory at 2112Mhz, which results in 2100Mhz locked in game and improves performance. 2150Mhz = 2138Mhz in game, so I'm only missing out on 38Mhz at worse and can't see that offering much more perf anyway.


I am busy at the moment but I will reply to this tomorrow sometime.


----------



## newls1

Having a very strange issue after installing the latest driver (12.2.2 beta): in FC5, no matter what settings I overclock to in Wattman, the game is now hard capped @ 2592MHz... so weird. I literally put the min slider to 2700 and the max slider to 2850 and the game is still capped at 2592MHz. Only game doing this, everything else is as normal. Any ideas, or just chalk this up to another Dunia game engine issue broken by a new driver?!


----------



## SLAY3R8888

newls1 said:


> Having a very strange issue after installing the latest driver (12.2.2 beta): in FC5, no matter what settings I overclock to in Wattman, the game is now hard capped @ 2592MHz... so weird. I literally put the min slider to 2700 and the max slider to 2850 and the game is still capped at 2592MHz. Only game doing this, everything else is as normal. Any ideas, or just chalk this up to another Dunia game engine issue broken by a new driver?!


Are you using MorePowerTool to raise the power limit (W)? My first thought is that maybe that specific game requires the card to use more power to reach those clocks, and if you haven't modified the power limit, your card might be hitting that limit and getting stuck there, unable to clock any higher. I would start by monitoring your power consumption in Adrenalin. If you have never raised the power limit in MPT and you're using +15% on the power limit slider, then your total power limit should be 322W. If your card is hitting that power limit often, then that could be your cause. If so, you will either need to undervolt the card to get it using less power (cap your voltage using MPT), thus enabling it to clock higher within the stock power limit, or if the card doesn't like undervolting (like my card), then you will need to raise the power limit. For example, I am running 335W set in MPT, plus 15% on the power slider in Adrenalin, for a 385W total power limit. At 1175mV the card will still reach this limit sometimes in Timespy, but not in most gaming situations I have encountered. It just depends on what resolution you run, etc.
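The power-limit arithmetic in the post, written out as a quick check. The 280W base behind the 322W stock figure is implied by the numbers quoted, not something I've verified for this exact card, and `effective_power_limit` is my own name.

```python
# The MPT wattage is the base limit, and the Radeon slider applies a
# percentage on top of it: effective = base * (1 + slider/100).

def effective_power_limit(mpt_watts, slider_percent):
    """Effective board power limit in watts after the Radeon slider is applied."""
    return round(mpt_watts * (1 + slider_percent / 100))

print(effective_power_limit(335, 15))  # -> 385W, the example in the post
print(effective_power_limit(280, 15))  # -> 322W, implied stock base +15%
```

If your in-game power draw sits pinned at the number this gives, you're power-limited, which matches the clock-cap behaviour described above.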

My second thought is that your GPU usage might not be getting maxed out in that game, either due to game-engine optimization or your CPU getting maxed out. I would recommend monitoring your CPU and GPU usage. If your GPU usage is well below 90%, then the card is not clocking all the way up because the game isn't placing enough demand on it. Could be a really CPU-intensive title or something. In some FPS shooter games I play, the card will sometimes drop 50 - 75MHz below my minimum set clock speed if the GPU usage is really low (like 50% - 70%).

Just a few thoughts; it might not be either of these, but those are the things I would be checking. Look at your clock speed when you're sitting idle in Windows; it's probably below 50MHz. The card can still downclock itself if there is not enough demand being placed on it; it's not always going to run within the parameters you set if it feels there is no need to. Hope this helps.


----------



## SLAY3R8888

LtMatt said:


> I watched your video, i could do with someone teaching me to edit videos. Mine are much less polished. You seem pretty good at fast paced shooters too Lol.
> 
> I think i see why your temps are so low, your GPU utilization is 50-90% and it seems to spend most of the time closer to 50 than 90. I am running at 4K and my GPU utilization is always locked at 99% regardless of what game i play. Another thing i noticed is the temps get higher the higher resolution you use. Guess those ROPS kick out more heat at 4K than 1080P+ etc. I just moved from 5120x1440 to 4K120HZ and i can see a difference in heat output by my various 6900 XTs.
> 
> Regarding the memory. I was just testing games using static areas where FPS were fairly consistent and locked. I would then alt tab in out and of the game changing memory clock. I found in a variety of games (i tested Far Cry 5, COD MW, and a few others - in single player mode GPU bound scenarios only) i would see a FPS gains up to around 2012Mhz. At some point after that, i would lose performance and the FPS would go back to where it was when memory was running at stock. So i would assume the memory error correction kicks in and dials back performance. It's weird how the synthetics always show gains up to 2150Mhz. So i just leave the memory at 2112Mhz, which results in 2100Mhz locked in game and improves performance. 2150Mhz = 2138Mhz in game, so I'm only missing out on 38Mhz at worse and can't see that offering much more perf anyway.


Thanks for checking out my video. I use Filmora to do my video editing; I like it. I watched some beginner videos on YT explaining how to use the software, and then over time I would look up and learn new techniques and slowly add more effects to make my videos look more polished. My old videos were nowhere near as polished as they are now, and they could still be better. They are improving as I get better and faster at editing. I recommend searching for one feature that you want to use, and learning the software one feature at a time. Don't get overwhelmed by trying to learn all of the features at once; this can cause a headache and make you not want to do video editing. Also, just learn the features that you need. The software has many features that I will never use. Filmora pricing right now is $69.99 for a lifetime plan, or $39.99 per year. I might make a video sometime walking through me editing a whole video for YT. If I do, I will let you know!

Also -- something really important that I learned is that YouTube's standard AVC codec will destroy your video quality and compress it like crazy. The VP9 codec is far superior and preserves the quality of your original recording much better after it's uploaded. The only way to get the VP9 codec is if you are a really big channel with thousands of subs, OR you upload your video at 1440p resolution or higher with a sufficient bitrate, and YT will automatically give you the VP9 codec. So I do my recordings in 1080p60 at an 18k bitrate (x264 in OBS), and then I render the video out of Filmora at 1440p / 30k bitrate. YouTube sees it as a 1440p video and I get the VP9 codec. My newer YT videos using this technique have much better video quality than my old videos that got the AVC codec.

You are right, temps can vary a lot depending on what game and resolution you are playing. I plan to stay on 1440p for a while, as titles will get more demanding over time and will eventually ask more of this GPU at 1440p. Some of these older games that I like to play were developed before our current hardware existed, especially the ones that are several years old, and I don't expect many more optimizations from them. I will do a video soon showing the BF5 campaign with the same settings, to show how much higher performance and GPU utilization can be when it's not a multiplayer scenario with 60+ players in a lobby. It's the same story if I compare my 1440p performance in Modern Warfare multiplayer vs. Warzone. I always get 250+ fps in MW multiplayer 6v6 or 10v10 lobbies, but in WZ it drops to around 200fps for highs and 140fps for lows, averaging around 170 - 180fps. It's just how the game engines and netcode interact with the CPU when you are in a large-scale battle with lots of players and huge maps. A lot more calculations to be made, etc.

Thanks for the info on the VRAM clock; I will check this out sometime and see if I can replicate what you are experiencing. The difference could be the higher resolution putting more demand on the VRAM. For example, if you run Timespy, it only renders at 1440p; Timespy Extreme, however, renders at 4K. Also, are you still running a lower SOC voltage? I wonder if that is causing your VRAM errors sometimes? Just wondering. Either way, like you said, you're hardly missing out on anything by leaving the VRAM clocked slightly lower, so it's not a big deal regardless.


----------



## LtMatt

SLAY3R8888 said:


> Thanks for checking out my video. I use Filmora to do my video editing, I like it. I watched some beginner videos on YT explaining how to use the software, and then over time I would look up and learn new techniques, and I slowly would add more effects to make my videos look more polished. My old videos were not near as polished as they are now, and they could still be better. They are improving as I get better and faster at editing. I recommend searching for one feature that you are wanting to do, and learn the software one feature at a time. Don't get overwhelmed by trying to learn all of the features at once, this can cause a headache and make you not want to do video editing. Also just learn the features that you need. The software has many features that I will never use. Filmora pricing right now is $69.99 for a lifetime subscription, or $39.99 per year. I might make a video sometime walking through me editing a whole video for YT. If I do, then I will let you know!
> 
> Also -- something really important that I learned, is that YouTube's standard AVC codec will destroy your video quality and compress it like crazy. The VP9 codec is far superior and preserves the quality of your original recording much better after it's uploaded. The only way to get the VP9 codec is if you are a really big channel with thousands of subs, OR you can upload your video at 1440p resolution or higher with a sufficient bitrate, and YT will automatically give you the VP9 codec. So I do my recordings in 1080p60, 18k bitrate (x264 in OBS), and then I render out the video in Filmora at 1440p / 30k bitrate. So YouTube sees it as a 1440p video and I get the VP9 codec. My newer YT videos using this technique have such a better video quality than my old videos that got the AVC codec.
> 
> You are right, temps can vary a lot depending on what game and resolution you are playing. I plan to stay on 1440p for a while, as titles will get more demanding over time and will eventually ask more of this GPU in 1440p. Some of these older games that I like to play, were developed before our current hardware existed, especially games that are several years old, and I don't expect much more optimizations from them. I will do a video soon showing BF5 campaign with the same settings, to show how much higher performance and GPU utilization can be when it's not a multiplayer scenario with 60+ players in a lobby. It's the same story if I compare my 1440p performance in Modern Warfare multiplayer vs. Warzone. I get always 250+ fps in MW multiplayer 6v6 or 10v10 lobbies, but in WZ it drops to around 200fps for highs and 140fps for lows, average around 170 - 180fps. It's just how the game engines and netcode interact with the CPU when you are in a large scale battle with lots of players and huge maps. A lot more calculations to be made, etc.
> 
> Thanks for the info on the VRam clock, I will check this out sometime and see if I can replicate what you are experiencing. The difference could be the higher resolution putting more demand on the VRam. For example if you run TimeSpy, it only renders in 1440p. TimeSpy Extreme however renders in 4k. Also, are you still running a lower SOC Voltage? I wonder if this is causing your VRam to cause errors sometimes? Just wondering. Either way like you said, you're not missing out on hardly anything by leaving the VRam clocked slightly lower so it's not a big deal regardless.


Appreciate the feedback on the video editing; I think I might just do that. I'd like to do side-by-side comparison videos with different GPUs. I might have to look up Filmora and do some research.

I didn't know that about YouTube. I did wonder, however, why the quality was always butchered; guess I know now. I've uploaded a few videos on my channel here, but I'd like to do more when time allows. I've not uploaded any videos using my Merc yet but need to get round to it.

I leave SOC voltage at stock now. It makes no difference on the Merc and it seems to add a bit more stability when at default. It helps for sure on the MBA though!


----------



## SLAY3R8888

LtMatt said:


> Appreciate the feedback on the video editing, i think i might just do that. I'd like to do side by side comparisons videos i think with different GPUs. I might have to look up Filmora and do some research.
> 
> I didn't know that about YouTube. I did wonder however when the quality was always butchered, guess i know now. I've uploaded a few video on my channel here, but I'd like to do more when time allows. I've not uploaded any videos using my Merc yet but need to get round to it.
> 
> I leave SOC voltage at stock now. It makes no difference on the Merc and it seems to add a bit more stability when at default. It helps for sure on the MBA though!


No problem, you should be able to record at 1080p or even 720p, and as long as your bitrate is high enough to make a good quality recording, render your final video out of Filmora at 1440p + 30k bitrate and YT should give it the VP9 codec. I haven't tried this with 720p yet, but this is something I've been wanting to experiment with for a while. Might try 720p with a high bitrate and see if the image quality comes out the same once rendered out to 1440p and using the VP9 codec. If so, I might start recording at 720p and rendering to 1440p in Filmora, just to get those CPU times a tad bit lower in CPU intensive titles. This GPU is so fast that if the CPU has any sort of delay, it can slow down performance a bit. My 5800x is handling 1080p60, 18k bitrate very well, and the video quality is excellent, but I'm going to try 720p60, 13k bitrate and see how the video quality comes out on YT, and see if it nets me any performance gains.
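The upload strategy described above (record at 1080p, render out at 1440p with a ~30k bitrate so YouTube serves VP9 instead of AVC) can be sketched as a quick calculation. This is a minimal sketch: the 1440p threshold is observed behavior discussed in this thread, not anything YouTube guarantees, and the helper names are made up for illustration.

```python
# Sketch of the upload plan above: YouTube has been observed (not
# guaranteed) to re-encode 1440p+ uploads with VP9 instead of AVC,
# so a 1080p recording is rendered out at 1440p with a high bitrate.
VP9_MIN_HEIGHT = 1440  # observed threshold from this thread, not an official spec

def upload_plan(record_height: int, target_height: int = 1440,
                target_kbps: int = 30_000) -> dict:
    """Suggest render settings so the upload lands in VP9 territory."""
    render_height = max(record_height, target_height)
    return {
        "render_height": render_height,
        "render_kbps": target_kbps,
        "upscaled": record_height < target_height,
        "likely_vp9": render_height >= VP9_MIN_HEIGHT,
    }

def recording_size_mb(kbps: int, seconds: int) -> float:
    """Approximate constant-bitrate file size (1 kbps = 1000 bit/s)."""
    return kbps * 1000 * seconds / 8 / 1e6

plan = upload_plan(1080)                 # 1080p recording, rendered to 1440p
per_min = recording_size_mb(18_000, 60)  # the 18k bitrate recording, per minute
```

So a 1080p60 recording at the 18k bitrate mentioned above runs about 135 MB per minute before the 1440p render.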


----------



## newls1

SLAY3R8888 said:


> Are you using MorePowerTool to raise the Power Limit (W)? My first thought is that maybe that specific game requires the card to use more power to reach those clocks, and if you haven't modified the power limit, your card might be reaching that power limit and then getting stuck there unable to clock any higher. My first thought would be to monitor your power consumption in Adrenaline Software. If you have never raised the power limit in MPT, and you're using +15% on the power limit slider, then your total power limit should be 322W. If your card is hitting that power limit often, then that could be your cause. If this is the case, you will either need to undervolt the card to get it using less power (cap your voltage using MPT), thus enabling it to clock higher within the stock power limit, or if the card doesn't like undervolting (like my card), then you will need to raise the power limit. For example I am running 335w set in MPT, +15% in the power slider in Adrenaline, = 385w total power limit. At 1175mV the card will still reach this limit sometimes in TimeSpy, but not in most gaming situations I have encountered. Just depends on what resolution you run etc.
> 
> My second thought is that your GPU usage might not be getting maxed out in that game, either due to game engine optimization or your CPU getting too maxed out. I would recommend to monitor your CPU and GPU usage. If your GPU usage is well below 90%, then the card is not clocking all the way up because it's not getting enough of a demand placed on it by the game. Could be a really CPU intensive title or something. Some fps shooter games I play, the card will drop down by 50 - 75 MHz sometimes below my minimum set clock speed, if the GPU usage is showing really low (like 50% - 70%).
> 
> Just a few thoughts, might not be either of these, but those are the things I would be checking. Look at your clock speed when you're sitting idle in Windows. It's probably below 50 MHz. The card can still down clock itself if there is not enough of a demand being placed on it, it's not always going to run within the parameters you set if it feels there is no need to. Hope this helps.


Thank you for that long and detailed reply, much appreciated. However, prior to installing this driver my clock speeds in game were where they should be for my given OC (2650-2750MHz in game), and yes, I am using MPT. My parameters are functioning as they should in everything else, just not this particular game since installing this driver.
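The troubleshooting logic quoted above boils down to a short decision procedure. A minimal Python sketch, assuming the quoted numbers: the 280W stock board power is inferred from the quoted 322W figure (280 x 1.15), and the 98%/90% thresholds are illustrative, not AMD-documented.

```python
# Decision procedure from the quoted troubleshooting advice: check whether
# the card is power-limit bound or under-utilized before blaming the driver.
STOCK_BOARD_POWER_W = 280  # inferred from the quoted 322W figure (280 * 1.15)

def total_power_limit(mpt_limit_w: float, slider_pct: float) -> float:
    """MPT board power combined with the Adrenalin power-limit slider."""
    return mpt_limit_w * (1 + slider_pct / 100)

def diagnose(power_draw_w: float, limit_w: float, gpu_usage_pct: float) -> str:
    if power_draw_w >= 0.98 * limit_w:        # illustrative threshold
        return "power limited: undervolt, or raise the limit in MPT"
    if gpu_usage_pct < 90:                    # illustrative threshold
        return "under-utilized: CPU bound or a light game load"
    return "neither: likely game/driver optimization"

stock = total_power_limit(STOCK_BOARD_POWER_W, 15)  # 322W, as quoted
raised = total_power_limit(335, 15)                 # ~385W, as quoted
```

Under this sketch, newls1's case (parameters working everywhere except one game) falls through to the last branch, which matches the conclusion reached below.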


----------



## SLAY3R8888

Yeah then I would probably just chalk it up to game / driver optimization.


----------



## LtMatt

SLAY3R8888 said:


> No problem, you should be able to record at 1080p or even 720p, and as long as your bitrate is high enough to make a good quality recording, render your final video out of Filmora at 1440p + 30k bitrate and YT should give it the VP9 codec. I haven't tried this with 720p yet, but this is something I've been wanting to experiment with for a while. Might try 720p with a high bitrate and see if the image quality comes out the same once rendered out to 1440p and using the VP9 codec. If so, I might start recording at 720p and rendering to 1440p in Filmora, just to get those CPU times a tad bit lower in CPU intensive titles. This GPU is so fast that if the CPU has any sort of delay, it can slow down performance a bit. My 5800x is handling 1080p60, 18k bitrate very well, and the video quality is excellent, but I'm going to try 720p60, 13k bitrate and see how the video quality comes out on YT, and see if it nets me any performance gains.


Could you not use ReLive to record at higher than 30 Mbps HEVC and get VP9 on YouTube?


----------



## SLAY3R8888

LtMatt said:


> Could you not use ReLive to record at higher than 30 Mbps HEVC and get VP9 on YouTube?


I'm sure you could try, but I prefer using OBS as you can customize your encoder settings in much more detail. I tried ReLive before and I've had better luck with OBS Studio. Today I tested recording in 720p and it wasn't worth it. It looks fine, but there was no increase in CPU performance; it's already handling 1080p60 recording very well.

I like using the CPU (x264) for encoding recordings, it takes load off of the GPU and the 5000 series CPUs can handle it just fine with minimal impact on fps, if any at all.


----------



## Lemon Wolf

Might sound like a simple or odd question but is it possible that you have to reapply MPT settings each time a new driver is being installed?


----------



## newls1

Lemon Wolf said:


> Might sound like a simple or odd question but is it possible that you have to reapply MPT settings each time a new driver is being installed?


Of course I know this.


----------



## ZealotKi11er

Lemon Wolf said:


> Might sound like a simple or odd question but is it possible that you have to reapply MPT settings each time a new driver is being installed?


MPT does "hack" the drivers. If the new driver is different, then MPT needs to be updated.


----------



## SLAY3R8888

Lemon Wolf said:


> Might sound like a simple or odd question but is it possible that you have to reapply MPT settings each time a new driver is being installed?


Yes you definitely have to reapply. It's a registry modification that gets removed when the old driver gets uninstalled.


----------



## Lemon Wolf

Thx, that's good to know, I was wondering about that.


----------



## Bart

Man, I forgot just how awesome MPT is, and how simple it is, and how it is the quickest way to make you realize your GPU needs water to live, just like we do, LOL! I just used it to set the power limits on my TUF 6900 XT to 350W, and oh boy does that open this baby up a bit!! Now I can set my core max to 2700, mem max to 2100, and let it eat. On air this drives the junction temp into triple digits in about 10 seconds.


----------



## LtMatt

@SLAY3R8888
Did some memory performance testing in Red Dead Redemption 2 and again I saw the behaviour play out as I mentioned before; 2112MHz appears to be the sweet spot, at least in games.

Testing was done using the built in game benchmark.

2160P
RDR2 DX12
Ultra Settings - Tree Tess Off, MSAA/Reflection Off, Water Quality High, Refraction/Reflection 3/4.
TAA enabled and sharpness 100%
5950X PBO
6900 XT 2270-2370MHz Undervolted to 1.000v (actual in-game core clock is minimum of 2300MHz, peaks up to 2325MHz)
21.2.2
Fast Timings On
No MPT, using default board power limit 289W +15% power limit, so no chance of power limit affecting results as power draw was 220-240W ish according to HWINFO64.

2012MHz Memory Clock
(benchmark screenshot attachment)

2112MHz Memory Clock
(benchmark screenshot attachment)

2150MHz Memory Clock
(benchmark screenshot attachment)
This behaviour in memory clock has been the same now on 1x6800 XT MBA, 1x6900 XT MBA, and 2x 6900 XT Mercs.
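A sweep like this is easier to read back if each memory clock's runs are reduced to a mean and compared against a noise margin. A small sketch with made-up FPS numbers; the 1% margin is an illustrative cutoff, not a measured figure.

```python
# Reducing a memory-clock sweep: average the benchmark FPS per clock and
# treat differences inside a noise margin as a tie, in which case the
# lower clock wins (extra frequency that buys nothing isn't worth running).
from statistics import mean

def best_clock(results: dict, margin: float = 0.01):
    """Return (winning clock, per-clock mean FPS); `margin` is fractional."""
    means = {clk: mean(fps) for clk, fps in results.items()}
    top = max(means.values())
    contenders = [clk for clk, m in means.items() if m >= top * (1 - margin)]
    return min(contenders), means

clock, means = best_clock({
    2012: [92.1, 92.4, 92.0],  # illustrative runs, not the results above
    2112: [94.8, 95.1, 94.9],
    2150: [93.0, 93.2, 92.8],
})
```

With these made-up runs, 2112MHz wins outright, since the other two clocks fall outside the 1% margin of the best mean.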


----------



## SLAY3R8888

Bart said:


> Man, I forgot just how awesome MPT is, and how simple it is, and how it is the quickest way to make you realize your GPU needs water to live, just like we do, LOL! I just used it to set the power limits on my TUF 6900 XT to 350W, and oh boy does that open this baby up a bit!! Now I can set my core max to 2700, mem max to 2100, and let it eat. On air this drives the junction temp into triple digits in about 10 seconds.


Are you giving the card enough air? Like did you turn your fan curve up? I'm just curious because I've been able to give my card any power limit it wants, and it has never hit triple digits on air before, even at 1175mV and 400w power limit. This is in comfortable ambient temps of around 67F - 70F. I have my fan curve where it hits 100% at 55c. This has been most beneficial for consistent clock speeds. My case also has 7 fans total all running at full blast. But yeah any benchmark or game I've tried, thermally it does great.


----------



## SLAY3R8888

LtMatt said:


> @SLAY3R8888
> Did some memory performance testing in Red Dead Redemption 2 and again i saw the behaviour play out as i mentioned before, 2112Mhz appears to be the sweet spot, at least in games.
> 
> Testing was done using the built in game benchmark.
> 
> 2160P
> RDR2 DX12
> Ultra Settings - Tree Tess Off, MSAA/Reflection Off Water quality High, Refraction/Reflection 3/4.
> TAA enabled and sharpness 100%
> 5950X PBO
> 6900 XT 2270-2370Mhz Undervolted to 1.000v (actual in game core clock is minimum of 2300Mhz, peaks up to 2325Mhz)
> 21.2.2
> No MPT, using default board power limit 289W+15% power limit so no chance of power limit affecting results as power draw was 220W-240W ish according to HWINFO64.
> 
> 2012Mhz Memory Clock
> View attachment 2478912
> 
> 
> 2112Mhz Memory Clock
> View attachment 2478913
> 
> 
> 2150Mhz Memory Clock
> View attachment 2478914
> 
> 
> This behaviour in memory clock has been the same now on 1x6800 XT MBA, 1x6900 XT MBA, and 2x 6900 XT Mercs.


This is good info, thanks for providing it. Could you find a 4K synthetic, such as TimeSpy Extreme, that replicates this behavior? I feel like it has to be the VRAM usage of 4K that is bringing out this behavior more. I might try TimeSpy Extreme and see if I can get similar results to you.
I can't replicate this behavior in any 1440p games that I own, but surely a 4K synthetic would show it enough to help dial in the VRAM clock?


----------



## LtMatt

SLAY3R8888 said:


> This is good info. Thanks for providing it. Is it possible for you to be able to find a 4k synthetic, such as TimeSpy Extreme, to replicate this behavior? I feel like it has to be the Vram usage of 4k that is bringing out this behavior more. I might try TimeSpy Extreme and see if I can get similar results to you.
> I can't replicate this behavior in any 1440p games that I own, but surely a 4k synthetic would show it enough to help dial in the Vram clock?


Synthetics scale with memory frequency (all of them); it is only games that do not seem to scale in the same fashion. It is odd, but it seems repeatable for me in multiple games now.


----------



## SLAY3R8888

LtMatt said:


> Synthetics scale with memory frequency (all of them), it is only games that do not seem to do so in the same fashion. It is odd, but it seems repeatable for me in multiple games now.


This is very interesting. Thanks.


----------



## LtMatt

SLAY3R8888 said:


> This is very interesting. Thanks.


Also now tested with the core clock running at 2500MHz+ and the voltage up at 1.125v; exact same results, within 0.1% performance difference between 2012MHz, 2112MHz and 2150MHz.


----------



## SLAY3R8888

LtMatt said:


> Also now tested with core clock running at 2500Mhz+ and with voltage up higher at 1.125v, exact same results to within 0.1% performance difference between 2012Mhz and 2112Mhz and 2150Mhz.


I don't own any games with built in benchmarks, other than Borderlands 3 which I have uninstalled at the moment. I might get Hitman 2 or RDR2 for the sake of having another benchmark to run, I have a full list pulled up of all games that have built in benchmarks.

I don't trust my own in-game benchmarking enough, as these results can often appear within margin of error.


----------



## LtMatt

SLAY3R8888 said:


> I don't own any games with built in benchmarks, other than Borderlands 3 which I have uninstalled at the moment. I might get Hitman 2 or RDR2 for the sake of having another benchmark to run, I have a full list pulled up of all games that have built in benchmarks.
> 
> I don't trust my own in-game benchmarking enough, as these results can often appear within margin of error.


Yeah unless it's easily repeatable it can be a bit unreliable at times.

Forgot to mention that was using Fast Timings; I've updated my message.


----------



## SLAY3R8888

LtMatt said:


> Yeah unless it's easily repeatable it can be a bit unreliable at times.
> 
> Forgot to mention that was using Fast Timings I've updated my message.


Good to know. I just got done moving my entire PC setup to the dining room table in my house as a temporary solution for the next few days, because it is below 0F here (it never gets this cold!) and my 3 garage heaters are no longer able to keep the garage temp comfortable. I ran some benchmarks in the house and all my CPU / GPU temps are pretty much the exact same inside the house, which is good to see. I figured as much because I usually keep my gaming area close to the same temperature as inside my home, if I'm actually going to be sitting out there I want to be comfortable.


----------



## LtMatt

SLAY3R8888 said:


> Good to know. I just got done moving my entire PC setup to the dining room table in my house as a temporary solution for the next few days, because it is below 0F here (it never gets this cold!) and my 3 garage heaters are no longer able to keep the garage temp comfortable. I ran some benchmarks in the house and all my CPU / GPU temps are pretty much the exact same inside the house, which is good to see. I figured as much because I usually keep my gaming area close to the same temperature as inside my home, if I'm actually going to be sitting out there I want to be comfortable.
> 
> View attachment 2478927


Blimey that sounds cold, but sounds perfect for benchmarking Lol. 

Please install an adblocker for your browser.


----------



## SLAY3R8888

LtMatt said:


> Blimey that sounds cold, but sounds perfect for benchmarking Lol.
> 
> Please install an adblocker for your browser.


LOL


----------



## SLAY3R8888

LtMatt said:


> @SLAY3R8888
> Did some memory performance testing in Red Dead Redemption 2 and again i saw the behaviour play out as i mentioned before, 2112Mhz appears to be the sweet spot, at least in games.
> 
> Testing was done using the built in game benchmark.
> 
> 2160P
> RDR2 DX12
> Ultra Settings - Tree Tess Off, MSAA/Reflection Off Water quality High, Refraction/Reflection 3/4.
> TAA enabled and sharpness 100%
> 5950X PBO
> 6900 XT 2270-2370Mhz Undervolted to 1.000v (actual in game core clock is minimum of 2300Mhz, peaks up to 2325Mhz)
> 21.2.2
> Fast Timings On
> No MPT, using default board power limit 289W+15% power limit so no chance of power limit affecting results as power draw was 220W-240W ish according to HWINFO64.
> 
> 2012Mhz Memory Clock
> View attachment 2478912
> 
> 
> 2112Mhz Memory Clock
> View attachment 2478913
> 
> 
> 2150Mhz Memory Clock
> View attachment 2478914
> 
> 
> This behaviour in memory clock has been the same now on 1x6800 XT MBA, 1x6900 XT MBA, and 2x 6900 XT Mercs.





LtMatt said:


> Blimey that sounds cold, but sounds perfect for benchmarking Lol.
> 
> Please install an adblocker for your browser.


I saw that Red Dead Online was on sale on Steam for $4.99 for a limited time, so I bought it. It said it includes all the same content as the regular copy of RDR2, so hopefully I will have access to the benchmark. Downloading it right now. I want to try dialing in my Vram clock with it.


----------



## LtMatt

SLAY3R8888 said:


> I saw that Red Dead Online was on sale on Steam for $4.99 for a limited time, so I bought it. It said it includes all the same content as the regular copy of RDR2, so hopefully I will have access to the benchmark. Downloading it right now. I want to try dialing in my Vram clock with it.


Bargain. It's a great game.


----------



## SLAY3R8888

LtMatt said:


> Bargain. It's a great game.


Yeah I thought so too. Will be interesting to see if I can replicate the same behavior on the Vram clock except in 1440p with everything turned up all the way. If not, then I will switch the bench to 4k and try that.


----------



## LtMatt

SLAY3R8888 said:


> Yeah I thought so too. Will be interesting to see if I can replicate the same behavior on the Vram clock except in 1440p with everything turned up all the way. If not, then I will switch the bench to 4k and try that.


Yeah I'm curious to see if you can, or whether I've just been unlucky with 4 different samples.


----------



## SLAY3R8888

LtMatt said:


> Yeah I'm curious to see if you can, or whether I've just been unlucky with 4 different samples.


Yeah would be nice to have Vram totally stable and dialed in for all future scenarios, so I don't have to worry about it again. This sounds like the way to do it.


----------



## LtMatt

SLAY3R8888 said:


> Yeah would be nice to have Vram totally stable and dialed in for all future scenarios, so I don't have to worry about it again. This sounds like the way to do it.


Stability is not the issue, it's just about extracting the most performance. Memory seems stable at any frequency tbh.


----------



## SLAY3R8888

LtMatt said:


> Stability is not the issue, it's just extracting the most performance, Memory seems stable at any frequency tbh.


When performance goes down at a higher frequency, that indicates instability. Even if it doesn't cause major problems, that's the only explanation for getting lower performance from a higher frequency. Instability...


----------



## Bart

SLAY3R8888 said:


> Are you giving the card enough air? Like did you turn your fan curve up? I'm just curious because I've been able to give my card any power limit it wants, and it has never hit triple digits on air before, even at 1175mV and 400w power limit. This is in comfortable ambient temps of around 67F - 70F. I have my fan curve where it hits 100% at 55c. This has been most beneficial for consistent clock speeds. My case also has 7 fans total all running at full blast. But yeah any benchmark or game I've tried, thermally it does great.


I set my fans to 100% full time, and the card gets plenty of air. It's not the GPU temp that hits triple digits, it's the junction temp. But I'm only experimenting, I'm not trying to 'dial it in' yet, on air there's no point. I just wanted to see where the 'upper limits' were in terms of clocks. My "noob OCing" was very simple (current settings below):

1) load up GPUZ, save BIOS to file;
2) load MPT, load up saved BIOS, set power limit to 360W (375W was where I saw the high temps), reboot;
3) in AMD drivers, set max clock to 2750, max mem clock to 2130, set fast memory timings, power limit bumped to 7% (just because I HAD to break 20K in Time Spy, so it's pulling decent power);

It's sitting in a Lian-Li Dynamic O11 XL, 3 rad fans blowing air up into it, in a cold Canadian basement:


----------



## LtMatt

SLAY3R8888 said:


> When performance goes down from a higher frequency, that indicates instability. Even if it doesn't cause major problems, that's the only explanation of getting lower performance from a higher frequency. Instability...


Or it's just error correction kicking in.


----------



## SLAY3R8888

LtMatt said:


> Or it's just error correction kicking in.


Errors caused from instability? I asked you what you meant by "error correction kicking in" before. Why would there be any errors to begin with if it was stable? I'm not following.


----------



## SLAY3R8888

Bart said:


> I set my fans to 100% full time, and the card gets plenty of air. It's not the GPU temp that hits triple digits, it's the junction temp. But I'm only experimenting, I'm not trying to 'dial it in' yet, on air there's no point. I just wanted to see where the 'upper limits' were in terms of clocks. My "noob OCing" was very simple (current settings below):
> 
> 1) load up GPUZ, save BIOS to file;
> 2) load MPT, load up saved BIOS, set power limit to 360W (375W was where I saw the high temps), reboot;
> 3) in AMD drivers, set max clock to 2750, max mem clock to 2130, set fast memory timings, power limit bumped to 7% (just because I HAD to break 20K in Time Spy, so it's pulling decent power);
> 
> It's sitting in a Lian-Li Dynamic O11 XL, 3 rad fans blowing air up into it, in a cold Canadian basement:


The reason I was asking is because I was also referring to junction temp. Mine never goes over 82c in any benchmark including TimeSpy, that's on air, 1175mV and 385w total power limit. Even at 400w power limit the junction temp doesn't get any higher than that, and I've seen it consume 400w.

My card would do literally the same clocks on air as it would on water, it gets unstable before it gets warm.


----------



## Bart

SLAY3R8888 said:


> The reason I was asking is because I was also referring to junction temp. Mine never goes over 82c in any benchmark including TimeSpy, that's on air, 1175mV and 385w total power limit. Even at 400w power limit the junction temp doesn't get any higher than that, and I've seen it consume 400w.
> 
> My card would do literally the same clocks on air as it would on water, it gets unstable before it gets warm.


Hmm, that's interesting, I wonder if this is a silicon lottery thing. I haven't even touched ANY of the voltages with MPT, I've _only_ set the power limit higher. Did you undervolt yours with MPT?


----------



## LtMatt

SLAY3R8888 said:


> Errors caused from instability? I asked you what you meant by "error correction kicking in" before. Why would there be any errors to begin with if it was stable? I'm not following.


Sorry I missed that. Yes, although I count instability as a crash/restart rather than the memory correcting errors in the overclock and dialling back performance as a result. If it happened in both synthetics and games I'd be more inclined to call it instability, I guess.
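For anyone following along: GDDR6 can detect bad transfers and retry them rather than crash, so past some clock the retry overhead eats more bandwidth than the extra frequency adds. A toy model of that shape; the retry curve is entirely invented for illustration, the real one depends on the card.

```python
# Toy model of why overclocked VRAM can get slower instead of crashing:
# detected transfer errors are retried, so past some clock the retry
# overhead outweighs the added frequency. The retry-rate curve below is
# invented purely to show the shape of the effect, not measured.
def effective_bandwidth(clock_mhz: float) -> float:
    retry_rate = max(0.0, (clock_mhz - 2050) / 300) ** 2  # illustrative curve
    return clock_mhz * (1 - min(retry_rate, 1.0))
```

Among the three clocks tested earlier in the thread, 2112MHz comes out ahead of both 2012MHz and 2150MHz under this made-up curve, even though 2150MHz runs faster on paper: the same shape as the game results, with no crash anywhere.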


----------



## SLAY3R8888

Bart said:


> Hmm, that's interesting, I wonder if this is a silicon lottery thing. I haven't even touched ANY of the voltages with MPT, I've _only_ set the power limit higher. Did you undervolt yours with MPT?


1175mV should be the same voltage you are running if you haven't touched it... it's the highest voltage allowed in Adrenaline. I have tried undervolting the card before, but it didn't help overclocking any for me. Just raising power limits and using the max voltage is what my card seems to like best. Best TimeSpy graphics score I have gotten is 20,600 and that was at 2550-2650 MHz. But it only makes it through the test half the time at that frequency. 2500-2600 consistently passes the test and gets me a graphics score of 20,500. And junction temp is in the low 70s for most of the test. Breaks 80c in one part of the test for a few seconds. Hits the 385w power limit once or twice during the test also.


----------



## SLAY3R8888

LtMatt said:


> Sorry i missed that. Yes, although i count instability as a crash/restart rather than the memory correcting errors in the overclock and dialling back performance as a result. If it happened in both synthetics and games I'd be more inclined to call it instability i guess.


Fair enough. I will let you know when I do my testing with RDR2 benchmark!


----------



## LtMatt

Bart said:


> Hmm, that's interesting, I wonder if this is a silicon lottery thing. I haven't even touched ANY of the voltages with MPT, I've _only_ set the power limit higher. Did you undervolt yours with MPT?


I think @SLAY3R8888 is blessed not necessarily in the silicon department, but perhaps the heatsink pressure/mount lottery. Plus he perhaps has ambient temperatures lower than most people.

I was surprised at his temperatures as well, so I wouldn't worry too much; it does not mean anything is wrong with yours. Just that perhaps you are more in line with the rest of us. Case airflow, ambient temps, silicon quality, heatsink mount pressure, heatsink effectiveness etc all play a part. I should add my ambient temperature right now is 25c.

In my experience, ideally you want a 15-25c peak difference in edge vs junction temperatures in a worst case scenario. If you fall within this range with your AIB when running synthetics that pull 350W plus, you're a-okay in my book. My MBAs used to run 30c+ in this scenario, but then their reference cooler is not as good as the AIBs'. My MBAs would run 10-15c in gaming scenarios at 260-300W, so contact was decent but the synthetics were a bit too much.

My 6900 XT Merc is a 10-19c difference in games, often somewhere around 15-17c under a 300-320W gaming load. This increases a bit, to peaks of around 25c difference, under synthetics where power draw is 350-400W.
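The rule of thumb above can be written down as a quick check. A sketch using the 15-25c synthetic-load range given here; the thresholds are this poster's heuristic, not official AMD guidance.

```python
# Quick mount-quality check from the rule of thumb above: under a 350W+
# synthetic load, a 15-25c edge-to-junction gap suggests good heatsink
# contact, while well above that suggests a poor mount. Poster heuristic,
# not an AMD specification.
def mount_check(edge_c: float, junction_c: float, power_w: float) -> str:
    delta = junction_c - edge_c
    if power_w < 350:
        return f"delta {delta:.0f}c (light load; judge under a 350W+ synthetic)"
    if delta <= 25:
        return f"delta {delta:.0f}c: within the 15-25c range, mount looks fine"
    return f"delta {delta:.0f}c: above 25c, suspect mount/contact quality"

verdict = mount_check(70, 92, 380)  # e.g. 70c edge, 92c junction in TimeSpy
```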


----------



## SLAY3R8888

To clear up any confusion about my low temps on air being because of my setup being in my garage, here is a GPU-Z log of a TimeSpy run I just did with my PC inside my house, ambient temp 72F (22c), which I would say is a very normal ambient temp. You can see the highest my junction temp gets is 83c for the entire run. This is at 1175mV and 385w power limit, which was also reached during the run.

Also Bart, to be clear, I wasn't trying to indicate that I thought anything was wrong with your card (sorry if I came across that way), I was just surprised at how many people default to water cooling things like this when they seem to OC to their max potential on air. I do understand the enthusiast side to water cooling things, and also the lower noise level, so for that it definitely makes sense. When you said you thought your card needed water to survive, it surprised me and made me want to ask questions. But I can tell you are already set on water cooling your card regardless, and there is nothing wrong with that. It will be awesome.

It could be that I got lucky with the cooling solution on my card, but I think it's also likely that where you guys have radiators in your case, I instead have several case fans blowing completely ambient temp air directly onto the card at high velocity.


----------



## LtMatt

SLAY3R8888 said:


> To clear up any confusion about my low temps on air being because of my setup being in my garage, here is a GPU-Z log of a TimeSpy run I just did with my PC inside my house, ambient temp 72F (22c). Which I would say is a very normal ambient temp. You can see the highest my junction temp gets is 83c for the entire run. This is at 1175mV and 385w power limit, which was also reached during the run. Also Bart, to be clear, I wasn't trying to indicate that I thought anything was wrong with your card (sorry if I came across that way), I was just surprised at how many people default to water cooling things like this when they seem to OC to their max potential on air. I do understand the enthusiast side to water cooling things, and also the lower noise level, so for that it definitely makes sense. When you said you thought your card needed water to survive, it surprised me and made me want to ask questions. But I can tell you are already set on water cooling your card regardless, which there is nothing wrong with that. It will be awesome. It could be that I got lucky with the cooling solution on my card, but I think it's also likely that where you guys have radiators in your case, instead I have several case fans blowing completely ambient temp air directly onto the card at a high velocity.


I'm gonna try and copy your settings and do a Timespy run with GPU-z logging in the background.

Did you set the power limit at 385W in MPT?

Fair point about the case fans. Mine are running at very low speed so they are silent, and my front 3 intakes are on my 420MM AIO, but that blows in cool air generally as the CPU runs cool.


----------



## Bart

Nah bro, you didn't come across that way at all, you just provided good info. Plus I would never hate on a fellow metalhead. :-D 

My ambient should be awesome, considering my rigs are in a cold Canadian basement. When I boot up, my liquid temps are around 19C, so it's a chilly room. But the fans on my card might be crap, the mount itself could be crap, etc. You just never know these days. Considering I'm sub-20C ambient, and I have 3 fans blowing up into the thing, with 3 fans pulling hot air off the card up top, and the rear exhaust configured as intake, I _should_ be in great shape for thermals. More testing is required I think. I can't get anywhere NEAR 20,500 in Time Spy, I just barely broke 20K.


----------



## SLAY3R8888

LtMatt said:


> I'm gonna try and copy your settings and do a Timespy run with GPU-z logging in the background.
> 
> Did you set the power limit at 385W in MPT?
> 
> Fair point about the case fans. Mine are running at very low speed so they are silent, and my front 3 intakes are on my 420MM AIO, but that blows in cool air generally as the CPU runs cool.


My exact settings are - 385W power limit in MPT, and everything else stock in MPT. 2500 - 2600 for set frequency. 1175mV. Vram 2150MHz w/ fast timings. Yeah that could possibly make a difference, all my case fans are running at 2,000 RPM - the 5 intake and 2 exhaust lol... yes I know overkill, but I like overclocking and I wanted to try to do it without water, so yeah I built my case around max airflow.


----------



## ZealotKi11er

Bart said:


> Nah bro, you didn't come across that way at all, you just provided good info. Plus I would never hate on a fellow metalhead. :-D
> 
> My ambient should be awesome, considering my rigs are in a cold Canadian basement. When I boot up, my liquid temps are around 19C, so it's a chilly room. But the fans on my card might be crap, the mount itself could be crap, etc. You just never know these days. Considering I'm sub-20C ambient, and I have 3 fans blowing up into the thing, with 3 fans pulling hot air off the card up top, and the rear exhaust configured as intake, I _should_ be in great shape for thermals. More testing is required I think. I can't get anywhere NEAR 20,500 in Time Spy, I just barely broke 20K.


What cooler do you have on the GPU?


----------



## LtMatt

SLAY3R8888 said:


> My exact settings are - 385W power limit in MPT, and everything else stock in MPT. 2500 - 2600 for set frequency. 1175mV. Vram 2150MHz w/ fast timings. Yeah that could possibly make a difference, all my case fans are running at 2,000 RPM - the 5 intake and 2 exhaust lol... yes I know overkill, but I like overclocking and I wanted to try to do it without water, so yeah I built my case around max airflow.


Software glitch, see post below.


----------



## Bart

ZealotKi11er said:


> What cooler do you have on the GPU?


It's stock ATM. The Asus TUF variant isn't reference, so no water blocks are available yet.

EDIT: oh for monitoring temps, I was using the Radeon driver overlay (CTRL-SHIFT-O), not sure how accurate that thing is.


----------



## LtMatt

SLAY3R8888 said:


> My exact settings are - 385W power limit in MPT, and everything else stock in MPT. 2500 - 2600 for set frequency. 1175mV. Vram 2150MHz w/ fast timings. Yeah that could possibly make a difference, all my case fans are running at 2,000 RPM - the 5 intake and 2 exhaust lol... yes I know overkill, but I like overclocking and I wanted to try to do it without water, so yeah I built my case around max airflow.


Reinstalled GPU-Z.

92c hotspot peak temperature, GPU-Z log attached and HWINFO64 and GPU-z now displaying the same temp.

Ambient 23c. I think you have a great contact between the heatsink and GPU die on your Merc, coupled with your great case airflow.


----------



## SLAY3R8888

LtMatt said:


> Reinstalled GPU-Z.
> 
> 92c hotspot peak temperature, GPU-Z log attached and HWINFO64 and GPU-z now displaying the same temp.
> 
> Ambient 23c. I think you have a great contact between the heatsink and GPU die on your Merc, coupled with your great case airflow.
> View attachment 2478988


Thanks for the test results. It appears that the Red Dead Online version doesn't have the benchmark feature; you have to buy the story mode. I thought this might be the case, but Steam lied to me when it clearly stated that "The content included with the standalone version of Red Dead Online is identical to what is included with RDR2". It definitely isn't identical to RDR2 lol, it doesn't include the story mode.
I have some other games in mind that have built-in benchmarks, which I will use to find my optimal Vram clock. I will report back.


----------



## LtMatt

SLAY3R8888 said:


> Thanks for the test results. It appears that the Red Dead Online version doesn't have the benchmark feature, you have to buy the story mode  I thought this might be the case, but Steam lied to me when it clearly stated that "The content included with the standalone version of Red Dead Online is identical to what is included with RDR2". It definitely doesn't include the identical content to RDR2 lol, it doesn't include the story mode.
> I have some other games in mind that have built in benchmarks, that I will use to find my optimal Vram clock. I will report back
> View attachment 2478990


I thought that was a bit cheap Lol.


----------



## SLAY3R8888

LtMatt said:


> I thought that was a bit cheap Lol.


Me too, I had to check it out though because of how that notice was worded... lol


----------



## SLAY3R8888

LtMatt said:


> I thought that was a bit cheap Lol.


Next I will try War Thunder, which is free and has a benchmark.


----------



## Bart

Yeah I'm starting to think I have a mount that isn't fantastic. I have 10 very good fans in this thing (Fractal Design Prisma AL-120s, PWM variant, 2.88 mmH2O static pressure), and even if I set them all to full blast, my 'hotspot' temp still hits over 100C, in a cold basement, on the very first run, LOL! Now that I have every fan in this rig set to "deafen old people" levels, this is what I got:



Notice the yellow highlighted bit. According to HWinfo64, I'm currently "current throttling" at my current settings (360W max / 7% power limit increase). Now here's a question for you hard-core tweakers: my clock max is set to 2750, but not once have I ever seen it hit 2700. It sticks at the high 2600s for the entire test, but never touches 2700. With that in mind, why does my score drop when I reduce the core from 2750 down to 2725? If the thing never crests 2700, I'm having trouble understanding how changing that limit from 2750 to 2725 drops the benchmark scores so significantly.


----------



## SLAY3R8888

Bart said:


> Yeah I'm starting to think I have a mount that isn't fantastic. I have 10 very good fans in this thing (Fractal Design Prisma AL-120s, PWM variant, 2.88mm pressure), and even if I set them all to full blast, my 'hotspot' temp still hits over 100C, in a cold basement, on the very first run, LOL! Now that I have every fan in this rig set to "deafen old people" levels, this is what I got:
> 
> 
> 
> Notice the yellow highlighted bit. According to HWinfo64, I'm currently "current throttling" at my current settings (360W max / 7% power limit increase). Now here's a question for you hard-core tweakers: my clock max is set to 2750, but not once have I ever seen it hit 2700. It sticks at high 2600s for the entire test, but never touches 2700. With that in mind, why does my score drop when I reduce the core from 2750 down to 2725? If the thing never crests 2700, I'm having trouble understanding how changing that limit between 2750 down to 2725 drops the benchmark scores so significantly.


When you drop it from 2750 to 2700, do your clock speeds go down? That would be why I would expect your score to drop. The way the driver works for these cards, it's not going to hold exactly at the max clock speed. It should be ~50MHz below the max clock set. But it can drop lower depending on other conditions such as temperature and power limit. So if it's holding just under 2700 when max is set at 2750, I would expect that it's holding around 2650 when you drop the max to 2700, if that makes sense.
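That rule of thumb (the card settles roughly 50 MHz below the set max clock, minus any further drop from temperature or power throttling) can be sketched as simple arithmetic. The 50 MHz offset is this thread's observation, not an AMD spec:

```python
# Rough sketch of the RDNA2 boost behaviour described above: the card
# typically holds ~50 MHz under the set max clock, and can drop further
# under temperature/power pressure. The offset is a thread rule of
# thumb, not a documented AMD value.

def expected_game_clock(max_clock_mhz, driver_offset=50, throttle_mhz=0):
    """Estimate the clock the card actually holds under load."""
    return max_clock_mhz - driver_offset - throttle_mhz

print(expected_game_clock(2750))  # ~2700 MHz
print(expected_game_clock(2700))  # ~2650 MHz
```

So a max-clock change shifts the whole held range down, even if the card never touched the old max.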


----------



## ZealotKi11er

Bart said:


> Yeah I'm starting to think I have a mount that isn't fantastic. I have 10 very good fans in this thing (Fractal Design Prisma AL-120s, PWM variant, 2.88mm pressure), and even if I set them all to full blast, my 'hotspot' temp still hits over 100C, in a cold basement, on the very first run, LOL! Now that I have every fan in this rig set to "deafen old people" levels, this is what I got:
> 
> 
> 
> Notice the yellow highlighted bit. According to HWinfo64, I'm currently "current throttling" at my current settings (360W max / 7% power limit increase). Now here's a question for you hard-core tweakers: my clock max is set to 2750, but not once have I ever seen it hit 2700. It sticks at high 2600s for the entire test, but never touches 2700. With that in mind, why does my score drop when I reduce the core from 2750 down to 2725? If the thing never crests 2700, I'm having trouble understanding how changing that limit between 2750 down to 2725 drops the benchmark scores so significantly.


2750MHz is the target clock. It is always in fluctuation. Based on the load you will get more vdrop hence why you never really hit 2750.


----------



## Bart

SLAY3R8888 said:


> When you drop it from 2750 to 2700, do your clock speeds go down? That would be why I would expect your score to drop. The way the driver works for these cards, it's not going to hold exactly at the max clock speed. It should be ~50MHz below the max clock set. But it can drop lower depending on other conditions such as temperature and power limit. So if it's holding just under 2700 when max is set at 2750, I would expect that it's holding around 2650 when you drop the max to 2700, if that makes sense.


It's kinda funny, the clocks _do_ go down overall, but the peaks are still high, like I still see it hitting 2660+ in places. But overall the clocks and scores drop a little. Now I'm experimenting with your 385W limit, and I find I still have to give it +7% in the power limit slider to get Time Spy to pass at that limit. It seems this GPU just likes to peak at over 400W in certain places, especially that 2nd graphics test in Time Spy. According to Hwinfo64, I'm still current limited even though I peaked at 411W. I wonder what the actual voltage / current limits are on this Asus TUF card.


----------



## SLAY3R8888

Bart said:


> It's kinda funny, the clocks _do_ go down overall, but the peaks are still high, like I still see it hitting 2660+ in places. But overall the clocks and scores drop a little. Now I'm experimenting with your 385W limit, and I find I still have to give it +7% in the power limit slider to get Time Spy to pass at that limit. It seems this GPU just likes to peak at over 400W in certain places, especially that 2nd graphics test in Time Spy. According to Hwinfo64, I'm still current limited even though I peaked at 411W. I wonder what the actual voltage / current limits are on this Asus TUF card.


Ok so when I said I was using a 385W power limit, I meant the total, including the % in the power slider. Whatever % your power slider is set at applies to the power limit you set in MPT. So, for example, I have mine at 335W in MPT and +15% power slider = 385W actual power limit. I could also set it at 385W in MPT and put the slider at 0% and it would be the same result. Just making sure you understood that. 

I don't understand why HWInfo is saying you are current limited or whatever, I couldn't view your last photo of it so I'm not sure what that's all about. I haven't run into an issue like that. 

If your clock speeds are hovering around roughly 50MHz below your max clock speed, give or take 20MHz, then it's behaving how it should.


----------



## SLAY3R8888

SLAY3R8888 said:


> Ok so when I said I was using a 385W power limit, I meant the total, including the % in the power slider. Whatever % your power slider is set at applies to the power limit you set in MPT. So, for example, I have mine at 335W in MPT and +15% power slider = 385W actual power limit. I could also set it at 385W in MPT and put th slider at 0% and it would be the same result. Just making sure you understood that.
> 
> I don't understand why HWInfo is saying you are current limited or whatever, I couldn't view your last photo of it so I'm not sure what that's all about. I haven't run into an issue like that.
> 
> If your clock speeds are hovering around roughly 50MHz below your max clock speed, give or take 20MHz, then it's behaving how it should.


Also yes, I have used a 400w limit before and it will hit that limit depending on my settings. I just have it at 385w for now because it seemed like a safe spot for me where it doesn't hit the limit very often and allows me to maintain ~2500 - 2550MHz speed for the whole test (2500/2600 set min/max). If you're pushing for much higher clocks and still giving it 1175mV, then you will definitely need a higher power limit.


----------



## Bart

SLAY3R8888 said:


> Ok so when I said I was using a 385W power limit, I meant the total, including the % in the power slider. Whatever % your power slider is set at applies to the power limit you set in MPT. So, for example, I have mine at 335W in MPT and +15% power slider = 385W actual power limit. I could also set it at 385W in MPT and put th slider at 0% and it would be the same result. Just making sure you understood that.
> 
> I don't understand why HWInfo is saying you are current limited or whatever, I couldn't view your last photo of it so I'm not sure what that's all about. I haven't run into an issue like that.
> 
> If your clock speeds are hovering around roughly 50MHz below your max clock speed, give or take 20MHz, then it's behaving how it should.


Cool, I wasn't sure if you were setting 385 or using the power slider, thanks for clarifying! Maybe I should calm mine down, LOL! I just wanted to break 20K in Time Spy, since my OCD doesn't like to get a score like 19,939.  You seem to score a lot higher with less clock speed than I do, but I'm not too worried about temps. I'm just trying to learn the limits of the card.


----------



## SLAY3R8888

Bart said:


> Cool, I wasn't sure if you were setting 385 or using the power slider, thanks for clarifying! Maybe I should calm mine down, LOL! I just wanted to break 20K in Time Spy, since my OCD doesn't like to get a score like 19,939.  You seem to score a lot higher with less clock speed than I do, but I'm not too worried about temps. I'm just trying to learn the limits of the card.


One thing I would recommend to help you break 20k, is to close any extra software running on your PC that you don't need. Close all monitoring tools, etc. Maybe even do a fresh reboot and then close everything in your system tray that you can before you do the runs. Once I know my temps are under control and I'm not going to crash and burn, I will close any monitoring software like HWinfo, etc and all other software to help eke out that extra bit of score. Any little bit can help. It could even slightly help to run 3DMark as admin.


----------



## SLAY3R8888

SLAY3R8888 said:


> One thing I would recommend to help you break 20k, is to close any extra software running on your PC that you don't need. Close all monitoring tools, etc. Maybe even do a fresh reboot and then close everything in your system tray that you can before you do the runs. Once I know my temps are under control and I'm not going to crash and burn, I will close any monitoring software like HWinfo, etc and all other software to help eek out that extra bit of score. Any little bit can help. It could even slightly help to run 3DMark as admin.


Also I have noticed with my testing that any variation of temps can affect your score, even if the temps still appear to be within an acceptable range. It can affect how consistently your card holds boost clocks, etc. When I made my fan curve slightly less aggressive (hitting 100% fan speed at 65c instead of 55c), I noticed I couldn't quite hit the same scores again until I put it back to the more aggressive profile. My temps were barely even higher with the other curve, but apparently it was making some sort of a difference towards my score. At least that was my conclusion after doing several runs.
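The two curves being compared can be sketched as a simple linear ramp. Only the 55c-vs-65c full-speed points come from the post; the 40c / 40% lower anchor is made up for illustration:

```python
# Illustrative fan curve: linear ramp from a low anchor (40C, 40% -
# an assumed value, not from the thread) up to 100% at full_speed_at.
# The aggressive profile pins at 100% by 55C, the relaxed one by 65C.

def fan_speed(temp_c, full_speed_at):
    """Fan duty % for a given temperature on a linear two-point curve."""
    lo_t, lo_pct = 40, 40
    if temp_c <= lo_t:
        return lo_pct
    if temp_c >= full_speed_at:
        return 100
    frac = (temp_c - lo_t) / (full_speed_at - lo_t)
    return lo_pct + frac * (100 - lo_pct)

# At 60C the aggressive curve is already pinned at 100%,
# while the relaxed one is still ramping:
print(fan_speed(60, full_speed_at=55))  # 100
print(fan_speed(60, full_speed_at=65))  # 88.0
```

That gap in fan speed at mid-load temperatures is one plausible reason the relaxed curve cost a little score despite similar peak temps.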


----------



## Bart

SLAY3R8888 said:


> Also I have noticed with my testing that any variation of temps can effect your score, even if the temps still appear to be within an acceptable range. It can effect how consistently your card holds boost clocks, etc. When I made my fan curve slightly less aggressive (hitting 100% fan speed at 65c instead of 55c), I noticed I couldn't quite hit the same scores again until I put it back to the more aggressive profile. My temps were barely even higher with the other curve, but apparently it was making some sort of a difference towards my score. At least that was my conclusion after doing several runs.


Yup, I've noticed this myself, air temps / flow make the difference between a run crashing or completing. Until I maxed out all 10 fans in my case (and set the GPU fans to full-time 100%), I wasn't stable at 2750. It's touchy as heck. But much fun to mess around with.  I'm within striking distance of the top scores for people with my combo, so the power is there for sure. I can only hope a good water block hits the market for this thing, but that might be a long wait and not many options (maybe Aquacomputer).


----------



## SLAY3R8888

Bart said:


> Yup, I've noticed this myself, air temps / flow make the difference between a run crashing or completing. Until I maxxed out all 10 fans in my case (and set the GPU fans to full-time 100%), I wasn't stable at 2750. It's touchy as heck. But much fun to mess around with.  I'm within striking distance of the top scores for people with my combo, so the power is there for sure. I can only hope a good water block hits the market for this thing, but that might be a long wait and not many options (maybe Aquacomputer).


I know this is an OC group and we are all trying to get the best performance/scores possible, but tbh if your card is even near 20k graphics score in TimeSpy then it's performing beastly as it should 💪.. if that makes you feel any better lol


----------



## ZealotKi11er

SLAY3R8888 said:


> I know this is an OC group and we are all trying to get the best performance/scores possible, but tbh if your card is even near 20k graphics score in TimeSpy then it's performing beastly as it should 💪.. if that makes you feel any better lol


6900 xt needs at least 21k. Someone in 6800xt club has 21.4k.


----------



## SLAY3R8888

ZealotKi11er said:


> 6900 xt needs at least 21k. Someone in 6800xt club has 21.4k.


Lol... well, for 3DMark Time Spy graphics score, when filtering by 1 GPU: 6800xt average score = 18144, 6900xt avg = 19047. 6800xt top score is 21302, 6900xt top score = 22059.

So whoever has 21.4k is faster than every 6800xt in the world on the 3DMark Timespy results list.

Still though...20,600 puts you in the top 100 (top 1%) of all 6800xt scores, so that's pretty good.

I think 6900xt's could go higher if they could get more voltage.... it is true that you can't raise the voltage in MPT, correct? Only lower it? I haven't tried raising it, but that's what my card needs to get to 21k graphics score. I just need a little more voltage to get stable at the next couple frequency steps. I think I had read that the driver will freak out on you if you try to raise the voltage limit higher than 1175mV, but I have never tried it. I ran 1206mV on my 5700xt lol. Sometimes just a little more voltage will get you there.

I think 6900xt's are at a disadvantage here due to being capped at 1175mV. They like a LOT more voltage than the 6800xt's from what I've seen.


----------



## SLAY3R8888

LtMatt said:


> @SLAY3R8888
> Did some memory performance testing in Red Dead Redemption 2 and again i saw the behaviour play out as i mentioned before, 2112Mhz appears to be the sweet spot, at least in games.
> 
> Testing was done using the built in game benchmark.
> 
> 2160P
> RDR2 DX12
> Ultra Settings - Tree Tess Off, MSAA/Reflection Off Water quality High, Refraction/Reflection 3/4.
> TAA enabled and sharpness 100%
> 5950X PBO
> 6900 XT 2270-2370Mhz Undervolted to 1.000v (actual in game core clock is minimum of 2300Mhz, peaks up to 2325Mhz)
> 21.2.2
> Fast Timings On
> No MPT, using default board power limit 289W+15% power limit so no chance of power limit affecting results as power draw was 220W-240W ish according to HWINFO64.
> 
> 2012Mhz Memory Clock
> 
> 
> 2112Mhz Memory Clock
> 
> 
> 2150Mhz Memory Clock
> 
> 
> This behaviour in memory clock has been the same now on 1x6800 XT MBA, 1x6900 XT MBA, and 2x 6900 XT Mercs.


So I did a bunch of VRam clock testing using War Thunder's "Tank Battle" benchmark. If anyone is looking for a free game that has great benchmarks built in, to dial in your VRam clock, I highly recommend War Thunder. It has several different ones, and tons of graphics settings you can customize. After doing back to back runs with different graphics settings, going back and forth to different VRam clocks to ensure consistent results, it looks like 2124MHz set (2112 Effective) was the winner for the best VRam clock for my GPU. I think this might be the exact same setting that you came up with @LtMatt . I didn't have to use 4k to get these results, I used 1440p with really high graphics settings. There was a pretty noticeable drop in benchmark results from 2124MHz to 2150MHz VRam clock. I also tried many other VRam clock speeds, lower and higher at all sorts of increments -- 2124 was the best.

If anyone isn't wanting to go through all of this effort, just set your VRam clock at 2112 or 2124, somewhere right in there. I am confident this will work well for you and be optimal. @LtMatt has confirmed this with his testing on 4 different 6900xt's, and it looks like I got a similar result with mine as well. I am glad to be done dialing in the VRam, as it's a pain in the ass and is definitely the least fun part of overclocking imo .

I then went and did some TimeSpy runs, and was noticing more consistent results back to back at 2500/2600 core with the new VRam clock (I was getting 20400 graphics score every single run), so I cranked the fans up full blast and was able to bump the card up to 2520 / 2620, and it made it through the test just fine. Got a 20,602 graphics score. *I think I found my new overclock settings for gaming: 2520 / 2620 MHz, 1175mV, 385w total power limit, 2124 MHz VRam clock w/ Fast Timings*. I still think this card could go higher if I was just able to bump the voltage up past 1175mV. At 2550 / 2650 it fails TimeSpy, and I'm pretty sure that with a tad more voltage it would be stable. Temps are great. If anyone knows of a way to raise the voltage, please let me know. (I don't think you can with MPT because of the driver freaking out? Haven't tried it yet. I have only used it to lower voltages so far, which obviously my card did not like.)

You can see the two little hiccups for the GPU frequency in the graph (the pink line), those are the two times where it hits the 385w power limit and slightly downclocks for a brief moment. I could raise the limit again to 400w and eke out a few more score points, but I didn't feel like messing with it right now.

Running my 5800x at 4.675GHz @ 1.35V all core OC.
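The sweep workflow described above (set a VRAM clock, run the benchmark a few times, average, repeat, keep the best) can be sketched as a loop. Here `set_vram_clock` and `run_benchmark` are placeholders for the manual driver change and the in-game benchmark run, not real APIs, and the scores are made-up numbers that mimic the 2124 MHz sweet spot found in the thread:

```python
# Sketch of the manual VRAM-clock sweep. The two callbacks stand in for
# "change the clock in the driver" and "run the benchmark, read the
# score"; neither is a real API.
from statistics import mean

def sweep_vram(clocks, runs_per_clock, set_vram_clock, run_benchmark):
    """Return the clock with the best average benchmark score."""
    results = {}
    for clock in clocks:
        set_vram_clock(clock)
        # Average several runs so one noisy result doesn't pick the winner.
        results[clock] = mean(run_benchmark() for _ in range(runs_per_clock))
    return max(results, key=results.get)

# Simulated scores echoing the thread's finding: pushing the set clock
# past ~2124 MHz costs performance instead of gaining it.
state = {}
fake_scores = {2100: 140.0, 2112: 141.5, 2124: 142.0, 2150: 138.5}
best = sweep_vram(
    clocks=[2100, 2112, 2124, 2150],
    runs_per_clock=3,
    set_vram_clock=lambda c: state.update(clock=c),
    run_benchmark=lambda: fake_scores[state["clock"]],
)
print(best)  # 2124
```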


----------



## delgon

I got a 6900xt reference one just 1 week ago and I'm very pleased with it 
TBH I was waiting for 3080 to arrive but well, still waiting after 4 months (queue system) and my previous GPU just suddenly died (980 ti) so I had to get something now and AMDs were much cheaper (and available) in my country.
Tomorrow I should receive my waterblock for it so I'll have something to test 

I was also messing with increasing the Power Limit with MPT. Right now I'm at 290 in MPT +15% from Radeon Software which gives around 337W PPT.
I did not want to mess with it that much on air and reference cooler but wanna do something more when watercooling as everything should get much cooler.

My question is, what do you think is a reasonably safe power limit to set (either just in MPT +0%, or MPT +15%, as it should be the same)? I'm not sure how powerful the reference power delivery is and I don't wanna blow it up 

@SLAY3R8888 did you try Port Royal? I find its results much more stable and can find unstable GPU core clocks much faster. For example, I can run Time Spy just fine a few times but Port Royal will crash literally 1s after it starts with those clocks


----------



## SLAY3R8888

delgon said:


> I got a 6900xt reference one just 1 week ago and I'm very pleased with it
> TBH I was waiting for 3080 to arrive but well, still waiting after 4 months (queue system) and my previous GPU just suddenly died (980 ti) so I had to get something now and AMDs were much cheaper (and available) in my country.
> Tomorrow I should receive my waterblock for it so I'll have something to test
> 
> I was also messing with increasing the Power Limit with MPT. Right now I'm at 290 in MTP +15% from Radeon Software which gives around 337W PPT.
> I did not want to mess with it that much on air and reference cooler but wanna do something more when watercooling as everything should get much cooler.
> 
> My question is, what do you think is kinda reasonable safe Power Limit that can be set (either just in MPT +0% or MPT +15% as it should be the same). I'm not sure how powerful is the reference power delivery and do not wanna blow it up
> 
> @SLAY3R8888 did you try Port Royal? I find its results much more stable and can find unstable GPU core clocks much faster. For example, I can run Time Spy just fine a few times but Port Royal will crash literally 1s after it starts with those clocks


Congrats on your 6900xt purchase! I personally think you made a good choice. Welcome to the club. The 6900xt is an awesome GPU, I have been loving mine. I had the chance to buy a 3080 and I opted for this instead. It's better for my use case (1440p Ultra gaming, no ray tracing), and also has a lot more VRam than the 3080 which is good.

It's hard to say what is a "safe" power limit, but I think I have seen people go up to around 350w with "confidence" that it's "safe" on a reference card. On my Merc319 version, I have taken it up to 400w power limit and watched it hit the power limit, and it handled it just fine, aka nothing crashed or burned.. lol. For peace of mind I have been leaving mine at 385w power limit. I would venture to say that as long as your temps are under control and not going over 100c junction often, you could probably push into similar territory. Maybe stay a bit closer to 350w to be safe on the reference. Do your own research and do what you are comfortable with. And it doesn't matter how you do the power slider, just make sure it adds up with the slider to the total you're looking for. For example, mine is 335 + 15% slider = 385w. I could also put it at 385 + 0% on the slider, would come out the same either way. I did mine using the +15% method so I won't accidentally bump the slider up in Adrenaline and push a much higher power limit by mistake.

ewww Port Royal, a Ray Tracing bench?  jk.. lol. I had never tried Port Royal before, because I don't have interest in running Ray Tracing, but I did it, and I made it through the run on my first try. Here are my results with the same OC settings mentioned in my previous post:


----------



## LtMatt

SLAY3R8888 said:


> So I did a bunch of VRam clock testing using War Thunder's "Tank Battle" benchmark. If anyone is looking for a free game that has great benchmarks built in, to dial in your VRam clock, I highly recommend War Thunder. It has several different ones, and tons of graphics settings you can customize. After doing back to back runs with different graphics settings, going back and forth to different VRam clocks so ensure consistent results, it looks like 2124MHz set (2112 Effective) was the winner for the best VRam clock for my GPU. I think this might be the exact same setting that you came up with @LtMatt . I didn't have to use 4k to get these results, I used 1440p with really high graphics settings. There was a pretty noticeable drop in benchmark results from 2124MHz to 2150MHz VRam clock. I also tried many other VRam clock speeds, lower and higher at all sorts of increments -- 2124 was the best.
> 
> If anyone isn't wanting to go through all of this effort, just set your VRam clock at 2112 or 2124, somewhere right in there. I am confident this will work well for you and be optimal. @LtMatt has confirmed this with his testing on 4 different 6900xt's, and it looks like I got a similar result with mine as well. I am glad to be done dialing in the VRam, as it's a pain in the ass and is definitely the least fun part of overclocking imo .
> 
> I then went and did some TimeSpy runs, and was noticing more consistent results back to back at 2500/2600 core with the new VRam clock (I was getting 20400 graphics score every single run), so I cranked the fans up full blast and was able to bump the card up to 2520 / 2620, and it made it through the test just fine. Got a 20,602 graphics score. *I think I found my new overclock settings for gaming: 2520 / 2620 MHz, 1175mV, 385w total power limit, 2124 MHz VRam clock w/ Fast Timings*. I still think this card could go higher if I was just able to bump the voltage up past 1175mV. At 2550 / 2650 it fails TimeSpy, and I'm pretty sure that with a tad more voltage it would be stable. Temps are great. If anyone knows of a way to raise the voltage, please let me know. (I don't think you can with MPT because of the driver freaking out? Haven't tried it yet. I have only used it to lower voltages so far, which obviously my card did not like.)
> 
> You can see the two little hiccups for the GPU frequency in the graph (the pink line), those are the two times where it hits the 385w power limit and slightly downclocks for a brief moment. I could raise the limit again to 400w and eek out a few more score points, but I didn't feel like messing with it right now.
> 
> Running my 5800x at 4.675GHz @ 1.35V all core OC.
> 
> View attachment 2479053


Good work and testing, great to see you get the same results. What driver version are you using?


----------



## SLAY3R8888

LtMatt said:


> Good work and testing great to see you get the same results. What driver version are you using?


Thanks! 21.2.1... just looked and realized a new one came out a few days ago. Have you tried 21.2.2 yet? I will be checking it out soon.


----------



## SLAY3R8888

SLAY3R8888 said:


> Thanks! 21.2.1... just looked and realized a new one came out a few days ago. Have you tried 21.2.2 yet? I will be checking it out soon.


Just installed the new driver (21.2.2). Didn't see any difference in TimeSpy scores or overclocking potential. It feels the same to me so far.


----------



## LtMatt

SLAY3R8888 said:


> Thanks! 21.2.1... just looked and realized a new one came out a few days ago. Have you tried 21.2.2 yet? I will be checking it out soon.


Yes, not much difference, as you found. I'm on 21.2.1 here atm.

Achieved this score with the following:

2470-2570Mhz Core @1.175v
2112Mhz Memory
100% fan Speed
385W+15% PL
5950X PBO










I was then able to lower the voltage to around 1.137v (any lower and I get a soft Time Spy crash, you know the one) and got this score. Slightly lower in graphics score, slightly higher in CPU score. Reduced peak power draw from 386W to 350W.


----------



## SLAY3R8888

LtMatt said:


> Yes not much difference as you found. On 21.2.1 here atm.
> 
> Achieved this score with the following:
> 
> 2470-2570Mhz Core @1.175v
> 2112Mhz Memory
> 100% fan Speed
> 385W+15% PL
> 5950X PBO
> 
> View attachment 2479062
> 
> 
> I was then able to lower voltage to around 1.137v (any lower and i get a soft timespy crash you know the one) and got this score. Slightly lower in graphics score, slightly higher in CPU score. Reduced peak power draw from 386W to 350W.
> 
> View attachment 2479063


Very nicely done! Good stuff!


----------



## LtMatt

SLAY3R8888 said:


> Very nicely done! Good stuff!


I'm now testing the max limits for this Merc. Safe to say my old Merc was better quality silicon, could achieve slightly higher clock speeds at less voltage. This new one looks to be around 1% worse on graphics score in Timespy. The benefit of this one though is better temperatures. Swings and roundabouts.


----------



## newls1

LtMatt said:


> Yes not much difference as you found. On 21.2.1 here atm.
> 
> Achieved this score with the following:
> 
> 2470-2570Mhz Core @1.175v
> 2112Mhz Memory
> 100% fan Speed
> 385W+15% PL
> 5950X PBO
> 
> View attachment 2479062
> 
> 
> I was then able to lower voltage to around 1.137v (any lower and i get a soft timespy crash you know the one) and got this score. Slightly lower in graphics score, slightly higher in CPU score. Reduced peak power draw from 386W to 350W.
> 
> View attachment 2479063


Are you watercooled? And are you setting 335 for watts and using the 15% slider for 385 total? What for TDC in MPT? Are you changing anything else in MPT? Can I see a pic of your MPT page? That would be a huge help. Having a hard time trying to find a happy medium while I'm still on the factory Asus TUF heatsink, as I'm waiting for a WB to come in stock from Performance PCs for this GPU. It's coming, as Alphacool lists a part number (11955), just having to wait is killing me. Been thinking about selling this card and just getting a 3090, but having a hard time spending another $1000 above the cost after selling this GPU!


----------



## newls1

SLAY3R8888 said:


> Congrats on your 6900xt purchase! I personally think you made a good choice. Welcome to the club. The 6900xt is an awesome GPU, I have been loving mine. I had the chance to buy a 3080 and I opted for this instead. It's better for my use case (1440p Ultra gaming, no ray tracing), and also has a lot more VRam than the 3080 which is good.
> 
> It's hard to say what is a "safe" power limit, but I think have seen people go up to around 350w with "confidence" that it's "safe" on a reference card. On my Merc319 version, I have taken it up to 400w power limit and watched it hit the power limit, and it handled it just fine, aka nothing crashed or burned.. lol. For peace of mind I have been leaving mine at 385w power limit. I would venture to say that as long as your temps are under control and not going over 100c junction often, you could probably push into similar territory. Maybe stay a bit closer to 350w to be safe on the reference. Do your own research and do what you are comfortable with. And it doesn't matter how you do the power slider, just make sure it adds up with the slider to the total you're looking for. For example, mine is 335 + 15% slider = 385w. I could also put it at 385 + 0% on the slider, would come out the same either way. I did mine using the +15% method so I won't accidentally bump the slider up in Adrenaline and push a much higher power limit by mistake.
> 
> ewww Port Royal, a Ray Tracing bench?  jk.. lol. I had never tried Port Royal before, because I don't have interest in running Ray Tracing, but I did it, and I made it through the run on my first try. Here are my results with the same OC settings mentioned in my previous post:
> 
> View attachment 2479056


I may have quoted the wrong person above for this question (sorry LtMatt)! I think I should have quoted Slay3r...

Are you watercooled? Are you setting 335 for watts and using the 15% slider for 385W total? And what TDC in MPT? Are you changing anything else in MPT? Can I see a pic of your MPT page? That would be a huge help. I'm having a hard time finding a happy medium while I'm still on the factory Asus TUF heatsink, waiting for a waterblock for this GPU to come in stock at Performance PCs. It is coming, as Alphacool lists a part number (11955); just having to wait is killing me. I've been thinking about selling this card and just getting a 3090, but I'm having a hard time spending another $1000 above the cost after selling this GPU!


----------



## Bart

newls1 said:


> I may have quoted the wrong person above for this question (sorry LtMatt)! I think I should have quoted Slay3r...
> 
> Are you watercooled? Are you setting 335 for watts and using the 15% slider for 385W total? And what TDC in MPT? Are you changing anything else in MPT? Can I see a pic of your MPT page? That would be a huge help. I'm having a hard time finding a happy medium while I'm still on the factory Asus TUF heatsink, waiting for a waterblock for this GPU to come in stock at Performance PCs. It is coming, as Alphacool lists a part number (11955); just having to wait is killing me. I've been thinking about selling this card and just getting a 3090, but I'm having a hard time spending another $1000 above the cost after selling this GPU!


You got a part number for the alphacool block for our cards??!?!?! NICE!! I've been waiting too. I wouldn't sell this card just yet man, unless you really need Nvidia features. If we can ever unlock the voltage on these TUF cards and get them underwater, they might have all sorts of crazy potential.


----------



## newls1

Bart said:


> You got a part number for the alphacool block for our cards??!?!?! NICE!! I've been waiting too. I wouldn't sell this card just yet man, unless you really need Nvidia features. If we can ever unlock the voltage on these TUF cards and get them underwater, they might have all sorts of crazy potential.


Yes, it states 11955 on Alphacool's site.


----------



## Bart

newls1 said:


> yes, states 11955 on alphacools site


That doesn't say 6900XT though, that scares me. In other listings for blocks that support both, they clearly state "6800 XT / 6900 XT", that one only says 6800XT. Have they confirmed 6900XT fitment anywhere?


----------



## newls1

OK, I need some pro help here to understand what's going on! I just redid my MPT settings with the following:

335 (watts)
390 (TDC)
Clicked "write SPPT"; a window came up and said "successfully added", so I rebooted the machine. Upon reboot, I went into Wattman, set the min slider to 500MHz (all the way to the bottom), set the max slider to 2750, and used a 12% power limit.

Ran TimeSpy and hit CTRL+SHIFT+O so I could watch my wattage, temps, etc. Hard cap @ 375W (which adds up perfectly), and of course the hotspot temp was on edge in test 2, but it completed and GOT THE HIGHEST SCORE EVER, WHICH BLEW ME AWAY. HERE, TAKE A LOOK! Why do you think my score improved by over 1000 points since the last run? I only changed two things: redid my MPT settings and changed the min slider to 500MHz (lowest possible). I'll never understand AMD drivers! Do you think this is a good score for this card?
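The power-limit math being traded around here is just a percentage bump on the MPT base wattage. A minimal sketch (Python, purely illustrative, not any AMD tool):

```python
def effective_power_limit(mpt_watts: int, slider_percent: int) -> int:
    """Effective board power limit: the MPT base wattage scaled by the
    Adrenalin power-limit slider (a simple percentage increase)."""
    return round(mpt_watts * (1 + slider_percent / 100))

# 335W base + 12% slider lands on the 375W hard cap observed above
print(effective_power_limit(335, 12))  # 375
# 335W base + 15% slider is interchangeable with 385W base + 0%
print(effective_power_limit(335, 15))  # 385
print(effective_power_limit(385, 0))   # 385
```

Which is why it doesn't matter how you split it between MPT and the slider, as long as the product comes out to the total you're after.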


----------



## Bart

Dude my OCD would be going BANANAS being that close to the top score for your combo, LOL!


----------



## SLAY3R8888

newls1 said:


> I may have quoted the wrong person above for this question (sorry LtMatt)! I think I should have quoted Slay3r...
> 
> Are you watercooled? Are you setting 335 for watts and using the 15% slider for 385W total? And what TDC in MPT? Are you changing anything else in MPT? Can I see a pic of your MPT page? That would be a huge help. I'm having a hard time finding a happy medium while I'm still on the factory Asus TUF heatsink, waiting for a waterblock for this GPU to come in stock at Performance PCs. It is coming, as Alphacool lists a part number (11955); just having to wait is killing me. I've been thinking about selling this card and just getting a 3090, but I'm having a hard time spending another $1000 above the cost after selling this GPU!


I can't send you a picture of my MPT settings at the moment, but you have it exactly right. I changed Power Limit (W) to 335w in MPT and did +15% on power limit slider, for 385w total. It seems just right where it barely kisses the limit a time or two.

Nothing else has been changed in MPT for me, just that one setting. Everything else is stock. I've never messed with TDC, as I assume it's not any kind of real limitation, just like with Ryzen CPUs... just a rating of sorts. I could be wrong though.


----------



## newls1

Bart said:


> Dude my OCD would be going BANANAS being that close to the top score for your combo, LOL!


Is there a scoreboard somewhere? How do you know this is a near-top score?? Clue me in!


----------



## SLAY3R8888

newls1 said:


> OK, I need some pro help here to understand what's going on! I just redid my MPT settings with the following:
> 
> 335 (watts)
> 390 (TDC)
> Clicked "write SPPT"; a window came up and said "successfully added", so I rebooted the machine. Upon reboot, I went into Wattman, set the min slider to 500MHz (all the way to the bottom), set the max slider to 2750, and used a 12% power limit.
> 
> Ran TimeSpy and hit CTRL+SHIFT+O so I could watch my wattage, temps, etc. Hard cap @ 375W (which adds up perfectly), and of course the hotspot temp was on edge in test 2, but it completed and GOT THE HIGHEST SCORE EVER, WHICH BLEW ME AWAY. HERE, TAKE A LOOK! Why do you think my score improved by over 1000 points since the last run? I only changed two things: redid my MPT settings and changed the min slider to 500MHz (lowest possible). I'll never understand AMD drivers! Do you think this is a good score for this card?
> 
> View attachment 2479133


This is very interesting, congrats on the high score!! Sounds like YOU need to be giving US your pro secrets 😏 

Can you please tell me exactly what your old settings were when you scored 1000 points lower, and what you changed them to? I know you halfway told me what you changed, but I want to know what the values were beforehand: clock speed min/max before and after, and MPT settings before and after.

That might help narrow down what gave you such a high score. Could setting the min clock down to 500MHz somehow be helping? I don't see how it would, but I have never tried this before; I always do a 1000MHz spread, and I have never tried changing TDC before. Maybe TDC has some sort of impact on how the power delivery works. The reason I haven't messed with it is that I haven't had any power delivery issues just raising the watts, and I'm scared to play with settings I don't completely understand. I know when OC'ing Ryzen it doesn't matter if you raise TDC, it just goes past the artificial limit anyway; it's just a rating, per se. Idk how it works with these GPUs though.

I will try some things and see if I can replicate what happened to you. That would be awesome. However, the realist side in me says that my chip isn't going to go any higher without more voltage. Please send the full list of old/new settings when you can. I'm going to jump on my computer and change the TDC and see what happens.


----------



## newls1

SLAY3R8888 said:


> I can't send you a picture of my MPT settings at the moment, but you have it exactly right. I changed Power Limit (W) to 335w in MPT and did +15% on power limit slider, for 385w total. It seems just right where it barely kisses the limit a time or two.
> 
> Nothing else has been changed in MPT for me. Just that one setting. Everything else is stock. Never messed with TDC, as I assume it's not any kind of real limitation, just like with Ryzen CPUs.. just a rating of sorts.. I could be wrong though.


Cool man, thanks... Can't wait for a waterblock. The hotspot is HATING LIFE being on air!


----------



## SLAY3R8888

newls1 said:


> Is there a score board somewhere? How do you know this is a near top score?? Clue me in!


Go to 3DMark.com, click the Results tab, and search for the criteria you want. You can filter by graphics score, certain CPU/GPU combos, etc. Also make sure to select GPU count = 1 so you're not competing with Crossfire setups. You can see the top TimeSpy score for any CPU/GPU combo. For example, yesterday I was happy to see that I'm getting a higher score than even the highest-scoring 3080 paired with my same CPU.


----------



## Bart

newls1 said:


> Is there a score board somewhere? How do you know this is a near top score?? Clue me in!


Look at the pic you posted, at the little graph on the left side. Unless my old eyeballs are lying, you are eerily close to the top score ever for your CPU / GPU combo.


----------



## SLAY3R8888

newls1 said:


> I may have quoted the wrong person above for this question (sorry LtMatt)! I think I should have quoted Slay3r...
> 
> Are you watercooled? Are you setting 335 for watts and using the 15% slider for 385W total? And what TDC in MPT? Are you changing anything else in MPT? Can I see a pic of your MPT page? That would be a huge help. I'm having a hard time finding a happy medium while I'm still on the factory Asus TUF heatsink, waiting for a waterblock for this GPU to come in stock at Performance PCs. It is coming, as Alphacool lists a part number (11955); just having to wait is killing me. I've been thinking about selling this card and just getting a 3090, but I'm having a hard time spending another $1000 above the cost after selling this GPU!


Sorry, I forgot to answer part of your question. No, I am air cooled, not watercooled: an XFX Merc319 with 7 case fans on full blast. Lots and lots of airflow. The cooler on the Merc is beefy as hell, and it's doing a good job.

I have no plans for water cooling; this card hasn't even gotten close to 90c junction yet, and usually barely gets over 80c. I'm sure with a bit more voltage I would find a thermal limit soon enough...


----------



## SLAY3R8888

Bart said:


> Look at the pic you posted, at the little graph on the left side. Unless my old eyeballs are lying, you are eerily close to the top score ever for your CPU / GPU combo.


Yes I was looking at different combos yesterday, and off the top of my head he should be right up there at the top. That is a REALLY good score that he just got.


----------



## newls1

SLAY3R8888 said:


> This is very interesting, congrats on the high score!! Sounds like YOU need to be giving US your pro secrets 😏
> 
> Can you please tell me exactly what your old settings were when you scored 1000 points lower, and what you changed them to? I know you halfway told me what you changed, but I want to know what the values were beforehand: clock speed min/max before and after, and MPT settings before and after.
> 
> That might help narrow down what gave you such a high score. Could setting the min clock down to 500MHz somehow be helping? I don't see how it would, but I have never tried this before; I always do a 1000MHz spread, and I have never tried changing TDC before. Maybe TDC has some sort of impact on how the power delivery works. The reason I haven't messed with it is that I haven't had any power delivery issues just raising the watts, and I'm scared to play with settings I don't completely understand. I know when OC'ing Ryzen it doesn't matter if you raise TDC, it just goes past the artificial limit anyway; it's just a rating, per se. Idk how it works with these GPUs though.
> 
> I will try some things and see if I can replicate what happened to you. That would be awesome. However, the realist side in me says that my chip isn't going to go any higher without more voltage. Please send the full list of old/new settings when you can. I'm going to jump on my computer and change the TDC and see what happens.


My prior Wattman settings were 2650 min / 2750 max, and my MPT settings were 325 (watts) / 375 (TDC).

New settings:

Wattman: 500MHz min / 2750 max
MPT: 335 (watts) / 390 (TDC)

Applied the SPPT and rebooted...

Ran TimeSpy and got the above results.


----------



## newls1

SLAY3R8888 said:


> Yes I was looking at different combos yesterday, and off the top of my head he should be right up there at the top. That is a REALLY good score that he just got.


checking now, thanks guys


----------



## newls1

Bart said:


> That doesn't say 6900XT though, that scares me. In other listings for blocks that support both, they clearly state "6800 XT / 6900 XT", that one only says 6800XT. Have they confirmed 6900XT fitment anywhere?


Yes, I emailed them, and they said it's the same block.


----------



## Bart

newls1 said:


> yes, i emailed them, and they said same block


Awesome, thanks for confirming that!!! That's a weight off my shoulders! Since we'll most likely never get a HeatKiller for this thing, at least we might have a decent alternative.


----------



## SLAY3R8888

newls1 said:


> My prior wattman settings were 2650 min / 2750 max and MPT settings were 325 (watts) 375 (TDC)
> 
> New settings
> 
> Wattman settings 500MHz min / 2750 max
> MPT 335 (watts) 390 (TDC)
> 
> applied sppt and rebooted...
> 
> ran timespy, and got above results.


Okay so I just tried changing the TDC from stock to 390, and I also tried dropping my min clock to 500MHz. Both changes, even when done separately or together, yielded the same TimeSpy graphics score of around 20,600, and no additional OC headroom for my card. Still stuck at 2620MHz here. Temps are low, my card just isn't stable at any higher set frequency.

So I can confirm that upping the TDC from its stock value doesn't do anything. I think the only change that may have helped you was raising your power limit that extra bit. It looks like you have a really good chip, and perhaps it is scaling extra well because of your 10900k; I see it's getting a beastly CPU score, and that could be helping things. You might compare your old CPU score to your new CPU score and see if it's any different. If it's about the same, and you haven't made any OC changes to your 10900k since your last TimeSpy run, then I would say it's probably raising the power limit (W) that did it for you. Your card really liked it and was able to boost higher. My card isn't stable above 2620MHz even with the higher power limit; perhaps it needs more than 1175mV to go any higher.

Congrats again on the score, that is awesome!


----------



## newls1

Bart said:


> Awesome, thanks for confirming that!!! That's a weight off my shoulders! Since we'll most likely never get a HeatKiller for this thing, at least we might have a decent alternative.


EK also has one coming


----------



## newls1

SLAY3R8888 said:


> Okay so I just tried changing the TDC from stock to 390, and I also tried dropping my min clock to 500MHz. Both changes, even when done separately or together, yielded the same TimeSpy graphics score of around 20,600, and no additional OC headroom for my card. Still stuck at 2620MHz here. Temps are low, my card just isn't stable at any higher set frequency.
> 
> So I can confirm that upping the TDC from stock value doesn't do anything. I think the only change that may have helped you, was raising your power limit that extra bit. Looks like you have a really good chip, and perhaps it is scaling extra well because of your 10900k, I see it's getting a beastly CPU score. That could be helping things. You might compare your old CPU score to your new CPU score and see if it's any different. If it's about the same, and you haven't made any OC changes to your 10900k since your last TimeSpy run, then I would say it's probably the raising of the power limit (W) that did it for you. Your card really liked it and was able to boost higher. My card isn't stable above 2620MHz even with the higher power limit, perhaps it needs more than 1175mV to go any higher.
> 
> Congrats again on the score, that is awesome!


I haven't touched my CPU OC for weeks... My GPU's average core speed was actually LOWER on that run than on some past runs, and those prior runs yielded a lower score (1000+ points lower). This run was much more level, with a constant speed of 25xx MHz instead of bursts of 2700+/2600+/2500+, see what I'm saying? By setting the 500MHz min slider, I noticed the GPU was much more consistent with its core speed, and I'm wondering if that plays a role in the final score?


----------



## Bart

newls1 said:


> EK also has one coming


I said "decent alternative" not "overpriced junk".  EK is a last resort for me, always. Their QC is atrocious, and they focus more on being the first to market in bulk, rather than focusing on quality.


----------



## SLAY3R8888

newls1 said:


> I haven't touched my CPU OC for weeks... My GPU's average core speed was actually LOWER on that run than on some past runs, and those prior runs yielded a lower score (1000+ points lower). This run was much more level, with a constant speed of 25xx MHz instead of bursts of 2700+/2600+/2500+, see what I'm saying? By setting the 500MHz min slider, I noticed the GPU was much more consistent with its core speed, and I'm wondering if that plays a role in the final score?


Yes I see what you are saying. And yeah maybe it's possible that fps or clock consistency is considered as part of their formula for the scoring, not just average fps. With my 2520 / 2620 OC, it sits at 2550 - 2570 for the whole test. It just dips down to around 2500 when it touches the power limit for a few seconds. Other than that, very consistent. I can replicate the same score many times in a row. 

As soon as I hear of somebody figuring out how to raise the voltage, or maybe MPT gets an update with that feature where it will work correctly, I will take it up to 1200mV and see if I can go any higher.


----------



## kazukun

LtMatt said:


> Yes not much difference as you found. On 21.2.1 here atm.
> 
> Achieved this score with the following:
> 
> 2470-2570Mhz Core @1.175v
> 2112Mhz Memory
> 100% fan Speed
> 385W+15% PL
> 5950X PBO
> 
> View attachment 2479062
> 
> 
> I was then able to lower voltage to around 1.137v (any lower and i get a soft timespy crash you know the one) and got this score. Slightly lower in graphics score, slightly higher in CPU score. Reduced peak power draw from 386W to 350W.
> 
> View attachment 2479063


The major difference is in the Mesh Shader Feature Test.


----------



## LtMatt

kazukun said:


> The major difference is in the Mesh Shader Feature Test.


That’s a rather uninteresting bench tbh.


----------



## newls1

SLAY3R8888 said:


> Yes I see what you are saying. And yeah maybe it's possible that fps or clock consistency is considered as part of their formula for the scoring, not just average fps. With my 2520 / 2620 OC, it sits at 2550 - 2570 for the whole test. It just dips down to around 2500 when it touches the power limit for a few seconds. Other than that, very consistent. I can replicate the same score many times in a row.
> 
> As soon as I hear of somebody figuring out how to raise the voltage, or maybe MPT gets an update with that feature where it will work correctly, I will take it up to 1200mV and see if I can go any higher.


I find the scoring system for TimeSpy to be rather (for lack of a better term) fu*ked up! I can run back-to-back benchies and the scores will be 700-1000 points off from one another, and it drives me crazy. I know GPU temps are extremely important for a great score, but even when I control the computer's environment (a portable room A/C blowing on the radiators and into the PC case) to maintain temps, the score will still vary dramatically at times... I'll never understand it. On a different note, I bought the full version of 3DMark yesterday and it was the best decision I've made in a long time. I NO LONGER HAVE TO WASTE GPU CYCLES AND HEATSOAK WATCHING THAT DAMN 2-SCREEN CYCLE DEMO PLAY BEFORE A BENCHIE!!!
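If you want to put a number on that back-to-back swing, logging a few runs and computing the spread works. A minimal sketch (Python, with made-up example scores, not real results):

```python
from statistics import mean, stdev

def score_spread(scores: list[int]) -> dict:
    """Summarize run-to-run benchmark variance: absolute range and
    relative standard deviation (coefficient of variation, in %)."""
    return {
        "range": max(scores) - min(scores),
        "cv_percent": round(100 * stdev(scores) / mean(scores), 2),
    }

# Hypothetical back-to-back TimeSpy graphics scores
runs = [20600, 21400, 20900, 21550]
print(score_spread(runs))
```

A 700-1000 point range on a ~21k graphics score is only about 3-5% relative variance, which is why controlling temps alone doesn't make it disappear.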


----------



## SLAY3R8888

newls1 said:


> I find the scoring system for TimeSpy to be rather (for lack of a better term) fu*ked up! I can run back-to-back benchies and the scores will be 700-1000 points off from one another, and it drives me crazy. I know GPU temps are extremely important for a great score, but even when I control the computer's environment (a portable room A/C blowing on the radiators and into the PC case) to maintain temps, the score will still vary dramatically at times... I'll never understand it. On a different note, I bought the full version of 3DMark yesterday and it was the best decision I've made in a long time. I NO LONGER HAVE TO WASTE GPU CYCLES AND HEATSOAK WATCHING THAT DAMN 2-SCREEN CYCLE DEMO PLAY BEFORE A BENCHIE!!!


Yessss, I have the full version and I will never watch that demo ever again LOL. So I was reading older posts in this and other forums today, and it seems that some people with AIB cards were successfully able to raise the voltage above 1175mV with MPT. There was also a consensus in these discussions that you can only successfully do it if you have an AIB card, and reference card owners were unable to do this. Nobody specifically said that they did a bios flash to accomplish this, so I'm assuming they just used the "write SPPT" feature. I know this sounds like it makes zero sense, but I'm going to try raising the voltage to 1200mV on my Merc and see if it works. Will report back, wish me luck 🤞! If you don't hear back soon then call the police because it probably means I bricked my system XD


----------



## newls1

SLAY3R8888 said:


> Yessss, I have the full version and I will never watch that demo ever again LOL. So I was reading older posts in this and other forums today, and it seems that some people with AIB cards were successfully able to raise the voltage above 1175mV with MPT. There was also a consensus in these discussions that you can only successfully do it if you have an AIB card, and reference card owners were unable to do this. Nobody specifically said that they did a bios flash to accomplish this, so I'm assuming they just used the "write SPPT" feature. I know this sounds like it makes zero sense, but I'm going to try raising the voltage to 1200mV on my Merc and see if it works. Will report back, wish me luck 🤞! If you don't hear back soon then call the police because it probably means I bricked my system XD


Please let us know!! Hopefully you're successful 🤞🤞


----------



## SLAY3R8888

newls1 said:


> Please let us know!! Hopefully you're successful 🤞🤞


Couldn't get it to work at 1200mV, when TimeSpy is almost finished loading, the card does a quick ramp up and the clock speed goes over my set max clock, and the benchmark errors out before it begins, and the driver times out every time. Even if I raise the max set clock above what I know to be stable, it still doesn't work. Maybe I got too greedy with the mV, I will try writing in 1185mV with MPT and see if that works.


----------



## SLAY3R8888

newls1 said:


> Please let us know!! Hopefully you're successful 🤞🤞


Okay so I couldn't raise the voltage in MPT at all without it erroring out right as TimeSpy begins. But then I noticed something... the max frequency slider in Adrenaline goes up to 3000MHz, but the stock bios file that I pulled from the card, shows in MPT that the max frequency is 2660MHz. This is not a limit that I created, it is a limit that MPT shows for the stock bios. Can somebody else go under the frequency tab of their MPT and tell me what their GFX Maximum (MHZ) says? See attached photo. I had never paid attention to these values before, because the max in Adrenaline is 3000MHz. Sorry if this has already been covered by someone in this forum, but I feel like this might explain why I am reaching a barrier when I try to OC the card to 2700MHz... maybe it's not actually instability, but it's the driver saying NO when my frequency goes over 2660MHz... this could explain why it doesn't work correctly for people when they try to raise their voltage in MPT. The card tries to clock higher with the extra voltage, and it hits that limit set in bios that's different from what the slider says. This would also explain why I have never once crashed my PC while overclocking this card. It's always a driver timeout, which could be a symptom of the driver just not liking the parameters and shutting everything down (instead of actual instability causing the timeout).

Idk, maybe I'm onto something, or maybe I'm crazy, I know these are a lot of "theories", but I would be interested to see what other people's max frequency is set at in MPT, and if this has any correlation to the best clock speeds they have been able to obtain. Let me know! 

I also tried raising this "GFX Maximum" value in MPT to 2900, and the test still errored out. I couldn't get TimeSpy to run without the driver timing out right away, until I wrote a new SPPT with the original max frequency and max voltage values.

I wonder if this is where bios flashing comes into play with their bios editor tool, in order to raise the voltage and MHz successfully.... hmm...


----------



## newls1

When I get home I'll check on this for you, but I'm currently on shift and won't be home till 10am tomorrow. I will say that my card will game @ 2750+ all day long.


----------



## SLAY3R8888

newls1 said:


> When I get home I'll check on this for you, but I'm currently on shift and won't be home till 10am tomorrow. I will say that my card will game @ 2750+ all day long.


Sounds good, I haven't tried gaming on my card past the frequency that it will pass TimeSpy on. (2520 - 2620 MHz) That's what I've been using for gaming as well and it's been great.


----------



## newls1

There must be a difference between our "cores", cause mine will run TS @ nearly 2800MHz with no issues other than the hotspot going to 105c+, so I stop it....


----------



## cennis

I've seen some impressive overclocking above from the ASUS TUF and XFX Merc 6900XTs. Are the reference cards also capable of these 385W + 15% power limits and 2700MHz+? Silicon lottery aside, are there any hardware limitations on the reference cards if they are going to be watercooled?


----------



## newls1

cennis said:


> I've seen some impressive overclocking from the above using ASUS TUF and XFX MERC 6900XTs, are the reference cards also capable of these 385W + 15% power limits and 2700mhz+? Obviously silicon lottery aside, but are there any hardware limitations on the reference cards if they are going to be watercooled?


I don't think there are any limitations, but I'd have to think they're a little handicapped compared to a card like a "TUF" or "Merc", as our VRM setups are more robust.


----------



## ZealotKi11er

cennis said:


> I've seen some impressive overclocking from the above using ASUS TUF and XFX MERC 6900XTs, are the reference cards also capable of these 385W + 15% power limits and 2700mhz+? Obviously silicon lottery aside, but are there any hardware limitations on the reference cards if they are going to be watercooled?


It's all silicon lottery and cooling. Without cooling you can't even get 2500MHz. With silicon lottery you can hit 2700+.


----------



## LtMatt

My Merc will top out at 2560-2660Mhz in Radeon Software at 1.175v which results in 2600Mhz core clock in game. This is long term game stable for hours. That's with the fans running at 40% and junction temps peaking at around 100c. 10Mhz more on the core clock and it crashes after 15 minutes. No doubt if i was water cooled i would have more headroom, but I am fairly sure mine is a below average sample.

I can bench higher for short runs like Firestrike, Timespy etc, but for long term gaming stability that's as far as she will go on air.


----------



## SLAY3R8888

LtMatt said:


> My Merc will top out at 2550-2660Mhz in Radeon Software at 1.175v which results in 2600Mhz core clock in game. That's with the fans running at 40% and junction temps peaking at around 100c. 10Mhz more on the core clock crashes. No doubt if i was water cooled i would have more headroom, but I am fairly sure mine is a below average sample.


Were any of your 4 samples like my card and just didn't want to be undervolted? I wish I could try with more than 1175mV. I've encountered hardware before that wasn't necessarily a bad sample for OC, it just wanted more voltage to do it. I'm used to having more voltage to play with lol.


----------



## LtMatt

SLAY3R8888 said:


> Were any of your 4 samples like my card and just didn't want to be undervolted? I wish I could try with more than 1175mV. I've encountered hardware before that wasn't necessarily a bad sample for OC, it just wanted more voltage to do it. I'm used to having more voltage to play with lol.


No, all of them loved an undervolt.

Try these profiles I've been working on. I find it hard to believe one of them won't be stable on your Merc.

All of them bar the 2600 have an undervolt from stock.

Start with the 2300/2000 profile and if it crashes add on +10-20mv of voltage and retest.

Download and Load into GPU Tuning.

[Edit: link removed]


----------



## weleh

Sold my MSI RTX 3070 GXT for 1050€ today to a miner.

Bought a 6800 XT Nitro+ SE. This thing is a beast.


----------



## SLAY3R8888

LtMatt said:


> No, all of them loved an undervolt.
> 
> Try these profiles I've been working on. I find it hard to believe one of them won't be stable on your Merc.
> 
> All of them bar the 2600 have an undervolt from stock.
> 
> Start with the 2300/2000 profile and if it crashes add on +10-20mv of voltage and retest.
> 
> Download and Load into GPU Tuning.
> 
> [Edit: link removed]


I've tried undervolting this card so many times, it just doesn't like it. But I've read in forums a lot of other people have cards like mine, that just want more voltage and don't like the UV at all.

And honestly I don't even see the point in undervolting this card, thermally it doesn't need an UV. It needs MOAR voltage lol..


----------



## SLAY3R8888

SLAY3R8888 said:


> I've tried undervolting this card so many times, it just doesn't like it. But I've read in forums a lot of other people have cards like mine, that just want more voltage and don't like the UV at all.
> 
> And honestly I don't even see the point in undervolting this card, thermally it doesn't need an UV. It needs MOAR voltage lol..


It could even be something as simple as the voltage controller / power delivery system just not being calibrated quite the same as your card(s). That would explain it wanting more voltage to be stable, as well as it being cooler at the supposed "higher voltage".

Honestly even in the past, when I have had a successful undervolt of a CPU / GPU, I always end up upping the voltage again and getting more performance anyway. A lot of clock stretching can happen with undervolts and you end up gimping your performance for no reason. Unless thermals are an issue then that's a different story of course.


----------



## nyk20z3

Finally got lucky today at Micro Center in Queens, NY: I walked in and was able to get a Strix 6900XT that I've been trying to get my hands on. Right out of the gate, there are no screws in the box to mount the radiator, which is inexcusable on a $1500 card. I uninstalled the Nvidia drivers and installed the Radeon suite. I wanted to run FurMark a little, but it would just crash every time I tried to run it. I tried uninstalling GPU Tweak, and it crashes before it actually uninstalls. The RGB on the card was working, but then it randomly shuts off once I get to the desktop, and Armoury Crate doesn't even show the GPU in the Aura list.

I am just going to reinstall Windows and hopefully the issue is resolved. It's a lovely card, but so far it has not been a good experience!


----------



## ZealotKi11er

FurMark is the most basic thing that should run. Download HWiNFO and check all the temp sensors.


----------



## Beagle Box

nyk20z3 said:


> Finally got lucky today at Micro Center in Queens, NY; I walked in and was able to get the Strix 6900 XT I've been trying to get my hands on. Right out of the gate, there are no screws in the box to mount the radiator, which is inexcusable on a $1,500 card. I uninstalled the Nvidia drivers and installed the Radeon suite. I wanted to run FurMark a little, but it would just crash every time I tried to run it. I tried uninstalling GPU Tweak, and it crashes before it actually uninstalls. The RGB on the card was working, but then it randomly shuts off once I get to the desktop, and Armoury Crate doesn't even show the GPU in the Aura list.
> 
> I am just going to reinstall Windows and hopefully the issue will be resolved. It's a lovely card, but so far it has not been a good experience!
> 
> View attachment 2479325
> View attachment 2479326


No screws, box looks beat to hell. Did they sell this as new?


----------



## delgon

I finished my loop yesterday.
Well, my card is not a beast overclocker, but it's still quite good IMO.
My final result is 2600-2700 core @ 1.175v. I tried lower voltages to cut some of the power consumption, but then it crashes or needs lower clocks to operate.
For memory, I went with 2140. IDK why, but from my testing it is a little bit faster than 2124. 2150 sometimes acts weirdly: scores tank quite a bit, like 1.5k lower, for "no reason", but when I lower the clock to 2140 (107%) it works just fine.
I like that my clocks are now fairly stable and don't fluctuate as much as before. And temps are about 20C lower (the hotspot was my concern before, as it was around 105C most of the time), even though I now use a 75W higher preset via MPT.

Time Spy Graphics score: 21260

Port Royal: 11224
@SLAY3R8888 I don't think it is limited by the BIOS to 2660. I was able to get a quite stable 2700 MHz on the clock (it crashes when it hits 2725 for me tho :/)
I also decided on 335 + 15% = 385W as my power limit. I think this should be close to the safe upper limit, as you can get 75W from the PCIe slot and 2x150W from the two 8-pins. In theory an 8-pin connector is rated for 150W per the standard, but it should be able to push a little more depending on the quality of the cables; don't wanna melt them tho
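As a sanity check on that arithmetic (illustrative only; real headroom depends on the PSU, cable quality, and the card's VRM, not just connector ratings), the connector-spec ceiling versus the configured limit works out as:

```python
# Connector ratings per the PCIe spec (values cited in the post above).
PCIE_SLOT_W = 75       # PCIe x16 slot
EIGHT_PIN_W = 150      # each 8-pin PCIe power connector

# Theoretical in-spec ceiling for a slot + dual 8-pin card.
spec_ceiling_w = PCIE_SLOT_W + 2 * EIGHT_PIN_W   # 375 W

# MPT base limit of 335 W plus the +15% driver slider.
configured_limit_w = round(335 * 1.15)           # 385 W

# 385 W slightly exceeds the 375 W spec total, which is why the post
# leans on 8-pin connectors tolerating a bit more than 150 W in practice.
print(spec_ceiling_w, configured_limit_w)
```

So the configured 385 W limit sits about 10 W over the strict connector spec, absorbed by the 8-pins' real-world margin.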

@nyk20z3 Please tell me you used DDU to switch graphics drivers, not just a normal "uninstall"


----------



## LtMatt

delgon said:


> I finished my loop yesterday.
> Well, my card is not a beast overclocker, but it's still quite good IMO.
> My final result is 2600-2700 core @ 1.175v. I tried lower voltages to cut some of the power consumption, but then it crashes or needs lower clocks to operate.
> For memory, I went with 2140. IDK why, but from my testing it is a little bit faster than 2124. 2150 sometimes acts weirdly: scores tank quite a bit, like 1.5k lower, for "no reason", but when I lower the clock to 2140 (107%) it works just fine.
> I like that my clocks are now fairly stable and don't fluctuate as much as before. And temps are about 20C lower (the hotspot was my concern before, as it was around 105C most of the time), even though I now use a 75W higher preset via MPT.
> 
> Time Spy Graphics score: 21260
> View attachment 2479333
> 
> Port Royal: 11224
> View attachment 2479335
> 
> View attachment 2479334
> 
> 
> @SLAY3R8888 I don't think it is limited by the BIOS to 2660. I was able to get a quite stable 2700 MHz on the clock (it crashes when it hits 2725 for me tho :/)
> I also decided on 335 + 15% = 385W as my power limit. I think this should be close to the safe upper limit, as you can get 75W from the PCIe slot and 2x150W from the two 8-pins. In theory an 8-pin connector is rated for 150W per the standard, but it should be able to push a little more depending on the quality of the cables; don't wanna melt them tho
> 
> @nyk20z3 Please tell me you used DDU to switch graphics drivers, not just a normal "uninstall"


Good results. Your 6900 XT seems very similar to mine in terms of maximum clock speed, voltage and Timespy score. 

You need to redo your pictures though; I can't view them properly. Just insert full-size pictures into your post.


----------



## SLAY3R8888

delgon said:


> I finished my loop yesterday.
> Well, my card is not a beast overclocker, but it's still quite good IMO.
> My final result is 2600-2700 core @ 1.175v. I tried lower voltages to cut some of the power consumption, but then it crashes or needs lower clocks to operate.
> For memory, I went with 2140. IDK why, but from my testing it is a little bit faster than 2124. 2150 sometimes acts weirdly: scores tank quite a bit, like 1.5k lower, for "no reason", but when I lower the clock to 2140 (107%) it works just fine.
> I like that my clocks are now fairly stable and don't fluctuate as much as before. And temps are about 20C lower (the hotspot was my concern before, as it was around 105C most of the time), even though I now use a 75W higher preset via MPT.
> 
> Time Spy Graphics score: 21260
> View attachment 2479333
> 
> Port Royal: 11224
> View attachment 2479335
> 
> View attachment 2479334
> 
> 
> @SLAY3R8888 I don't think it is limited by the BIOS to 2660. I was able to get a quite stable 2700 MHz on the clock (it crashes when it hits 2725 for me tho :/)
> I also decided on 335 + 15% = 385W as my power limit. I think this should be close to the safe upper limit, as you can get 75W from the PCIe slot and 2x150W from the two 8-pins. In theory an 8-pin connector is rated for 150W per the standard, but it should be able to push a little more depending on the quality of the cables; don't wanna melt them tho
> 
> @nyk20z3 Please tell me you used DDU to switch graphics drivers, not just a normal "uninstall"





ZealotKi11er said:


> FurMark is the most basic thing that should run. Download HWiNFO and check all the temp sensors.


FurMark is also one of the most stressful things you can run on a GPU. It's like running AIDA64 with the FPU test on your CPU; it's totally overkill for normal gaming use lol.

But I will use it just to see a worst-case scenario of what kind of temps my card might hit. If I was having the issues he's having, I would be checking for BIOS updates, Windows updates, Armoury Crate updates, driver updates. @nyk20z3


----------



## SLAY3R8888

delgon said:


> I finished my loop yesterday.
> Well, my card is not a beast overclocker, but it's still quite good IMO.
> My final result is 2600-2700 core @ 1.175v. I tried lower voltages to cut some of the power consumption, but then it crashes or needs lower clocks to operate.
> For memory, I went with 2140. IDK why, but from my testing it is a little bit faster than 2124. 2150 sometimes acts weirdly: scores tank quite a bit, like 1.5k lower, for "no reason", but when I lower the clock to 2140 (107%) it works just fine.
> I like that my clocks are now fairly stable and don't fluctuate as much as before. And temps are about 20C lower (the hotspot was my concern before, as it was around 105C most of the time), even though I now use a 75W higher preset via MPT.
> 
> Time Spy Graphics score: 21260
> View attachment 2479333
> 
> Port Royal: 11224
> View attachment 2479335
> 
> View attachment 2479334
> 
> 
> @SLAY3R8888 I don't think it is limited by the BIOS to 2660. I was able to get a quite stable 2700 MHz on the clock (it crashes when it hits 2725 for me tho :/)
> I also decided on 335 + 15% = 385W as my power limit. I think this should be close to the safe upper limit, as you can get 75W from the PCIe slot and 2x150W from the two 8-pins. In theory an 8-pin connector is rated for 150W per the standard, but it should be able to push a little more depending on the quality of the cables; don't wanna melt them tho
> 
> @nyk20z3 Please tell me you used DDU to switch graphics drivers, not just a normal "uninstall"


Congrats on the scores, good work! That's awesome!


----------



## ZealotKi11er

SLAY3R8888 said:


> FurMark is also one of the most stressful things you can run on a GPU. It's like running AIDA64 with the FPU test on your CPU; it's totally overkill for normal gaming use lol.
> 
> But I will use it just to see a worst-case scenario of what kind of temps my card might hit. If I was having the issues he's having, I would be checking for BIOS updates, Windows updates, Armoury Crate updates, driver updates. @nyk20z3


FurMark is only stressful if there are no power limits in place. It's a very power-hungry app that draws a lot of current.
If you run with the stock power limit, you will get a very low clock and voltage. Usually, the card becomes unstable at high voltage/clock.


----------



## SLAY3R8888

LtMatt said:


> Good results. Your 6900 XT seems very similar to mine in terms of maximum clock speed, voltage and Timespy score.
> 
> You need to redo your pictures though; I can't view them properly. Just insert full-size pictures into your post.


@delgon I noticed the 1175mV you were having to run for your OC to be successful. This is very similar to my card not liking any sort of undervolt. I just tried it again; even going down to 1160 or 1165mV, it does not like it, and it just reduces the clock speed I can run by 50-100MHz. It acts like it's barely getting the voltage it wants at the full 1175mV. I've been creeping it closer to 2700MHz and trying to dial it in, but the only way for me is by using the full 1175mV, no undervolt.


----------



## SLAY3R8888

LtMatt said:


> My Merc will top out at 2560-2660Mhz in Radeon Software at 1.175v which results in 2600Mhz core clock in game. This is long term game stable for hours. That's with the fans running at 40% and junction temps peaking at around 100c. 10Mhz more on the core clock and it crashes after 15 minutes. No doubt if i was water cooled i would have more headroom, but I am fairly sure mine is a below average sample.
> 
> I can bench higher for short runs like Firestrike, Timespy etc, but for long term gaming stability that's as far as she will go on air.


So you also ended up not using an undervolt for your best OC. Guess your card is more like mine than I thought; it doesn't want an undervolt either. Maybe our definitions of "loving an undervolt" are different, lol. To me, if it can't run my best OC on an undervolt, or if any undervolting lowers the scores at said frequency, then it doesn't love an undervolt. I've never tried undervolting at lower frequencies like you mentioned, because I have no interest in running the card there... I've only tried undervolting and then working the clocks up as high as they will go, and 1175mV is always able to get superior clock speeds and scores for me.


----------



## LtMatt

SLAY3R8888 said:


> So you also ended up not using an undervolt for your best OC. Guess your card is more like mine than I thought; it doesn't want an undervolt either. Maybe our definitions of "loving an undervolt" are different, lol. To me, if it can't run my best OC on an undervolt, or if any undervolting lowers the scores at said frequency, then it doesn't love an undervolt. I've never tried undervolting at lower frequencies like you mentioned, because I have no interest in running the card there... I've only tried undervolting and then working the clocks up as high as they will go, and 1175mV is always able to get superior clock speeds and scores for me.


When I say a card loves an undervolt, I mean that you can achieve more performance, less power draw, lower fan speed/temps, etc., by reducing voltage while still achieving better-than-stock performance.

All of my profiles bar the 2600Mhz one are undervolted from the stock 1.175v.

My 2300Mhz core profile runs at just 1.025v, which offers a significant power saving and is barely any slower than stock. Plus you can kick back and run the fans at 25%, which is completely silent. Even the 2500Mhz profile runs at 1.125v, which is still a decent reduction from stock.

I should add that my 6900 XT MBA could run the 2500Mhz profile at only 1.100v.


----------



## nyk20z3

Installed a fresh copy of Windows and I'm having the same issues. I go to install a program and it just stops and doesn't install. I had zero issues before installing this card yesterday.


----------



## nyk20z3

Just installed my 2080 Super Strix, and now I can actually install things and nothing crashes. I can't imagine a GPU causing the issues I was having, but you never know.


----------



## SLAY3R8888

nyk20z3 said:


> Just installed my 2080 Super Strix, and now I can actually install things and nothing crashes. I can't imagine a GPU causing the issues I was having, but you never know.


Yeah, I've never had a GPU make it so that I can't install programs, so idk what that's all about.


----------



## LtMatt

nyk20z3 said:


> Just installed my 2080 Super Strix, and now I can actually install things and nothing crashes. I can't imagine a GPU causing the issues I was having, but you never know.


What are your system specs, PSU etc?


----------



## nyk20z3

SLAY3R8888 said:


> Yeah, I've never had a GPU make it so that I can't install programs, so idk what that's all about.


I just opened up Core Temp and it closed after 5 secs. I just tried to install Origin and it won't even open to install. I also tried to install Metro Exodus off Epic so I can see if there are any issues actually gaming, and got this error. And this is after a fresh install of Windows, the BIOS is the latest, etc.


----------



## nyk20z3

LtMatt said:


> What are your system specs, PSU etc?


10900K, Asus Extreme XII, Phanteks 850W Revolt Pro; my main M.2 drive is a Crucial P1 1TB. I've been running this setup for months now with no issues. As soon as I installed the 6900 XT last night is when I started having all these weird issues. I removed the Nvidia drivers and installed the Radeon suite. And now I am on a fresh copy of Windows, and the same issues are happening.

I just went to log in to the desktop and the screen went blank; I could see a GPU error on the mobo, and then it restarted.


----------



## nyk20z3

Just went to open Metro Last Light Redux, and it crashes/closes after about 5 secs. This GPU has to be faulty, IMO; I've never experienced anything like this in the 15 years I've been building and gaming. But then again, I've always had an Nvidia card lol. This is very frustrating, and it sucks that I will have to return this lovely card!


----------



## LtMatt

nyk20z3 said:


> Just went to open Metro Last Light Redux, and it crashes/closes after about 5 secs. This GPU has to be faulty, IMO; I've never experienced anything like this in the 15 years I've been building and gaming. But then again, I've always had an Nvidia card lol. This is very frustrating, and it sucks that I will have to return this lovely card!


Must be faulty I'd have thought. Was it used before you bought it or sold as new? Looks a bit like a customer return, ex-display item or b grade by your box pictures.


----------



## nyk20z3

LtMatt said:


> Must be faulty I'd have thought. Was it used before you bought it or sold as new? Looks a bit like a customer return, ex-display item or b grade by your box pictures.


Nope, brand new. I had already opened it before taking the pic, and the box was in new condition when I bought it.


----------



## nyk20z3

Well, after another fresh install it seems to be okay. I played about 45 minutes of Metro Last Light Redux in ultrawide, and the temps hover around 45C. What programs do you guys recommend to test it further and see what it can do?


----------



## HyperC

Guys, I leave you alone for one week and you beat my TS score; I am very saddened. I might have to mod my card to push the limits.


----------



## SLAY3R8888

nyk20z3 said:


> Well, after another fresh install it seems to be okay. I played about 45 minutes of Metro Last Light Redux in ultrawide, and the temps hover around 45C. What programs do you guys recommend to test it further and see what it can do?


One thing I would definitely recommend, and do often: if you think corrupt system files might be causing an issue, instead of going through the hassle of reinstalling Windows for every problem you have, just run Command Prompt as admin and type "sfc /scannow". This has fixed many little issues for me when Windows got corrupted and crashed after instability or something. It's an automatic system file checker and will repair the issues it finds.
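For reference, here is that repair command, plus the component-store repair that is commonly paired with it (the DISM step is my addition, not something mentioned above). Both are run from an elevated (administrator) Command Prompt:

```shell
:: Scan all protected system files and repair any that are corrupted.
sfc /scannow

:: If sfc reports files it could not fix, repair the Windows component
:: store first, then run sfc /scannow again.
DISM /Online /Cleanup-Image /RestoreHealth
```

Both commands can take several minutes; run them in that order if the first sfc pass reports unrepairable files.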

Nothing wrong with reinstalling Windows, but I thought you might want to know about that little trick if you didn't already. I've been on the same Windows installation since I first built my PC 15 months ago. I've had my share of issues (everything on my PC is overclocked, including RAM), but I've always been able to fix them by checking little things: updating Windows, running the scan mentioned above, etc.

Glad your card seems to be working; my first instinct is that your card is fine and something else is going on with your PC. But you know what they say about opinions (lol); nobody knows for sure what is going on with your PC, you just have to keep trying things.

As for benchmark programs, I would recommend 3DMark for sure. A lot of us in here run Time Spy to compare our scores. It's good for overclocking; if your card will pass the Time Spy benchmark and stress test, then it's most likely going to be stable for gaming, from what I've seen. I'd recommend getting the non-Steam version straight from the 3DMark website. And if you pay for the nicer edition, you can customize the run and remove the "demo" section, which is an annoying waste of time when doing multiple runs back to back while tweaking settings.

If you want to really stress thermals and see how the card does there, AIDA64 and FurMark are both good for quick thermal/stability tests.

I've been enjoying War Thunder for some in-game benchmarking. I've never actually played the game, but it is free and has tons of benchmarks built in that give you a score and average/low fps, etc. It can help you dial in your OC better. It also helped me realize that my OBS encoding settings were hurting my fps slightly, and helped me dial that in as well.

I'm half asleep, just woke up, so I'm going to stop rambling lol. Congrats on the card; I bet you're relieved that it might be okay.


----------



## SLAY3R8888

HyperC said:


> Guys, I leave you alone for one week and you beat my TS score; I am very saddened. I might have to mod my card to push the limits.


What mods do you speak of? I want to be able to raise my voltage... I need it so bad... I'm tempted to start a petition for all of us to sign, imploring AMD to please give us a driver without the 1175mV limit; since they are already voiding our warranties for overclocking, we implore them to unlock the voltage lol. Or maybe Igor's Lab or someone else will find a way around the voltage cap that actually works right. (No, MPT doesn't work for raising voltage; the driver freaks out if you go over 1175mV.)

Or were you talking about water cooling or something else? Just curious.


----------



## ZealotKi11er

SLAY3R8888 said:


> What mods do you speak of? I want to be able to raise my voltage... I need it so bad... I'm tempted to start a petition for all of us to sign, imploring AMD to please give us a driver without the 1175mV limit; since they are already voiding our warranties for overclocking, we implore them to unlock the voltage lol. Or maybe Igor's Lab or someone else will find a way around the voltage cap that actually works right. (No, MPT doesn't work for raising voltage; the driver freaks out if you go over 1175mV.)
> 
> Or were you talking about water cooling or something else? Just curious.


I am sure AMD will give you more options to unlock in the future, and probably a physical way to tell if you broke the limits so they don't have to RMA the card.


----------



## SLAY3R8888

ZealotKi11er said:


> I am sure AMD will give you more options to unlock in the future, and probably a physical way to tell if you broke the limits so they don't have to RMA the card.


Yeah, I have heard rumors that they already have transistors in the card that will pop if you give it over a certain amount of power... but that sounds like fairy tales to me. Could be true, idk.

Otherwise, how would they know if you OC'd and voided your warranty? idk


----------



## nyk20z3

SLAY3R8888 said:


> One thing I would definitely recommend, and do often: if you think corrupt system files might be causing an issue, instead of going through the hassle of reinstalling Windows for every problem you have, just run Command Prompt as admin and type "sfc /scannow". This has fixed many little issues for me when Windows got corrupted and crashed after instability or something. It's an automatic system file checker and will repair the issues it finds.
> 
> Nothing wrong with reinstalling Windows, but I thought you might want to know about that little trick if you didn't already. I've been on the same Windows installation since I first built my PC 15 months ago. I've had my share of issues (everything on my PC is overclocked, including RAM), but I've always been able to fix them by checking little things: updating Windows, running the scan mentioned above, etc.
> 
> Glad your card seems to be working; my first instinct is that your card is fine and something else is going on with your PC. But you know what they say about opinions (lol); nobody knows for sure what is going on with your PC, you just have to keep trying things.
> 
> As for benchmark programs, I would recommend 3DMark for sure. A lot of us in here run Time Spy to compare our scores. It's good for overclocking; if your card will pass the Time Spy benchmark and stress test, then it's most likely going to be stable for gaming, from what I've seen. I'd recommend getting the non-Steam version straight from the 3DMark website. And if you pay for the nicer edition, you can customize the run and remove the "demo" section, which is an annoying waste of time when doing multiple runs back to back while tweaking settings.
> 
> If you want to really stress thermals and see how the card does there, AIDA64 and FurMark are both good for quick thermal/stability tests.
> 
> I've been enjoying War Thunder for some in-game benchmarking. I've never actually played the game, but it is free and has tons of benchmarks built in that give you a score and average/low fps, etc. It can help you dial in your OC better. It also helped me realize that my OBS encoding settings were hurting my fps slightly, and helped me dial that in as well.
> 
> I'm half asleep, just woke up, so I'm going to stop rambling lol. Congrats on the card; I bet you're relieved that it might be okay.


Guess I spoke too soon. This morning, all of a sudden, GPU Tweak is not reading the GPU MHz correctly in game. I tried to open Core Temp and it closes after about 3 seconds; GPU Tweak opens, but the right-hand side of the program closes after about 3 seconds. I tried to open my webcam and that also closes; Western Digital Dashboard also closes after 3 seconds. I played Metro Last Light this morning and last night, but now it will open and then close as well. This is after a fresh install, and I also restarted again this morning just to flush it. I am most definitely baffled at this point.


----------



## SLAY3R8888

nyk20z3 said:


> Guess I spoke too soon. This morning, all of a sudden, GPU Tweak is not reading the GPU MHz correctly in game. I tried to open Core Temp and it closes after about 3 seconds; GPU Tweak opens, but the right-hand side of the program closes after about 3 seconds. I tried to open my webcam and that also closes; Western Digital Dashboard also closes after 3 seconds. I played Metro Last Light this morning and last night, but now it will open and then close as well. This is after a fresh install, and I also restarted again this morning just to flush it. I am most definitely baffled at this point.


Do you happen to have any other GPU overclocking software installed that might be interfering with the Adrenalin software, such as MSI Afterburner, etc.? If so, maybe try uninstalling it. Some of those programs interfere with Adrenalin, and you can't control the GPU with any other software right now anyway, so it might be worth a shot to get rid of them if you have any. I personally use CapFrameX + RivaTuner to monitor things with an overlay, but I don't use GPU Tweak etc.

Really strange issue you are having. Your BIOS is on the most recent version, correct?

I just have a hard time believing it's the GPU, although I know everything points to that, since everything is fine with your other GPU.


----------



## nyk20z3

SLAY3R8888 said:


> Do you happen to have any other GPU overclocking software installed that might be interfering with the Adrenalin software, such as MSI Afterburner, etc.? If so, maybe try uninstalling it. Some of those programs interfere with Adrenalin, and you can't control the GPU with any other software right now anyway, so it might be worth a shot to get rid of them if you have any. I personally use CapFrameX + RivaTuner to monitor things with an overlay, but I don't use GPU Tweak etc.
> 
> Really strange issue you are having. Your BIOS is on the most recent version, correct?
> 
> I just have a hard time believing it's the GPU, although I know everything points to that, since everything is fine with your other GPU.


Fresh install, so only GPU Tweak; I have not added anything else since last night, and then it started happening again. Something has to be getting corrupted somewhere, but the question is why, and how it keeps happening over and over again.


----------



## SLAY3R8888

nyk20z3 said:


> Fresh install, so only GPU Tweak; I have not added anything else since last night, and then it started happening again. Something has to be getting corrupted somewhere, but the question is why, and how it keeps happening over and over again.


Try getting rid of GPU Tweak; there's no need for AIB software, Adrenalin is best. Just to rule it out as a potential issue: uninstall GPU Tweak, run DDU to completely remove your graphics driver(s), then reinstall the latest Adrenalin software from AMD's website. That's the next step I recommend.


----------



## newls1

HyperC said:


> Guys, I leave you alone for one week and you beat my TS score; I am very saddened. I might have to mod my card to push the limits.


who beat your score?


----------



## nyk20z3

Looks like GPU Tweak was the problem. After deleting it, everything works fine. I have Afterburner installed now, but I should look into Adrenalin like you mentioned.


----------



## Bart

nyk20z3 said:


> Looks like GPU Tweak was the problem. After deleting it, everything works fine. I have Afterburner installed now, but I should look into Adrenalin like you mentioned.


Beware: Afterburner has a reputation for misbehaving with AMD GPUs; I'd ditch that too if your rig starts acting up again. Most of us are just using MPT to raise the power limit and the default AMD driver software for OCing.


----------



## nyk20z3

Bart said:


> Beware: Afterburner has a reputation for misbehaving with AMD GPUs; I'd ditch that too if your rig starts acting up again. Most of us are just using MPT to raise the power limit and the default AMD driver software for OCing.


I deleted Afterburner and did a fresh install of Adrenalin, but it won't open no matter what I do. I tried opening it from the app tray and by right-clicking the desktop, and it just won't open.


----------



## SLAY3R8888

nyk20z3 said:


> Looks like GPU Tweak was the problem. After deleting it, everything works fine. I have Afterburner installed now, but I should look into Adrenalin like you mentioned.


Yesss, I'm glad this worked! Sorry it took so long to figure out. The Adrenalin software is the only thing you need to control an AMD GPU; it doesn't play well with other overclocking software, which just causes more issues. You can use Adrenalin to OC your GPU, and you can use MorePowerTool (MPT) from Igor's Lab to raise the power limit (W). You shouldn't need to undervolt your card, as I expect it will cool very well, but if you do, you will need to limit the voltage below 1175mV in MPT (Adrenalin won't follow your undervolt if you only set it there). If you need/want to leave Afterburner installed, just go into its settings and make sure it is not allowed to control your GPU in any way. Or just uninstall it if you don't plan to use it for monitoring etc.

I would have told you to get rid of GPU Tweak sooner, but I didn't know what it was at the time lol.


----------



## Bart

nyk20z3 said:


> I deleted Afterburner and did a fresh install of Adrenalin, but it won't open no matter what I do. I tried opening it from the app tray and by right-clicking the desktop, and it just won't open.


Damn, that sucks, man. The only time I've seen the Adrenalin software get that twitchy was when I pushed the clocks too high.


----------



## SLAY3R8888

nyk20z3 said:


> I deleted Afterburner and did a fresh install of Adrenalin, but it won't open no matter what I do. I tried opening it from the app tray and by right-clicking the desktop, and it just won't open.


Did you use Display Driver Uninstaller after getting rid of Afterburner? Might need to do that


----------



## SLAY3R8888

Something is making the driver unhappy, and I suspect it's something lingering from GPU Tweak or Afterburner. If you didn't do a DDU pass, I highly recommend it. If you don't know how to use DDU, start a convo with me and I will help you.


----------



## newls1

nyk20z3 said:


> Looks like GPU Tweak was the problem. After deleting it, everything works fine. I have Afterburner installed now, but I should look into Adrenalin like you mentioned.


What part of "delete all these overclocking software programs and only use Wattman" didn't you get?! Not trying to be rude, but STOP using other OC software; just use Wattman in the drivers... end


----------



## nyk20z3

SLAY3R8888 said:


> Something is making the driver unhappy, and I suspect it's something lingering from GPU Tweak or Afterburner. If you didn't do a DDU pass, I highly recommend it. If you don't know how to use DDU, start a convo with me and I will help you.


Thank you very much; DDU resolved the issue. I ran Time Spy and overclocked the card via Adrenalin. If you guys have any advice on how to max out the performance of this card, please share. I've been strictly Nvidia for many years and have no experience with AMD software. Thanks again, everyone; I am glad to be a part of the AMD club now 😀


----------



## nyk20z3

newls1 said:


> What part of "delete all these overclocking software programs and only use Wattman" didn't you get?! Not trying to be rude, but STOP using other OC software; just use Wattman in the drivers... end


No offense taken, but I've been doing this for many, many years and never had issues with Afterburner or GPU Tweak, so I had to troubleshoot to see what the issue was and whether I could still use the programs I'm used to. You have to understand the computer was fine for months on end, and normally a GPU change would not cause issues like this. I literally change builds every few months, and once again, I've never had these issues. So now that I am on an AMD GPU, the game has changed, I suppose!


----------



## newls1

This series of AMD cards is not happy using anything OTHER than its own Wattman... I've been doing this for 25+ years and have learned to adapt to change, and AMD always = change.


----------



## SLAY3R8888

nyk20z3 said:


> Thank you very much; DDU resolved the issue. I ran Time Spy and overclocked the card via Adrenalin. If you guys have any advice on how to max out the performance of this card, please share. I've been strictly Nvidia for many years and have no experience with AMD software. Thanks again, everyone; I am glad to be a part of the AMD club now 😀
> 
> View attachment 2479511
> View attachment 2479512


Welcome to Team Red! Glad to have you, and glad you got it figured out. I'm going to start a side convo with you shortly and will send you my OC profile for my 6900xt along with some tips I have learned from OC'ing multiple AMD cards, which will help you find the best OC for your card. I don't want to plug up this thread with info that I've already shared before, so I will send it to you in a convo shortly.

Edit: Sent! Let me know if you get it.


----------



## HyperC

newls1 said:


> who beat your score?


delgon did, by 6 points  .. First time I have noticed these clock speed errors; it would have been insane if correct


----------



## LtMatt

Bart said:


> Beware, Afterburner has been reputed to misbehave with AMD GPUs; I'd ditch that too if your rig starts acting up again. Most of us are just using MPT to up the power limit, and using the default AMD driver software for OCing.


This. Best thing about afterburner now is the overlay it offers.


----------



## SLAY3R8888

LtMatt said:


> This. Best thing about afterburner now is the overlay it offers.


Yep, and now you can use CapFrameX instead of Afterburner; IMO it's better, and it's not an overclocking tool. And it uses the same overlay backend as Afterburner (RivaTuner Statistics Server). Highly recommend, I have used both for overlays and I personally like CapFrameX better. Either one will work though. www.capframex.com


----------



## LtMatt

SLAY3R8888 said:


> Yep, and now you can use CapFrameX instead of Afterburner; IMO it's better, and it's not an overclocking tool. And it uses the same overlay backend as Afterburner (RivaTuner Statistics Server). Highly recommend, I have used both for overlays and I personally like CapFrameX better. Either one will work though. www.capframex.com


I'll check that out cheers.


----------



## delgon

I did some more benchmarking 
Core: 2610-2710 @ 1.175v + Mem: 2150 @ fast timings + 0.987v SoC voltage (from MPT). Unstable, but it can pass maybe 75% of the benchmarks, so I had to run them a few times. All the results below are the best of 3 (finished) tries of each benchmark. They are more of a reference point for people who are interested; mine is not a beast overclocker, as I pushed back to 2565-2665 as my stable overclock (30 min of Port Royal, Time Spy GT1 & GT2 each + some gaming to test it).

Port Royal: 11296
Time Spy: 21448
Time Spy Extreme: 10310
Mesh Shader Test: 39.23 -> 595.75 (1418.8%)
I also did some more temperature and power testing for my stable OC. Here are charts from a 20 min Port Royal loop showing roughly the last benchmark loop and a half (that's what fits in this space), plus some averages and maximums from this stress test/bench.
Edge: 65.0 C
Hot Spot: 96.0 C
Tj Memory: 56.4 C
Power: 385 W max, 366 W average
Core: 2626 MHz max, 2606 MHz average

To be honest I'm kinda disappointed with my hotspot temps, as it is on water. I might try a remount to get something better. Yes, it is pulling 366 W on average, but man, that's kinda hot for water IMO, even with my silent fan profile on the radiators.


----------



## alexp247365

Any way to tell if you have a good card that just needs more voltage? Time Spy doesn't want to complete at 2700 MHz. (LE Red Devil under water, EVGA Plat 1200 W PSU.)

Temps are good.
Frequency in Time Spy fluctuates from 2590 MHz up to 2640 MHz.
Maybe a setting in MPT is off?


Temps are good, but that doesn't seem to have an effect. I'm new to AMD, so any suggestions would be appreciated. Wattman doesn't appear to be resetting.

Thanks!


----------



## delgon

It looks okay.
Did you set the frequency to 2600 MIN and 2700 MAX, or just the max one with min at 500 (default)?
Try setting it to those and check again. Mine crashes sometimes after a few rounds in Port Royal at 2600-2700, but is very stable at 2565-2665 (that gives around a 2606 average).


----------



## ZealotKi11er

My PR: I scored 11 378 in Port Royal


----------



## delgon

ZealotKi11er said:


> My PR: I scored 11 378 in Port Royal


That is a very nice score! So that is what 2655-2755 looks like


----------



## Mickey Padge

Hey, great info in this thread, thanks for that! 

Does anyone know of a waterblock for the Asrock 6900 XT Phantom Gaming please? Or is there still nothing compatible?

Cheers!


----------



## ZealotKi11er

delgon said:


> That is a very nice score! So that is what 2655-2755 looks like


This card did 2750 in PR but could not do over 2600 in TS.


----------



## SLAY3R8888

alexp247365 said:


> Any way to tell if you have a good card that just needs more voltage? TimeSpy doesn't want to complete at 2700mhz. (LE Red Devil under water, EVGA plat 1200w psu.)
> 
> Temps are good.
> Frequency on Timespy fluctuates from 2590mhz up to 2640mhz.
> Maybe a setting in MPT Is off?
> 
> View attachment 2479662
> View attachment 2479664
> 
> 
> View attachment 2479646
> 
> 
> View attachment 2479660
> 
> 
> Temps are good, but that doesn't seem to have an effect. I'm new to AMD, so any suggestions would be appreciated. It doesn't appear to be resetting Wattman.
> 
> Thanks!


Probably just needs more voltage. That's how my Merc is. 2520-2620 is the highest I can get through Time Spy consistently. Temps are excellent; I just need more than 1175 mV to go further. I will be keeping my eyes open for ways to raise voltage further in the future. 

If you are hitting the power limit, then you could try raising that more, but other than that I bet it needs more voltage.


----------



## alexp247365

SLAY3R8888 said:


> If you are hitting the power limit, then you could try raising that more, but other than that I bet it needs more voltage.


I think I had it set for 402 W (350 x 1.15). Time Spy was showing 389 W before crashing in the first part. It never made it to the actual GPU tests. That might have been me just playing around with sliders and trying 2800, though.

Also, I noticed if I change this setting in MPT, I pretty much crash on the desktop : 










Wondering if that is a hard cap for my card? I don't think I've seen the card run any higher than 2651mhz, even when I knew the OC wouldn't be stable.


----------



## SLAY3R8888

alexp247365 said:


> I think I had it set for 402 watts 350*1.15. Timespy was showing 389 before crashing in the first part. It never made it to the actual GPU tests. That might have been me just playing around with sliders and trying 2800, though.
> 
> Also, I noticed if I change this setting in MPT, I pretty much crash on the desktop :
> 
> View attachment 2479681
> 
> 
> Wondering if that is a hard cap for my card? I don't think I've seen the card run any higher than 2651mhz, even when I knew the OC wouldn't be stable.


Mine says the same as yours for that 2660 value. Idk what that value does, but I've tried to raise it and had the same issue as you; it won't work correctly.

I've also tried raising voltage, and same result: driver takes a **** on itself.

I've actually seen my card run over that value, I think in Time Spy test 1, so I don't think it's really a limit for the boost clock.

I think voltage cap is the limiting factor for many of us at this point. Your power limit looks good.


----------



## delgon

Here is what I "found out" about some of them and what I think about them.

Tab *Features*. (no pic) This one is pretty weird. You can turn off some functionality, like the ability to change Core Clock or Memory Clock in the software, as well as some other things. Not really useful IMO. Might be interesting to test checking/turning on some things in "Feature Control" and seeing if it does anything; not sure what they do tho.


In Overdrive, we in theory have the ability to increase the limits on *GFX Clock *and *Memory Clock*. The first one I did not test because I'm well below that limit. With the *Memory Clock,* I noticed that I was able to boot with it raised, but the driver gets kinda weird: IIRC the GPU clock gets hard-locked to a 500 MHz max if you increase it.
*1. *This option might be interesting. You can set it to *2 *and you will get another option in the software for VRAM timings. Those make my VRAM unstable almost instantly, even @ 2000 MHz. Nice to know tho 


*1. *Those are what you might want to tinker with. If you wanna undervolt your GPU, setting a lower value on the *GFX *part is what you wanna do; setting it lower in the software will not really do what you want. Depending on your limiting factors, like temperature or power limit, lowering the voltage might help with those, but it can also lower your max overclock. For me, I get the same OC with either 1.175v (current max) or 1.163v, so I went with a little undervolt for less power-hungry operation. The *SoC *part can also be undervolted. I did not see any improvement in performance keeping it at the default 1150, and undervolting it lowers the SoC power consumption by a few W in my case; you gotta squeeze what you can  IMO 975 is a good starting point, and you can either increase or decrease depending on your stability.
*2. *I would leave them as they are by default.
*3. *That is the option most people wanna touch. It is how much wattage your card can pull (+15% to this value from the Radeon Software if you wanna increase it; you can also just put the total value here and leave the software slider at 0). For example, I have mine at 340 +15% => 391W max power usage. I would personally not recommend going above that. The theoretical (standard) safe value for the reference model would be 375W in total (~325W in MPT +15% from the increased power limit). The reasoning is that your card can pull 75W from the PCIe slot + 150W from each 8-pin connector; those are their rated values. For the reference card that would be 75W + 2x150W = 375W. In practice your cables can push more than 150W, but that is outside the specification. 6-pin connectors provide an additional 75W of power each, if your card has any.
*4. *If you are hit by "Throttle Reason - Current", this might be a way to get rid of it. I would not go too far tho. On the reference model, you have 11 power stages capable of 70A each; I would not recommend putting more than 40A through each of them. Increasing it by like 15-25 might be OK, but overall you should not need to touch this.
*5. *I would leave it as is. I do not think you will reach the current limit on the SoC.
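The arithmetic in point 3 is easy to get wrong when juggling the MPT value and the driver slider. Here is a quick sketch of the math described above (the function names are mine, not anything from MPT):

```python
# Hypothetical helpers illustrating the power-limit math described above:
# rated board budget = PCIe slot (75 W) + 150 W per 8-pin + 75 W per 6-pin,
# and the effective cap = MPT "Power Limit GPU" value * (1 + slider %).

def spec_board_power(eight_pin: int, six_pin: int = 0) -> int:
    """Rated power budget per the connector specs (watts)."""
    return 75 + 150 * eight_pin + 75 * six_pin

def effective_limit(mpt_watts: float, slider_percent: float = 15.0) -> float:
    """Max draw when the Radeon slider is maxed on top of the MPT value."""
    return mpt_watts * (1 + slider_percent / 100)

print(spec_board_power(eight_pin=2))  # reference card: 375 W
print(round(effective_limit(340)))    # 340 W + 15% = 391 W
```

So a reference board with two 8-pins has a 375 W in-spec budget, and 340 W in MPT plus the +15% slider lands at about 391 W, matching the numbers in the post.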


*1. *This, like some mentioned, can be changed, but it causes the GPU driver to just crash pretty much instantly. I'm not entirely sure what it really indicates, as I and other users were able to get clocks above it in some testing. 
*2. *Setting it to anything above the default 500 will make you unable to boot into Windows. You will have to boot into Safe Mode to revert it to its default value.
*3. *From my testing, changing this value is possible, but it does not impact performance in any way. Setting it to 2000 does not cause instability or whatever, so it is possible that it does nothing right now. We have no way of measuring SoC frequency, so checking it is also hard.
*4. *Those are Memory Clocks at different P-states. The last one is the default memory clock. You can set it to 1075 (current max) and you will have that as a "minimum" value in the software. Going higher IS possible, and you will boot with it, but your *Core Clock *will get locked to 500 MHz, a common occurrence with this card  I can boot with 1150 (2300 MHz Memory Clock) just fine, but higher values cause artifacts.

That is what I know about those options in MPT. It was a new tool for me, as I switched from Nvidia, but it was fun searching out what they do and testing them myself sometimes. I DO NOT guarantee that everything I said here is correct, so do not quote me on it; these are my and some other users' observations and testing.


----------



## SLAY3R8888

https://www.elmorlabs.com/index.php/product/evc2sx/



Does anybody have experience with this? ^^^^

Would it allow me to run a higher voltage on my card? Would I have to run its software alongside Adrenalin, or does it completely take over power control and eliminate the use of Adrenalin? 

I'm not understanding how this would work entirely; any explanation would be appreciated.

Yes, I understand you have to install it on your graphics card, and that it has software, but I don't know anything more about how / if it would work.

All I'm wanting, that Adrenalin or MPT won't let me do, is to push more voltage than 1175 mV.

EDIT: I watched a YT video of a guy soldering the wires to his 6800 XT, and yeah, that's over my head; don't think I'll be doing this mod lol


----------



## Mickey Padge

The ASRock 6900 XT Phantom OC is a beast!

10111 GPU score in Time Spy Extreme, clocking between 2550 and the 2650s on the stock cooler 

2566 min / 2700 max, 1175 mV, 2110 (2100 real) fast-timing memory, MPT set to 350 +15%.

Just been tinkering today. Doubt there are many running my setup; amazing what old hardware can do 



I scored 7 746 in Time Spy Extreme (Intel Xeon Processor E5-1680 v2, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10) www.3dmark.com

I scored 29 278 in Fire Strike (same system) www.3dmark.com

I scored 10 741 in Port Royal (same system) www.3dmark.com


----------



## majestynl

nyk20z3 said:


> No offense taken, but I've been doing this for many, many years and never had issues with Afterburner or GPU Tweak, so I had to troubleshoot to see what the issue was and whether I could still use the programs I am used to. You have to understand the computer was fine for months on end, and normally a GPU change would not cause issues like this; I literally change builds every few months, and once again, I've never had these issues. So now that I am on an AMD GPU the game has changed, I suppose!





newls1 said:


> this series AMD is not happy using anything OTHER than its own Wattman... I've been doing this for 25+ years and learned to adapt to change, and AMD always = change


Sorry, it has nothing to do with AMD but with the crappy software. For many years they put all their effort into Nvidia GPUs and Intel CPUs. Most of the software is now supporting AMD better again on the CPU side, but GPUs are still a bit behind. Hopefully with Navi things are heading the right way!...


----------



## Mickey Padge

So undervolting only works via MPT, it would seem. Dropped 10°C off the hot spot with 1087 mV, and it still runs a real 2500-ish core speed. I really would love to get this card under water; not likely right now, it would seem. Unless someone tries to cut the back end off the new Red Devil Alphacool block! lol


----------



## delgon

In theory, you can also use the Radeon Software, but to be sure that you actually lower the ceiling of the voltage that can go to the core, you have to use MPT.


----------



## Mickey Padge

delgon said:


> In theory, you can also use the Radeon Software but to be sure that you cutout the ceiling of the available voltage that can go to the core, you have to use the MPT.


For me, my card will use the standard volts regardless; only MPT lets the voltage be set lower. I seemingly cannot overvolt though; the driver crashes, reporting 1018 mV...


----------



## ZealotKi11er

In Radeon settings, the voltage slider is linked to the core clock, and the core takes priority. It is not an undervolt if you also have to lower the core clock. 
Example: 
Your card stock does 2384 MHz @ 1.15v.
You cannot lower the voltage to 1v and expect the card to still hit 2384 MHz.
If you overclock to 2500 @ 1.15v, you will only benefit from this OC if you actually have the power to run at 1.15v.


----------



## alexp247365

Anyone have an issue where, about an hour into gaming, the computer completely reboots? Event Viewer shows a Kernel 41 error, but Wattman doesn't appear to be resetting.


----------



## ZealotKi11er

alexp247365 said:


> Anyone have an issue with about an hour into gaming, that the computer completely reboots? Event viewer shows a kernel 41 error, but Wattman doesn't appear to be resetting.


psu


----------



## alexp247365

Haha, it's the answer I didn't want to hear but somehow knew. I had the same Kernel 41s on a new PSU (Seasonic Platinum 1300 W, purchased a month ago for troubleshooting and returned for other reasons).

Only thing I can do is try to revisit with another PSU. Any recommendations for a 1300 W+ unit that has a silent mode, or whose fans don't spin when surfing the internet?

Thanks


----------



## majestynl

alexp247365 said:


> Haha, it's like the answer I didn't want to hear but somehow knew it. Had the same Kernel 41's on a new PSU (Seasonic Platinum 1300w purchased a month ago for troubleshooting and returned for other reasons.)
> 
> Only thing I can do is try to revisit with another PSU. Any Recommendations for 1300+ that has a silent mode or that the fans don't spin when surfing the internet?
> 
> Thanks


Do you get those kernel issues even on a non-OC / stock profile (CPU)??

I can recommend Seasonic / EVGA !!
Never had issues with those brands, and I've installed approx 200+ units in my life!


----------



## newls1

ZealotKi11er said:


> psu


this 100% .... I had this exact issue when I first plugged in my 6900 XT with a known good PSU (well, what I thought was a good PSU!)


----------



## alexp247365

It is doing it on the default profile now, on an EVGA 1200 W Platinum. Thanks for the suggestions. I'll try and see if anyone is carrying an EVGA P/N 120-G2-1300-XR for normal MSRP. Amazon has scalpers currently selling them for 600+ dollars!


----------



## majestynl

alexp247365 said:


> It is doing it on default profile now on an EVGA 1200w Platinum. Thanks for the suggestions.


That's sh. T 
Any warranty left.. evga has a long period?!


----------



## alexp247365

I bought it in 2019, so it falls within the 10-year period. I don't really have another PSU to use while waiting on the cross-ship, so I'm thinking of buying a new one and selling the refurb replacement at cost.


----------



## Mickey Padge

ZealotKi11er said:


> In radeon setting its a voltage slider that is linked to the core. the core takes priority. It is not a undervolt if you also have to lower core clk.
> Example:
> You card stock does 2384MHz @ 1.15v
> You can not lower voltage to 1v and expect the card to hit 2384MHz.
> If you overclock to 2500 @ 1.15v you only will benefit from this OC if you actually have to power to run at 1.15v.


Not really sure what you mean. My card will default to 1175 mV regardless of what I set in the AMD software. 

Via MPT, however, I can run 1087 mV real, at 2500 core speeds, also real, not just set speed.


----------



## ZealotKi11er

Mickey Padge said:


> Not really sure what you mean. My card will default to 1175mV regardless of what I set in the AMD software.
> 
> Via MPT however, I can run 1087mV real, at 2500 core speeds, also real, not just set speed


When your card defaults to 1175 mV, what is your clock speed? Set the clock speed to 2000 MHz and see what your voltage is. It's not going to be 1175.

If we have unlimited power, the card will run at Fmax/Vmax. If you go over Fmax from default to 2600 MHz, it will still use 1175 mV. 
If you are power limited, it's a different mess.


----------



## alexp247365

Zealot, something similar to this? I think I was trying to find the same thing you are describing. I read in this forum that if you are within ~50 MHz of the set max clock, then you are not limited by either volts or watts.

Max slider / MPT locked watts / MPT locked volts = results

2300 MHz - 255 W, 1087 mV limit = 2241 MHz, 1012 mV, 245 W
2350 MHz - 255 W, 1087 mV limit = 2265 MHz, 1043 mV, 254 W
2350 MHz - 255 W, 1020 mV limit = 2288 MHz, 975 mV, 223 W, 103 fps
2350 MHz - 255 W, 1030 mV limit = 2290 MHz, 981 mV, 224 W, 103 fps
2500 MHz - 255 W, 1087 mV limit = 2250 MHz, 1036 mV, 255 W
2400 MHz - 300 W, 1030 mV limit = 2340 MHz, 1012 mV, 245 W, 108 fps
2500 MHz - 300 W, 1050 mV limit = 2435 MHz, 1050 mV, 245 W, 111 fps
2500 MHz - 300 W, 1062 mV limit = 2430 MHz, 1062 mV, 273 W, 111 fps
2500 MHz - 300 W, 1075 mV limit = 2435 MHz, 1075 mV, 283 W, 115 fps
2600 MHz - 300 W, 1050 mV limit = crash
2600 MHz - 300 W, 1062 mV limit = 2440 MHz, 1062 mV, probably would crash
2600 MHz - 300 W, 1075 mV limit = crash after 10 minutes, 290 W
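For sifting through sweeps like this, a throwaway script can help spot which runs were actually unconstrained and which gave the best efficiency. A sketch using the fps-reporting rows above (the 75 MHz "not limited" cutoff is this thread's rule of thumb, not an official figure):

```python
# Rough sketch for analysing an OC sweep: keep runs whose achieved clock
# is within ~75 MHz of the set clock (i.e. likely not limited), then rank
# by fps per watt. Rows are taken from the sweep in the post above.

runs = [
    # (set MHz, W cap, mV cap, got MHz, got W, fps)
    (2350, 255, 1020, 2288, 223, 103),
    (2350, 255, 1030, 2290, 224, 103),
    (2400, 300, 1030, 2340, 245, 108),
    (2500, 300, 1050, 2435, 245, 111),
    (2500, 300, 1062, 2430, 273, 111),
    (2500, 300, 1075, 2435, 283, 115),
]

# Runs not obviously clock-starved per the thread's 75 MHz rule of thumb.
unlimited = [r for r in runs if r[0] - r[3] <= 75]

# Most efficient run by frames per watt.
best = max(unlimited, key=lambda r: r[5] / r[4])
print(best)  # -> (2350, 255, 1020, 2288, 223, 103)
```

On this data the 2350 MHz / 1020 mV run wins on fps per watt, even though the 1075 mV run has the highest raw fps.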


----------



## ZealotKi11er

alexp247365 said:


> Zealot, something similar to this? I think I was trying to find the same thing you are describing. Read in this forum that if you are around ~50 mhz under max mhz, then you are not limited either by volts, or watts.
> 
> max sldr/ mpt locked watts/ mpt locked volts = results
> 
> 2300mhz – 255w 1087v limited = 2241mhz 1012v 245 w
> 2350mhz – 255w 1087v limited = 2265mhz 1043v 254 w
> 2350mhz – 255w 1020v limited = 2288mhz 975v 223w 103 fps
> 2350mhz – 255w 1030v limited = 2290mhz 981v 224w 103 fps
> 2500mhz – 255w 1087v limited = 2250mhz 1036v 255 w
> 2400mhz – 300w 1030v limited = 2340mhz 1012v 245w 108 fps
> 2500mhz – 300w 1050v limited = 2435mhz 1050v 245w 111 fps
> 2500mhz – 300w 1062v limited = 2430mhz 1062v 273w 111 fps
> 2500mhz – 300w 1075v limited = 2435mhz 1075v 283w 115 fps
> 2600mhz – 300w 1050v limited = Crash
> 2600mhz – 300w 1062v limited = 2440mhz 1062v probably would crash
> 2600mhz – 300w 1075v limited = Crash after 10 minutes – 290 w


Yes, 50-75 MHz under the set clock is normal for the RDNA2 engine. If you are over 100 MHz lower, you are voltage-, power-, or temperature-limited.
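That rule of thumb can be written down as a tiny helper (the thresholds are this post's observations for RDNA2, not anything AMD documents):

```python
# Sketch of the rule above: RDNA2 typically runs 50-75 MHz below the set
# max clock; a deficit past ~100 MHz suggests a voltage, power, or
# temperature limit. Thresholds are the forum's rule of thumb.

def clock_deficit_verdict(set_mhz: int, achieved_mhz: int) -> str:
    deficit = set_mhz - achieved_mhz
    if deficit <= 75:
        return "normal"
    if deficit <= 100:
        return "borderline"
    return "limited (voltage/power/temperature)"

print(clock_deficit_verdict(2500, 2435))  # 65 MHz under -> "normal"
print(clock_deficit_verdict(2500, 2250))  # 250 MHz under -> limited
```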


----------



## Mickey Padge

Very confusing. I was just trying to get the best clocks for the lowest voltage. Seems to be 1087 mV at a real 2500 core speed; pulls around 300 W.

Setting voltages in the AMD tuner does nothing, regardless of clocks.


----------



## ZealotKi11er

Mickey Padge said:


> Very confusing, I was just trying to get the best clocks for the lowest voltage. Seems to be 1087mV at a real 2500 core speed. Pulls around 300w
> 
> Setting voltages in the AMD tuner does nothing, regardless of clocks.


Not confusing at all. The voltage slider is just like Nvidia's: it does nothing. You can't undervolt with Nvidia unless you edit the curve manually; AMD does not give you that option in the driver, but MPT does the same thing. 
Also, if your card's stock Fmax is 2400 and you set it to 2700, you are basically doing +300 MHz. The issue is that it does not apply to the entire curve; it only applies after your Vmax.
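A toy model of what's being described, where raising the max-clock slider moves only the top (Vmax) point of the voltage/frequency curve and leaves the lower points untouched. The curve values here are invented purely for illustration:

```python
# Toy model of the slider behavior described above: only the point at
# Vmax is lifted; frequencies at lower voltages are unchanged, so the
# OC only helps when you have the power budget to actually sit at Vmax.

stock_curve = {0.900: 2100, 1.000: 2250, 1.100: 2350, 1.175: 2400}  # V -> MHz

def apply_slider(curve: dict, new_fmax: int) -> dict:
    vmax = max(curve)             # highest voltage point on the curve
    out = dict(curve)
    out[vmax] = new_fmax          # only the Vmax point moves
    return out

oc = apply_slider(stock_curve, 2700)
print(oc[1.175])  # 2700: the new top, reachable only at Vmax
print(oc[1.000])  # 2250: unchanged at lower voltages
```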


----------



## majestynl

ZealotKi11er said:


> No confusing at all. voltage slider is just like nvidias, it does nothing. You cant undervolt with nvidia unless you edit the curve manually which AMD does not give you that option in the driver but MPT does the same thing.
> Also if you card stock fmax is 2400 and u set to 2700 you are basically doing +300MHz. The issue is that it does not apply to the entire curve. It only does after ur vmax.


Completely true! In the past Radeon SW also had the curve editor, but they removed it.


----------



## newls1

Is it possible in the AMD driver control panel to set an FPS cap on a per-game basis? I did this on Nvidia per game, but I just can't seem to find the option here. I have a 144 Hz panel and want to cap my games at 142 fps. How can I do this?


----------



## majestynl

newls1 said:


> is it possible in the AMD driver CP to set a FPS lock on a per game basis? I did this on nvidia in a per game basis just cant seem to find this option. I have a 144hz panel and want to lock my games to 142hz. How can i do this?


----------



## majestynl

Edited: ***Double post error


----------



## alexp247365

Anyone know if it is possible to edit something in the registry to enable this one setting?


If there were a way to enable this in the registry, I could load the driver only, use MSI Afterburner to monitor, and MPT to set freq/voltage accordingly. Just about anything else can be done in existing software.

Here's to hoping.


----------



## majestynl

alexp247365 said:


> Anyone know if it is possible to edit something in the registry to enable this one setting?
> View attachment 2479846
> 
> 
> If there was a way to enable this in the registry, I could load the driver only, use MSI to monitor, and MPT to set freq/voltage accordingly. Just about anything else can be done in existing software.
> 
> Here's to hoping.


With Adaptive Sync you need to enable it in the Radeon Software, so I'm afraid you can't.

With normal FreeSync it was sometimes activated by default, and enabling it just in the OSD of your monitor was enough. But even then I wouldn't trust it. Personally, I would never remove the Radeon SW in the first place; it's well optimized to run in the background.


----------



## blackzaru

Hey guys/gals,

I should normally receive a 6900 XT this week, and already have a waterblock on the way as well. I have two options for the thermal interface: Noctua NT-H2 thermal paste or Thermalright Silver King liquid metal (I also have MG Chemicals 422C conformal silicone coating).

Are there any substantial gains to be had chasing those extra few degrees on a watercooling loop with AMD's GPUs, like there are with Nvidia's? (I have been an Nvidia user for a very long time, as in almost a decade. I am talking about waterblock with paste vs waterblock with liquid metal.)

Thanks for any answer.


----------



## newls1

Just ordered my Alphacool waterblock for the "TUF" 6900 on Alphacool's website. Guess I'll see it in 30 or so days (coming from Germany, and I'm in the USA). Can't damn wait to have this card under water so this hotspot temp stops yelling at me! Also crossing my fingers that the 6800 XT and 6900 XT "TUF" cards use the same PCB, because the listing says 6800/6800 XT and doesn't mention the 6900. But doing the best research I could, and from a member in here who posted a PCB shot of his 6900 XT, the PCBs looked identical. Crossing my fingers!


----------



## Bart

newls1 said:


> Just ordered my Alphacool Waterblock for the "TUF" 6900 on alphacools website. Guess ill see it in 30 or so days (coming from germany, and Im in the USA) Cant damn wait to have this card underwater so this hotspot temp stops yelling at me! Also crossing my fingers that the 6800XT and 6900XT "TUF" cards use the same pcb cause it says 6800-6800xt cards and doesnt mention 6900, but doing the best research i could, and also a member in here posted a pcb shot of his 6900xt, the pcb's looked identical. crossing my fingers!


Crossing my fingers for you!! If yours fits, I'll be ordering one ASAP.


----------



## newls1

they just became orderable, so as soon as I get it, ill teardown my card and let you know. ill take pics along the way


----------



## newls1

Bart said:


> Crossing my fingers for you!! If yours fits, I'll be ordering one ASAP.


Here, look at this thread I made weeks back... a dude in there showed us his TUF 6900 XT BARE! Same PCB as the 6800 XT TUF... I compared it with a zoomed-in pic of the 6800 XT; I can't see any difference, can you? 



Asus TUF 6900XT PCB pic??

Ive been emailing alphacool back and forth about a possible block that might work with this gpu but they are wanting a pcb picture. I'm currently not going to tear apart my gpu for a pic shot, and googling the interwebs reveals no pics of this bare card. might someone here have a PCB shot of...

www.overclock.net


----------



## Bart

newls1 said:


> Here look at this thread I made weeks back... dude in there showed us his tuf 6900xt BARE! Same PCB as 6800XT tuf... i compared it with a zoomed in pic of 6800xt, i cant see any difference, can you?
> 
> 
> 
> 
> 
> 
> 
> 
> Asus TUF 6900XT PCB pic??
> 
> 
> Ive been emailing alphacool back and forth about a possible block that might work with this gpu but they are wanting a pcb picture. I'm currently not going to tear apart my gpu for a pic shot, and googling the interwebs reveals no pics of this bare card. might someone here have a PCB shot of...
> 
> 
> 
> 
> www.overclock.net


Not off-hand, but I'm old and not an expert on this stuff, so my eyes might be missing something. But for our sake, I hope not! The thing that nags me is why Alphacool doesn't list it as a combo block if it fits properly. They listed other non-reference blocks as 'combo' blocks, so why not that one? Unless they never had a TUF 6900 to confirm. Oh well, kudos to you for having the balls to "buy and try".


----------



## newls1

Bart said:


> Not off-hand, but I'm old and not an expert on this stuff, so my eyes might be missing something. But for our sake, I hope not! The thing that nags me is why Alphacool doesn't list it as a combo block if it fits properly. They listed other non-reference blocks as 'combo' blocks, so why not that one? Unless they never had a TUF 6900 to confirm. Oh well, kudos to you for having the balls to "buy and try".


It's all I could do! If it doesn't work, someone will want to buy this block off me....

EDIT** Just got an email from Aquatuning, and it looks like it's shipping today.. kinda shocked!


----------



## HeLeX63

newls1 said:


> My prior wattman settings were 2650 min / 2750 max and MPT settings were 325 (watts) 375 (TDC)
> 
> New settings
> 
> Wattman settings 500MHz min / 2750 max
> MPT 335 (watts) 390 (TDC)
> 
> applied sppt and rebooted...
> 
> ran timespy, and got above results.


I thought we only change Power Limit GPU (W) to increase power... why have you changed TDC Limit GFX ???


----------



## Roboionator

Hi, I will now switch from Nvidia to AMD and am wondering if there will be new drivers in the future that improve performance.
What should I pay attention to? The last time I was on an AMD GPU was 10 years ago.
AMD Radeon™ RX 6900 XT Graphics
THX


----------



## newls1

newls1 said:


> Just ordered my Alphacool Waterblock for the "TUF" 6900 on alphacools website. Guess ill see it in 30 or so days (coming from germany, and Im in the USA) Cant damn wait to have this card underwater so this hotspot temp stops yelling at me! Also crossing my fingers that the 6800XT and 6900XT "TUF" cards use the same pcb cause it says 6800-6800xt cards and doesnt mention 6900, but doing the best research i could, and also a member in here posted a pcb shot of his 6900xt, the pcb's looked identical. crossing my fingers!


This is freaking incredible! So I ordered this block straight from Alphacool in Germany 2 days ago. I live in the USA (state of Georgia); by tracking number the block shows a delivery date of tomorrow, and it's already in Philadelphia!! This is the fastest shipping I've ever in my life heard of for a package having to cross the huge pond! PROPS TO ALPHACOOL!! Amazing service: 5 days to get something from Germany to the lower USA.... Insane!!!! I've had packages shipped from Newegg take twice as long!


----------



## OrionBG

newls1 said:


> This is freaking incredible! So I ordered this block straight from alphacool in Germany 2 days ago. I live in the USA (State of Georgia) this block by tracking number shows delivery date is tomorrow and its already in Philadelphia!! This is the fastest shipping ive ever in my life heard of for a package having to cross the huge pond! PROPS TO ALPHACOOL!! Amazing service! so 5 days to have something from Germany to the Lower USA.... Insane!!!! Ive had package ship from newegg take twice as long!


Lucky you!  I ordered my Red Devil block from them on the 6th of February and it still hasn't even shipped... and I live in Europe...
Anyway... I also ordered some parts from EK and they are also not shipped yet, so no hurry... 
I decided that I really like the look of EK's ZMT tubing, and I'll be replacing all the hard tubes with it...
... oh, and I forgot that I'm also waiting on an LCD screen from AliExpress that will show some PC stats...


----------



## newls1

Some teardown pics, and cleaning all the paste and thermal pads off.. Such a large PCB. Sure hope this block fits; I'm kinda committed now!


----------



## Bart

Hope that block fits dude!!! If it does, I'm ordering one ASAP!!


----------



## newls1

Bart said:


> Hope that block fits dude!!! If it does, I'm ordering one ASAP!!


You'll be the first to know, trust me... the PCBs look identical, so I feel pretty good about this. If it fits, just know that from the day I ordered to the block being at my house in the USA looks to be 5 days, so you won't have to wait long.


----------



## HeLeX63

alexp247365 said:


> Any way to tell if you have a good card that just needs more voltage? Time Spy doesn't want to complete at 2700MHz. (LE Red Devil under water, EVGA Plat 1200W PSU.)
> 
> Temps are good.
> The frequency in Time Spy fluctuates from 2590MHz up to 2640MHz.
> Maybe a setting in MPT is off?


I can barely run Time Spy on my card. LE Red Devil under water as well... it crashes all the time. I noticed that any game with ray tracing causes my stable clocks to become unstable. I've only been able to complete a single Time Spy run at stock speeds; any OC would crash.


----------



## Bart

newls1 said:


> You'll be the first to know, trust me... the PCBs look identical, so I feel pretty good about this. If it fits, just know that from the day I ordered to the block being at my house in the USA looks to be 5 days, so you won't have to wait long.


Is it done yet??!?!?! 😁 J/K bro. My stupid black screen issues returned with a vengeance, so I'm no longer eager to block this card just yet, not until I figure out this black screen issue. I might be RMAing this piece of crap, unless it turns out I just have a funky PSU / PCIe cable.


----------



## newls1

When I first got this card, I had random shut-offs, black screens, and constant reboots. I was using a 1300W EVGA PSU. Replaced it with a 1300W Super Flower (same PSU platform, really) and all my issues went away... ALL OF THEM. Couldn't believe my EVGA PSU was at fault when it was just fine powering my 2080 Ti... The 6900XT pulls some serious power and will find the weakest link fast.


----------



## Bart

newls1 said:


> When I first got this card, I had random shut-offs, black screens, and constant reboots. I was using a 1300W EVGA PSU. Replaced it with a 1300W Super Flower (same PSU platform, really) and all my issues went away... ALL OF THEM. Couldn't believe my EVGA PSU was at fault when it was just fine powering my 2080 Ti... The 6900XT pulls some serious power and will find the weakest link fast.


Could have been a funky PCIe cable somewhere too; I'm starting to suspect that with mine. I just swapped to a LOWER-wattage PSU, a Seasonic 750W Focus Gold as opposed to the Corsair RM850 I was using, and no black screens yet. But after one quick Time Spy run, it becomes apparent that even without black screens, 750W is nowhere near enough for a 6900XT + 5950X. I was black-screening like crazy this evening and had that 750W lying around unused, so I decided to test with it. I think a new PSU is in the cards for me now.


----------



## alexp247365

newls1 said:


> Replaced it with a 1300w Super FLower


Still out of stock at the link you gave me:

Super Flower Leadex Gold 1300W, 80 Plus Gold, ECO Fanless & Silent Mode, Full Modular (SF-1300F14MG) on www.amazon.com


----------



## Bart

Yeah I talked to my local nerd store, and they say miners have cleaned out higher end units. If you're looking for 1000-1300w, you're SOL right now without spending way too much money.


----------



## CS9K

blackzaru said:


> Hey guys/gals,
> 
> I should normally receive a 6900XT this week, and I already have a waterblock on the way as well. I have 2 options for the thermal interface: Noctua NT-H2 thermal paste or Thermalright Silver King liquid metal (I also have MG Chemicals 422C conformal silicone coating).
> 
> Are there any substantial gains from those extra few degrees in a watercooling loop with AMD's GPUs, like there are with Nvidia's? (I've been an Nvidia user for almost a decade. I'm talking waterblock with paste vs. waterblock with liquid metal.)
> 
> Thanks for any answer.


I personally don't think it'll be worth your trouble to use LM.

I've experimented with different pastes over the years, and find that Gelid GC-Extreme and Hydronaut resist pump-out the best in bare-die applications. I have GC-Extreme on my 6800 XT, as does the b/f on his 3080 FE.


----------



## nyk20z3

Anyone here mine with NiceHash using a 6900XT?


----------



## doomsdaybg

nyk20z3 said:


> Anyone here mine with NiceHash using a 6900XT?


I mine sometimes.


----------



## nyk20z3

doomsdaybg said:


> I mine sometimes.


Just checking. I've never done it before, but why not put the card to use when I'm not gaming?


----------



## SLAY3R8888

HeLeX63 said:


> I thought we only change Power Limit GPU (W) to increase power... why have you changed TDC Limit GFX ???


From my testing, and from others I've spoken to, there is no tangible overclocking benefit from raising TDC; it should just be an artificial limit. I could be wrong on this, but I have seen my card pull 385-400W, and TDC has never needed to be raised and has never been a barrier. I wouldn't worry about messing with it, you'll be fine. Hope this helps!


----------



## newls1

Okay, UPDATE FOR ASUS TUF 6800/6800XT AND 6900XT OWNERS...... The Alphacool 11955 block for the 6800/6800XT WILL FIT THE 6900XT, as I am now proof. Freshly ordered direct from the manufacturer in Germany, and it took 4 days to get here. The shipping speed was unreal: Germany to the USA in 4 days. The block fits the 6900XT with zero issues, and temps are unbelievable. I'll post up a pic of Heaven after it ran for 10-12 minutes in a 68-69°F ambient room.


----------



## newls1

First Time Spy run with the waterblock, and I beat my personal best by over 200 POINTS!!! *Now ranked 34th best in Time Spy with 1 GPU!!* Dropped my hotspot temps by 50°C!!!!! I was 106°C on the TUF heatsink; now look at the hotspot temp in the pic... HOLY SMOKES!! Huge difference. I LOVE YOU ALPHACOOL!


----------



## Bart

Woohoo! I just placed my order!!


----------



## newls1

Bart said:


> Woohoo! I just placed my order!!


If I can just add one thing... I always make it a habit to check all the Allen screws prior to assembly/mating to the GPU, and I'm glad I did. A few screws felt a little too loose for my liking, so I gave every single Allen screw a nice hand-tighten. Gives me a little more personal security that there will not be a leak. Just looking out. Other than that, this block's build quality is amazing.


----------



## Bart

newls1 said:


> If I can just add one thing... I always make it a habit to check all the Allen screws prior to assembly/mating to the GPU, and I'm glad I did. A few screws felt a little too loose for my liking, so I gave every single Allen screw a nice hand-tighten. Gives me a little more personal security that there will not be a leak. Just looking out. Other than that, this block's build quality is amazing.


You bet, that's standard operating procedure for me too. Thanks for the heads up though!!


----------



## newls1

I'm 34th... pretty stoked about this!!!


----------



## CS9K

newls1 said:


> I'm 34th... pretty stoked about this!!!


Awesome! You've got a damn good specimen on your hands! A bit envious, I'm not going to lie. My reference RX 6800 XT is only stable up to 2550MHz on the max clock slider under water. Temps are great, I've got thermal headroom for ages, just need me some voltage control to go further.


----------



## HyperC

Think the new drivers are giving me worse scores, hope somebody cracks the amd restrictions soon, like yesterday


----------



## ZealotKi11er

CS9K said:


> Awesome! You've got a damn good specimen on your hands! A bit envious, I'm not going to lie. My reference RX 6800 XT is only stable up to 2550MHz on the max clock slider under water. Temps are great, I've got thermal headroom for ages, just need me some voltage control to go further.


You need a better card. If your current card can only do 2550MHz now, even with unlocked voltage you might only get 2600MHz.


----------



## CS9K

ZealotKi11er said:


> You need a better card. If your current card can only do 2550MHz now, even with unlocked voltage you might only get 2600MHz.


I know it wasn't your intention, but you say that as if it wasn't a minor miracle in the first place that I actually managed to score a reference RX 6800 XT at MSRP.

I'm quite aware of the bin I have, and am not really mad about it. The performance increase at 4k from another 100-200MHz would be nice, but isn't the end of the world to not have for now. 

_edit_ I suppose I should add: I've researched around and looked at numbers people are getting out of RX 6900 XT's, reference and non-reference alike. IF I got a decent bin and put it under water, it would be a tangible (and welcome) upgrade at 4k. But, if I got even an 'average' bin RX 6900 XT, I'm effectively no better off than I am now. Should I have just gone RX 6900 XT in the first place? Yes, and I call it "the gamer's lament". I've been doing this long enough I should have known better. Ah well...


HyperC said:


> Think the new drivers are giving me worse scores, hope somebody cracks the amd restrictions soon, like yesterday


Relevant to my quoted post above: My card was stable at 2600MHz on the slider with 20.12.2, but 2600MHz is not stable as of 21.1.1 and onwards, hence 2550MHz.


----------



## newls1

I guess I should feel lucky to game at 2750+ on my 6900...


----------



## Bart

newls1 said:


> I guess I should feel lucky to game at 2750+ on my 6900...


I think that's pretty much at the upper limit for what this card can do without being able to increase the voltage. Which is pretty awesome IMO! Maybe that's the benefit of spending the extra few hundred quid. Considering that by the time we block this card, we're in the 2 grand range (at least in CDN dollars), it's nice to have SOME benefits, LOL!


----------



## newls1

Bart said:


> I think that's pretty much at the upper limit for what this card can do without being able to increase the voltage. Which is pretty awesome IMO! Maybe that's the benefit of spending the extra few hundred quid. Considering that by the time we block this card, we're in the 2 grand range (at least in CDN dollars), it's nice to have SOME benefits, LOL!


What the hell is a quid 😁... USD here my friend, but yeah, still almost $2000 for sure.


----------



## HyperC

Someone posted a while back about der8auer using 1.20V; I have yet to find anything on this. Was he using a certain driver, or MPT? Looking to blow up this $1800 card. First time using AMD in many years, and the one time I switch over they nerf the cards... Another odd thing I noticed looking at the BIOS: the ASUS LC card has a higher TDC limit, including for SOC.
Wonder what the reason is behind that. Might try using 2 PSUs tonight and go crazy with MPT numbers.


----------



## majestynl

ZealotKi11er said:


> You need a better card. If your current card can only do 2550MHz now, even with unlocked voltage you might only get 2600MHz.


You can't know this. He could have a very hungry chip that likes voltage. I can't imagine he would gain just +50MHz with, let's say, 2000mV+ of voltage...

I did have VIIs and Vega 64s that were very hungry and only managed to clock high with lots of juice.


----------



## HeLeX63

I got a Port Royal Score of 11249 on my 6900XT with OC. How does this stack up against others on here with the 6900XT ?


----------



## jfrob75

HeLeX63 said:


> I got a Port Royal Score of 11249 on my 6900XT with OC. How does this stack up against others on here with the 6900XT ?


That score compares nicely with my 11301 port royal score. I have the EK water block on my AMD reference 6900 XT. Like others that have been commenting, my card probably could OC higher if the core voltage could be raised.


----------



## newls1

Here is my Asus TUF 6900XT with the RGB going (for the record, I'm not a fan of RGB, but since it's wired into the block, might as well plug it in 😏)


----------



## HyperC

newls1 said:


> Here is my Asus TUF 6900XT with the RGB going (for the record, I'm not a fan of RGB, but since it's wired into the block, might as well plug it in 😏)
> 
> View attachment 2480468
> 
> 
> View attachment 2480469


Looks like we're using the same Core X9 case. The height clearance sucks, and hiding wires is the worst part.


----------



## newls1

HyperC said:


> Looks like we're using the same Core X9 case. The height clearance sucks, and hiding wires is the worst part.


Yes, but I do like the fact that I'm running two 420mm x 60mm rads in push/pull up top, blowing all that air down on my VRM and RAM; it really helps a ton. Also using a 240mm x 45mm rad on the bottom for no reason other than lowering the flow rate just a tad, otherwise the water in my res was like a whirlpool LOL.


----------



## H3||scr3am

I have an AMD reference card, what kind of clocks/settings are you guys using? I hear about undervolting, but I have no idea how to do this on AMD as I come from team Green for the most part...


----------



## blackzaru

H3||scr3am said:


> I have an AMD reference card, what kind of clocks/settings are you guys using? I hear about undervolting, but I have no idea how to do this on AMD as I come from team Green for the most part...


After using MPT to push power to 350W and tdc to 370A, I'm running at 2525MHz at 1.050V. Although, every chip is different.


----------



## H3||scr3am

MPT = MorePowerTool, so that's flashing the BIOS, yeah? I assume there are no soft mods to undervolt? The Radeon tools appear useless... I set the GPU to target 1100mV and it just pulls the full 1175 anyway... Guess I need to read up on MPT and the utility that goes with it; I'm so used to NVFlash.


----------



## newls1

H3||scr3am said:


> MPT = MorePowerTool, so that's flashing the BIOS, yeah? I assume there are no soft mods to undervolt? The Radeon tools appear useless... I set the GPU to target 1100mV and it just pulls the full 1175 anyway... Guess I need to read up on MPT and the utility that goes with it; I'm so used to NVFlash.


You don't have to flash the BIOS on your new card. MPT modifies the SoftPowerPlayTable (SPPT), so once you program the settings you want and apply them, simply rebooting the machine is all you have to do. Just be careful: the card is thirsty for more watts, so make sure you have a means to cool it.


----------



## weleh

Are you guys using HWINFO to monitor effective clocks?


----------



## newls1

weleh said:


> Are you guys using HWINFO to monitor effective clocks?


yeppers


----------



## weleh

On my best Time Spy run on my 6800 XT Nitro+ SE, the difference between the set clock and the average effective clock is 43 MHz. Is this normal, or considered clock stretching?









I scored 19 195 in Time Spy (AMD Ryzen 7 5800X, AMD Radeon RX 6800 XT, 16384 MB, 64-bit Windows 10) on www.3dmark.com
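For context, clock stretching is usually gauged by the relative gap between the set clock and the effective clock HWiNFO reports. A minimal sketch of that arithmetic (Python purely for illustration; the ~2600 MHz set clock is an assumption, since the post doesn't state it):

```python
# Clock stretching shows up as the effective clock (HWiNFO) sagging below
# the requested clock. This expresses the 43 MHz gap from the post as a
# percentage; the 2600 MHz set clock is an illustrative assumption only.

def stretch_pct(set_mhz: float, effective_mhz: float) -> float:
    """Percentage by which the effective clock falls short of the set clock."""
    return (set_mhz - effective_mhz) / set_mhz * 100

print(round(stretch_pct(2600, 2600 - 43), 2))  # 1.65
```

A gap of a percent or two under load is generally considered normal droop; much larger gaps are the classic sign of stretching under an aggressive undervolt.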


----------



## newls1

This is the 6900XT thread; why is such a peasant, sub-par 6800XT posting in this thread?


----------



## Martin778

I think my brand new 6900XT is either about to break or a very bad chip. It works fine in games and 3DMark, but it will spit out the driver if I even touch the voltage slider, even without load, just on the desktop. Tried reinstalling the drivers, but nope... I don't know how people get 2000MHz+ at ~1.02-1.05V; I tried 2000 @ 1.05 and it crashed in 1 second.

For example: I go to my Wattman settings, set the voltage slider to ~90%, PL +15%, and the second Unigine Heaven starts, the whole thing crashes.


----------



## HyperC

newls1 said:


> This is the 6900XT thread; why is such a peasant, sub-par 6800XT posting in this thread?


Yes how dare you post such rubbish here


----------



## HyperC

Martin778 said:


> I think my brand new 6900XT is either about to break or a very bad chip. It works fine in games and 3DMark, but it will spit out the driver if I even touch the voltage slider, even without load, just on the desktop. Tried reinstalling the drivers, but nope... I don't know how people get 2000MHz+ at ~1.02-1.05V; I tried 2000 @ 1.05 and it crashed in 1 second.
> 
> For example: I go to my Wattman settings, set the voltage slider to ~90%, PL +15%, and the second Unigine Heaven starts, the whole thing crashes.


I haven't tried UV with MPT, only through the driver. Before you start thinking your card sucks: what does it do normally, with stock settings and with OC settings? If you have issues there, you might want to check out the power supply; maybe the 12V rail is dropping. Or, if you are using MPT, reload the original saved BIOS and test again.


----------



## Martin778

I don't have any other tools that would adjust the PL (Afterburner is only used for the OSD with RT). But the card behaves in a very weird way: if I set the core clock to 2000 and the voltage to 1100mV, it tries to do 2000MHz at 0.83V and crashes within 1-2 seconds.
I doubt it's the PSU, as I run 2 separate 8-pin cables and it's a 1300W Seasonic; the card also runs relatively well at stock, though still worse than a 6800XT. The same PSU ran a 3090 @ 450W PL just fine.
I don't really understand what the card is trying to do. Isn't there a curve editor hidden somewhere?


----------



## CS9K

Martin778 said:


> I don't have any other tools that would adjust the PL (Afterburner is only used for the OSD with RT). But the card behaves in a very weird way: if I set the core clock to 2000 and the voltage to 1100mV, it tries to do 2000MHz at 0.83V and crashes within 1-2 seconds.
> I doubt it's the PSU, as I run 2 separate 8-pin cables and it's a 1300W Seasonic; the card also runs relatively well at stock, though still worse than a 6800XT. The same PSU ran a 3090 @ 450W PL just fine.
> I don't really understand what the card is trying to do. Isn't there a curve editor hidden somewhere?


Ignore the absolute value that the voltage slider displays. The voltage slider applies an offset to the voltage/MHz curve of the GPU. 

So your 1105 setting is 1175 - 1105 = a 70mV _negative_ offset to the default curve.

Start at 1175 on the voltage slider, set your desired maximum clock speed, then reduce the voltage slider until you find instability. Then, raise it a bit to re-gain stability.

Also, the tentacles of MSI Afterburner run deep into your PC. I don't recommend using it at all with AMD GPU's if you can help it.
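To make the offset arithmetic above concrete, here is a minimal sketch (Python purely for illustration; the 1175 mV stock maximum matches the reference cards discussed in this thread):

```python
# Wattman's voltage slider is an offset against the stock voltage/frequency
# curve, not an absolute target. Sketch of the arithmetic, assuming a
# 1175 mV stock maximum as on the reference cards discussed above.

STOCK_MAX_MV = 1175

def curve_offset_mv(slider_mv: int) -> int:
    """Offset applied to the whole V/F curve; negative = undervolt."""
    return slider_mv - STOCK_MAX_MV

# A slider setting of 1105 is a 70 mV undervolt across the curve,
# not "run everything at 1105 mV":
print(curve_offset_mv(1105))  # -70
```

So a card that is unstable with any negative offset simply needs the slider left at 1175.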


----------



## Martin778

Oooh, so it works differently than with MSI AB / Nvidia cards. I thought it would use the set voltage as an absolute target value.

Is there no curve editor for Navi, like there is for GTX/RTX cards? I'm having a hard time understanding how it works with AMD. I dropped from 1175mV to 1140mV but it didn't do anything, as the clock was already lower than the 2550 I had set.


----------



## newls1

CS9K said:


> Ignore the absolute value that the voltage slider displays. The voltage slider applies an offset to the voltage/MHz curve of the GPU.
> 
> So your 1105 setting is 1175 - 1105 = a 70mV _negative_ offset to the default curve.
> 
> Start at 1175 on the voltage slider, set your desired maximum clock speed, then reduce the voltage slider until you find instability. Then, raise it a bit to re-gain stability.
> 
> Also, the tentacles of MSI Afterburner run deep into your PC. I don't recommend using it at all with AMD GPU's if you can help it.


(IN CAPS) REMOVE AFTERBURNER NOW!!! AND MAKE SURE ALL TRACES OF IT ARE OFF THE PC! Use MPT to set the power limit to 335W, write the SPPT with that, and reboot. Then add 10% power limit in Wattman (368.5W), and max out your voltage slider for right now. Set the min slider to 2400 (or leave it at 500MHz), set the max slider to 2650, and recheck with Time Spy. MAKE SURE YOUR MPT ADJUSTMENT ACTUALLY PROGRAMMED THE SPPT by hitting Ctrl+Shift+O to bring up the driver-based Radeon overlay showing temps and power, and watch what it reads for watts: it should now peak in the 300s. You should be stable, provided you adjusted your fan curve to compensate for the increased power limit. Once stable, adjust the fans to your liking, and reduce the power limit % to throttle yourself if you feel the need.

**EDIT: Please make sure you know how to actually use the MPT program. If you need help, let us know; there are guidelines somewhere way back in this thread.


----------



## Martin778

Got it; removed Afterburner and cleaned the registry.
Everything is blank and greyed out in my MPT, and I did run it as administrator. What am I missing? Exporting the VBIOS with GPU-Z and loading it with MPT?!

+
OK, exported the BIOS through GPU-Z and loaded it in MPT:


----------



## newls1

Martin778 said:


> Everything is blank and greyed out in my MPT. What am I missing?


Because you have no idea how to use the program. Stand by...

**EDIT
To use MPT you need 2 programs: MPT itself and GPU-Z.
1. Open GPU-Z and click "save BIOS file" (I just save it to the desktop).
2. Open MPT and click "load BIOS file", then select the file you saved to the desktop.
3. Go to the "Power and Voltage" tab and change the following:
A) Power Limit GPU (W) to 335
B) TDC Limit GFX (A): I think I put mine at 390 (this is optional; some say it does nothing)
4. Click save (save the file to the desktop), then click the "Write SPPT" button.
5. If you did everything correctly, a box should come up saying something like "successful programming".
6. REBOOOOOT!

Now go into Wattman and make the adjustments above.

**EDIT #2, from my post above:
Add 10% power limit (368.5W), and max out your voltage slider for right now. Set the min slider to 2400 (or leave it at 500MHz), set the max slider to 2650, and recheck with Time Spy. Make sure your MPT adjustment actually programmed the SPPT by hitting Ctrl+Shift+O to bring up the driver-based Radeon overlay showing temps and power, and watch what it reads for watts: it should now peak in the 300s. You should be stable, provided you adjusted your fan curve to compensate for the increased power limit. Once stable, adjust the fans to your liking, and reduce the power limit % to throttle yourself if you feel the need.
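As a sanity check on the numbers in the steps above: the Wattman power-limit slider scales the base limit written with MPT, so a 335 W base with +10% gives the 368.5 W quoted. A quick sketch of that arithmetic (Python just for illustration):

```python
# Effective board power limit = MPT base limit scaled by the Wattman
# power-limit slider percentage. Values match the example in the post.

def effective_power_limit(mpt_base_w: float, slider_pct: float) -> float:
    """Return the power limit after applying the Wattman % slider."""
    return mpt_base_w * (1 + slider_pct / 100)

print(round(effective_power_limit(335, 10), 1))  # 368.5
```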


----------



## Martin778

Ok, did just that. Saw a peak of 317W in Heaven (easier to monitor when run windowed), then I changed the settings in Wattman, but Time Spy crashed in a second. Now I've got this :| The whole desktop is squeezed into where the overlay was, even after rebooting. How bizarre... even after powering the PSU off.
Never had it crash like that before. After a reboot it would show "40" on the POST display, meaning it was in some deep sleep state. Rebooted with the DP cable unplugged, plugged it back in, and the desktop was fine, but the VGA LED was lit on the motherboard... one more reboot and now it's OK. Funky.

It did show a 309W peak in Heaven, so I assume the MPT trick worked. How "sticky" are these MPT settings: until reboot, or until a Windows / driver reinstall?


----------



## newls1

Did you adjust the power limit to +10%? If you are still having issues with the card, it's either a software issue with your Windows install, the driver, or the PSU (and I hate to say that, because a 1300W Seasonic is a great PSU)... My 6900XT took out my EVGA 1200 Platinum PSU the first day using this card, with EXACTLY THE SAME ISSUES you are having. Replaced it with a Super Flower 1300 and I'm good to go. Remember, this card is a murderer because it isn't the sustained load it puts on the PSU, it's the transient peaks. Watched a video someone posted on YouTube showing transients of 500W+ for milliseconds at a time... this will throw an aging PSU into OCP or other protections. In my case, it would either instantly reboot, or the PC would shut off completely and reboot itself.

*EDIT
What was the hotspot temp before it crashed? Before my waterblock, I was 100% guaranteed to CTD if the hotspot got to 106°C+.

**EDIT 2
Here is my post about my PSU:


https://hardforum.com/threads/i-think-im-experiencing-a-bad-psu-please-come-in.2006343/#post-1044913229
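The transient-spike point above can be turned into a back-of-envelope headroom check. Everything here is illustrative: the 500 W spike figure is the one recalled from the video, while the 80% margin and the rest-of-system load are assumptions, not measurements:

```python
# Rough check of whether a GPU's millisecond-scale transient spikes could
# trip a PSU's over-current/over-power protection. The margin models the
# idea that protections can trip well below the label rating, especially
# on aging units; 0.8 is an assumed figure, not a spec.

def psu_headroom_ok(psu_watts: float, rest_of_system_w: float,
                    gpu_transient_w: float, margin: float = 0.8) -> bool:
    """True if the worst-case spike stays inside `margin` of the PSU rating."""
    return rest_of_system_w + gpu_transient_w <= psu_watts * margin

# A 6900 XT spiking to 500 W beside ~200 W of CPU and system load:
print(psu_headroom_ok(750, 200, 500))   # False
print(psu_headroom_ok(1300, 200, 500))  # True
```

This lines up with the experiences in the thread: a 750 W unit that handles the sustained load fine can still fall over on spikes, while 1300 W units have comfortable margin.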


----------



## Martin778

Yes, it's set to +10% in Wattman / the Radeon drivers. I think it's getting angry because it's hitting a 110°C hotspot (to me, another "great" idea of AMD's, just like the 20°C temp offset on the first Ryzens...). The hotspot goes to 110 in roughly 5-6 seconds, even before the MPT mod.


It's a 1-year-old 1300W Seasonic Prime Platinum; it has never shut off or black-screened on me, and I even had an OC'd 2080 Ti Kingpin and an MSI 3090 Suprim X on it.


----------



## newls1

That's the reason right there, 100%: your hotspot temp. Dude, that's insane! Remove the cooler and repaste that ***** NOW! Something isn't right, or none of your fans are spinning. You'd better have the fans set to 100% at this stage! What brand/make of 6900 do you have?


----------



## Martin778

It's a brand new card... then it's just broken, because my GPU die temp is barely 67°C and doesn't do any weird spiking. It can't be right: the card warms up normally, but the hotspot reading is gibberish.


----------



## newls1

Is it an AMD reference card? If so, they use a thermal pad... remove that garbage and use paste. Either way, something is very wrong with your contact if the hotspot goes to 110°C instantly... Don't care if it's a new card or not, LOL! Remove that cooler!


----------



## Martin778

Yes, it's a reference card. I think that readout is just nonsense?

P.S. Don't mind the fan noise and coil whine; it's much louder on video, and the PC is next to the monitor in an open-air case.

+
Now, after manipulating the minimum clock slider, it immediately went black, fans at full. Rebooted the PC, and POST greeted me with "USB overcurrent detected", then it shut down. Rebooted again to a 640x480 desktop.
Man, what a nightmare of a card.


----------



## newls1

I think it's accurate...


----------



## newls1

That is a very easy cooler to remove and RE-PASTE..... throw out that garbage thermal pad


----------



## Martin778

TBF, I'm not sure about repasting, as they pretty much advise against doing so: The hotspot of the Radeon RX 6800 (XT), hurdles in the thermal grease replacement and the correct assembly sequence | igor´sLAB


----------



## newls1

ok then........ guess im done here


----------



## CS9K

newls1 said:


> That is a very easy cooler to remove and RE-PASTE..... throw out that garbage thermal pad


From the video posted, I would chalk that up to bad contact. Your temps shouldn't spike that quickly regardless of your overclock.

If you do decide to re-paste yourself instead of RMAing, I suggest Gelid GC-Extreme or Hydronaut. Most pastes are too thin for bare-die applications and will "pump out" over days/weeks. The b/f and I both use GC-Extreme (I on an RX 6800 XT, he on a 3080 FE, both under water), and temps have been consistent for months. I also don't believe LM is worth it, but that's just my opinion.


----------



## Martin778

LM is out of the question here; I did it on my notebook and it reacted with the copper pretty quickly, the heatsink soaking up the LM and leaving irremovable darker spots.

I need new TIM anyway, so I might give TG Kryonaut Extreme a go. Otherwise, GC-Extreme is a classic; I've been using it for years.


----------



## CS9K

Martin778 said:


> LM is out of the question here, done it on my notebook and it reacted with copper pretty quickly, soaking up the LM and making irremovable darker spots on the heatsink.
> 
> I need new TIM anyways so I might give TG Kryonaut Extreme a go. Otherwise the GC-Extreme is a classic, been using it for years.


Kryonaut is actually on the thin side compared to MX-4/5 and NT-H1/2. Hydronaut and GC-Extreme, meanwhile, are thicccc (I don't think either is oil-based) and won't dry out in a year at the kind of heat levels Navi 21 can dish out.


----------



## Martin778

Kryonaut felt dry after 1-2 months on a CPU, at least in my experience with the first version of Kryonaut, and it even left scratches on my coolers. Haven't tried the Extreme version yet.


----------



## CS9K

Martin778 said:


> Kryonaut felt dry after 1-2 months on a CPU, at least in my experience with the first version of Kryonaut, and it even left scratches on my coolers. Haven't tried the Extreme version yet.


Aye. Most thinner pastes are great for IHS use, but very few pastes work for bare-die. For the record, I spread the GC-Extreme over the die by hand before seating the block; I would not rely on mounting pressure to spread a thick paste like GC-E or Hydronaut.


----------



## Martin778

By the way, how can I reset MPT back to default values? By loading the VBIOS in MPT again and saving?


----------



## ZealotKi11er

Martin778 said:


> By the way, how can I reset MPT back to default values? By loading the VBIOS in MPT again and saving?


Yes, or DDU the drivers.


----------



## Neclock

I would like to know if any of you have an issue with the USB ports on your card. I have an XFX RX 6900 XT Merc, and I have started getting USB overcurrent alerts on the ports. I have never connected anything to them, and it didn't happen until recently.
Have you ever noticed this? Do you recommend a return?


----------



## Martin778

Hmmmm, my motherboard did that once after a GPU driver crash: USB overcurrent at POST.


----------



## alexp247365

Martin, a few of us here have had issues with seemingly good PSUs. I have a 2-year-old EVGA 1200W Platinum that worked fine with a 2080 Ti. However, paired with a Red Devil LE 6900XT, I can only get it to run stable at default settings (and it's under water, too!). It will show that it can pull 389W in Time Spy, but it doesn't finish the demo.

A new PSU is on order as of this morning: the same one Newls1 has, the Super Flower 1300W Gold.


----------



## Martin778

Well, there isn't much I can upgrade to, except maybe the AX1600i... but then it would be cheaper to exchange the card for a 3080 than to replace what's good.

+
Apparently 1300W PSUs are also sold out, just like GPUs.


----------



## majestynl

Martin778 said:


> Ok, did just that. Saw a peak of 317W in Heaven (easier to monitor when run windowed), then I changed the settings in Wattman, but Time Spy crashed in a second. Now I've got this :| The whole desktop is squeezed into where the overlay was, even after rebooting. How bizarre... even after powering the PSU off.
> Never had it crash like that before. After a reboot it would show "40" on the POST display, meaning it was in some deep sleep state. Rebooted with the DP cable unplugged, plugged it back in, and the desktop was fine, but the VGA LED was lit on the motherboard... one more reboot and now it's OK. Funky.
> 
> It did show a 309W peak in Heaven, so I assume the MPT trick worked. How "sticky" are these MPT settings: until reboot, or until a Windows / driver reinstall?


Assuming you have an ASUS mobo!!

Question: are you using a GPU riser cable?

I've seen your kind of problems before. And by the way, also try reading your hotspot with HWiNFO instead of only GPU-Z.


----------



## Martin778

No risers, and with an MSI mobo it was the same.


----------



## newls1

alexp247365 said:


> Martin, a few of us here have had issues with seemingly good PSUs. I have a 2-year-old EVGA 1200W Platinum that worked fine with a 2080 Ti. However, paired with a Red Devil LE 6900XT, I can only get it to run stable at default settings (and it's under water, too!). It will show that it can pull 389W in Time Spy, but it doesn't finish the demo.
> 
> A new PSU is on order as of this morning: the same one Newls1 has, the Super Flower 1300W Gold.


Where did you find stock of that PSU? I'll buy another one just to have...


----------



## newls1

If he has removed the cooler and re-pasted the card by now, and confirmed proper heatsink contact, his issue might already be resolved.


----------



## HyperC

Ordered the EVC2SX; not sure where it ships from, but hopefully I get it within the week. I will post photos and hopefully top scores.


----------



## Martin778

I'm selling the card and going back to camp green; it crashes with the slider set to 1900MHz / 1100mV. After this there's no point in testing anything else, imho.
Not worth tinkering with a brand new card. Considering that it's much weaker than a 6800XT, it's a potato / Monday-morning production that somehow passed QC. Putting money into another PSU, while the current one ran a fully overclocked 3090, is also out of the question.

The card even crashes at 2400MHz @ 1125mV; any values for a 6900XT I saw on the internet crash immediately on this one. While it's obvious that silicon lottery plays a major role, it should not crash at what seem like still very mild undervolts.


----------



## CS9K

Martin778 said:


> I'm selling the card and going back to camp green, it crashes with slider set to 1900MHz / 1100mV. After this, no point in testing anything else imho.
> Not worth tinkering with a brand new card, considering that it's much weaker than 6800XT it's a potato / monday morning production that somehow passed QC. Putting money in another PSU, while the current one ran a fully overclocked 3090 is also out of the question.
> 
> The card even crashes at 2400MHz @ 1125mV, any values for a 6900XT I saw on the internet crash immediately on this one While it's obvious that sillicon lottery plays a major role, it should not crash at what seems like still very mild undervolts.


I.. hmm. 

Stop messing with the voltage slider. Put it up to 1175mV and leave it.

Reduce the core clock maximum slider to what you wish, then adjust the power limit down to a level that you are comfortable with.

Your card, much like my RX 6800 XT reference, cannot handle a negative offset to the stock voltage curve.

It's not a difficult process.


----------



## Martin778

I can do that, but there's no point. Why would I downclock the card below stock values?
Touching any OC/power-related settings doesn't do anything; if the card's hotspot is at 110°C it downclocks anyway.


----------



## CS9K

If you think the card's thermal pad isn't seated properly, either RMA it or pull it apart and re-paste it. In a case with decent airflow, and with a fan curve that makes sense, you shouldn't be warm enough to thermal throttle.


----------



## Martin778

I've been doing some reading, and apparently 100-110°C is considered normal/within spec by AMD, and more people are reporting these temps for reference 6900XTs, however shocking it might sound.
The pad can't be the problem, as then the GPU/die temperature should be all over the place; in fact it's rock stable in terms of increase/decrease when the card is loaded.


----------



## alexp247365

110C is too much. On default settings and a default fan curve, maybe 80C, 90C tops.


----------



## alexp247365

newls1 said:


> where did you find stock of that PSU at? ill buy another one just to have...


Same link you gave me. It was gone less than a minute after I bought it. I saw some websites saying new stock was coming in for Seasonic and EVGA on March 3rd, so I checked as soon as I woke up this morning.


----------



## Bart

I can make my 6900XT hit a hotspot temp of 115C with ease, even with the fans running 100%, that's no big deal at all, and well within AMD spec. The card will just throttle itself to cool off. Cannot wait for the water block to land!!!


----------



## alexp247365

Red Devil must be special, then. Would only hit 110c trying to push 2700 mhz with 75 percent fan speed. Under water now, thankfully (but not stable yet!)


----------



## alexp247365

Martin778 said:


> Putting money in another PSU, while the current one ran a fully overclocked 3090 is also out of the question.


Do 3090s spike to 900W like 6900XTs do? 

Just throwing that out there - my PSU worked fine on a 2080 Ti with one of the custom BIOSes on it... but not for the 6900XT (with 3 pins).


----------



## H3||scr3am

How am I doing so far? I think I've dialed the card in; it's the CPU that's next on the list I guess, and maybe some better RAM... I hear 3600 is the sweet spot, and these DIMMs just can't seem to pull it off well... which is better, 3533/16-20-20-20 @1.395v or [email protected]?


----------



## CS9K

H3||scr3am said:


> How am I doing so far? I think I've dialed the card in, it's the CPU that is next on the list I guess, and maybe some better RAM... I hear 3600 is the sweet spot, and these DIMMs just can't seem to pull it off well... what is better 3533/16-20-20-20 @1.395v or [email protected]?


3533/16-20-20-20 @1.395v by far.
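One quick way to sanity-check that call is first-word latency (CAS latency divided by the effective transfer rate). A minimal sketch; note the competing kit's timings got mangled in the quoted post, so the 3600 CL18 figure below is purely an assumed example:

```python
# First-word latency of a DDR4 kit, in nanoseconds:
#   latency_ns = CAS / (MT/s / 2) * 1000 = CAS * 2000 / MT/s

def first_word_latency_ns(mt_per_s: float, cas: int) -> float:
    """Approximate first-word latency in ns for a given kit."""
    return cas * 2000.0 / mt_per_s

# The 3533 CL16 kit discussed above:
print(round(first_word_latency_ns(3533, 16), 2))  # 9.06 ns
# A hypothetical 3600 CL18 kit for comparison:
print(round(first_word_latency_ns(3600, 18), 2))  # 10.0 ns
```

Lower is better: tight timings at 3533 beat loose timings at a slightly higher clock on latency, which is the usual reason to prefer the first kit.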


----------



## H3||scr3am

meant to include my Timespy score... oops. 









I scored 17 326 in Time Spy

AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com

----------



## doomsdaybg

alexp247365 said:


> Do 3090's spike to 900w like 6900xt's do?
> 
> Just throwin that out there - my PSU worked fine on a 2080ti with one of the custom bio's on it.. but not for 6900xt (with 3 pins)


TPU did some tests; the 3090's spike is under 500W, the 6900XT's ~600W. You can check the image:


https://tpucdn.com/review/msi-radeon-rx-6900-xt-gaming-x-trio/images/power-spikes.png
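For anyone sizing a PSU around those transients, a rough headroom check can be sketched like this. The numbers are only illustrative (the ~600W 6900 XT spike from the TPU graph above, an assumed ~250W for the rest of the system, and an assumed 80% rule-of-thumb margin), not a spec:

```python
# Rough PSU headroom check against short transient GPU power spikes.

def psu_ok(psu_watts: float, gpu_spike_watts: float,
           rest_of_system_watts: float, margin: float = 0.8) -> bool:
    """True if the combined spike fits within `margin` of the PSU rating.
    Quality units ride out millisecond spikes, so 0.8 is a conservative
    rule of thumb, not a hard limit."""
    return gpu_spike_watts + rest_of_system_watts <= psu_watts * margin

print(psu_ok(850, 600, 250))   # False: an 850W unit is cutting it close
print(psu_ok(1300, 600, 250))  # True: plenty of headroom on a 1300W unit
```

Which lines up with the anecdotes in this thread: nominally sufficient 850W units tripping on spikes, while 1200-1300W units shrug them off.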


----------



## Bart

The eagle has landed, from Germany to Canada in 6 days flat, from ordering to arriving. God love this pandemic and how it keeps the airplanes free of people so my junk can land quicker!! This block is for the Asus TUF 6800XT / 6900XT, Alphacool part number 11955, FYI.



Yes this block does look like WALL-E's inbred ******* country cousin, but I will forgive it if the temps are good:


Good riddance to another air cooler, nekkid 6900XT:


First impressions: AWESOME. Mainly because the included install instructions are CLEAR, and in COLOR, and super simple to read. All the included hardware is packaged neatly with big labels on individual parts bags. It's idiot proof. Well done Alphacool!


----------



## newls1

Bart said:


> The eagle has landed, from Germany to Canada in 6 days flat, from ordering to arriving. God love this pandemic and how it keeps the airplanes free of people so my junk can land quicker!! This block is for the Asus TUF 6800XT / 6900XT, Alphacool part number 11955, FYI.
> 
> 
> 
> Yes this block does look like WALL-E's inbred ***** country cousin, but I will forgive it if the temps are good:
> 
> 
> Good riddance to another air cooler, nekkid 6900XT:
> 
> 
> First impressions: AWESOME. Mainly because the included install instructions are CLEAR, and in COLOR, and super simple to read. All the included hardware is packaged neatly with big labels on individual parts bags. It's idiot proof. Well done Alphacool!


SO HAPPY FOR YOU BROTHER!!! Stoked  . Let us see those new "hotspot" temps when you're all done. Alphacool's instructions are AMAZING... you can't screw it up LOL


----------



## newls1

Using this new Alphacool block, here are my current game clocks, whatcha think? This is maxed-out RAGE 2 with the frame cap set @ 144. Core speed absolutely never drops below 2720 and goes up to 2751... This is with the max slider set to 2800 and a 375W limit set in MPT.


----------



## Mickey Padge

newls1 said:


> Using this new alphacool block, here are my current game clocks, whatcha think? This is maxed out RAGE2 with frame cap set @ 144.. Core speed absolutely never drops below 2720 and goes upto 2751... This is with Max slider set to 2800 and a 375w setting set in MPT


Let's see a screen cap of the latest HWiNFO app after a default run of 3DMark Port Royal, please? The latest HWiNFO shows everything for your card, temps etc., and Port Royal will be a good stress test too.


----------



## newls1

In Time Spy it's steady at 26xx-27xx... I'll run Port Royal later on.


----------



## Mickey Padge

newls1 said:


> i timespy steady at 26-27xx.. ill run port royal later on


Port Royal tends to find unstable clock speeds, I have found. And HWiNFO is perfect for seeing if you have a good mount and how well the memory and VRMs are getting cooled, fantastic app.


----------



## newls1

I can see memory temps? I know all about HWiNFO, been using it for YEARS, but thanks for the tip. The mount is just fine on my WB, but again, thanks.


----------



## Mickey Padge

newls1 said:


> i can see memory temps? i know all about hwinfo, been using it for YEARS, but thanks for the tip. The mount is just fine on my WB, but again thanks


Yes, you wouldn't be getting those results otherwise. Just really interested to see what the block is like on that card, and whether it holds that core speed under RT load. 

I have a very unusual setup, retro-style X79 with a Xeon 1680v2 CPU.

My card doesn't like going much over 2700 on the core. It's an ASRock 6900XT Phantom OC; I modified a Corsair 6800/6900 XT water block to fit it. So I backed off the OC to max at 2666 or thereabouts. Very happy with the results; it seems an average card, though, but my rig is set to run very quiet: 320/280 rad, fans hardly over 1000 rpm.

Also: Ctrl + Print Screen, then Ctrl + V into an image app.


----------



## Mickey Padge

Port Royal 

I have found maxing memory lowers my score; setting mem at 2130 with the fast timings option seems to be the sweet spot!


----------



## danny9428

I wonder what you can do about the hotspot temp on reference cards.
The GPU core temp looks fine, but watching the hotspot hit 110C when set to 370W+ in MPT is still kinda iffy to me.

Has anyone tried adding thermal pads at the back of the card, between the core area and the backplate, or maybe a fan blowing onto it?

Regarding PSUs not being able to run RDNA2 cards, I've read that these RDNA2 cards have crazy 20ms peak power spikes which outdo even a power-BIOS-modded 3090, though that appears to only happen when you fire the cards up in Furmark. So far my AX1000 appears to tank it well with a 5950X.


----------



## Mickey Padge

danny9428 said:


> I wonder what you can do with the hotspot temp on reference cards.
> The gpu core temp is fine it looks but watching hotspot hits 110C when in MPT 370W+ is still kinda iffy to me
> 
> Anyone tried adding thermal pads at the back of the card covering the core and the backplate or maybe a fan blowing onto it?
> 
> Regarding to PSUs not able to run RDNA2 cards, I've read that these RDNA2 cards have crazy 20ms peak power spikes which outrivals even a power-bios modded 3090, though that appears to only happen when you attempt to fire up these cards in Furmark. So far my AX1000 appears to tank it well with 5950x


On air my ASRock 6900XT Phantom was hitting 75C+ GPU and 110C+ hot spot when OC'ing to 2500+ core. Now under water that difference is only around 20C: 50-ish core, 70-ish hot spot.

Don't really have any temp comparisons, nobody else is running a corsair reference block on the asrock PCB card! lol


----------



## newls1

Bart, where's my update!


----------



## newls1

danny9428 said:


> I wonder what you can do with the hotspot temp on reference cards.
> The gpu core temp is fine it looks but watching hotspot hits 110C when in MPT 370W+ is still kinda iffy to me
> 
> Anyone tried adding thermal pads at the back of the card covering the core and the backplate or maybe a fan blowing onto it?
> 
> Regarding to PSUs not able to run RDNA2 cards, I've read that these RDNA2 cards have crazy 20ms peak power spikes which outrivals even a power-bios modded 3090, though that appears to only happen when you attempt to fire up these cards in Furmark. So far my AX1000 appears to tank it well with 5950x


There is a YouTube vid showing the results of adding thermal pads on the stock ref backplate... hold on, I'll find it and edit this post

**edit


----------



## Bart

newls1 said:


> SO HAPPY FOR YOU BROTHER!!! Stoked  . Let us see those new "hotspot" temps when your all done. Alphacools instructions are AMAZING... you cant screw it up LOL


So far with my current OC (+2750 core / +2150 memory), the hot spot temp stays under 60C, GPU temps are under 40C. So far so good!


EDIT: after looping Time Spy Extreme stress tests, it looks like my max temps top out at 46C for GPU, 64C for junction temp. There's some serious heat in this loop now! God help it once I start tweaking the curve optimizer again!


----------



## ZealotKi11er

newls1 said:


> there is a youtube vid showing the results for adding thermal pads on the stock ref backplate... hold on ill find it and edit this post
> 
> **edit



The issue with this test: the stock card hits 1.175v while the other runs hit 1.13v. There is your hot spot difference.


----------



## newls1

Bart said:


> So far with my current OC (+2750 core / +2150 memory), the hot spot temp stays under 60C, GPU temps are under 40C. So far so good!
> 
> 
> EDIT: after looping Time Spy Extreme stress tests, it looks like my max temps top out at 46C for GPU, 64C for junction temp. There's some serious heat in this loop now! God help it once I starting tweaking the curve optimizer again!


Freaking awesome man, so happy it all worked out for ya. So I see your PSU issue is fixed? What was the problem?


----------



## Bart

newls1 said:


> freaking awesome man, so happy all worked out for ya. So I see your PSU issue is fixed? What was the problem?


I think it was a bum PCIe cable on the old PSU (Corsair RM850). I replaced that with an EVGA G5 850W; no issues at all so far. I'm currently testing it, but so far all the 3DMark stress tests are good, and some light gaming benchmarks are good (Horizon Zero Dawn, Red Dead 2, etc.). Hasn't missed a beat yet, and in games at the res I play at (3840 x 1200) temps are awesome, like 40C GPU / 50C hot spot. Now I want to see how far I can push that core, once I get my CPU re-dialed back in with the curve optimizer.


----------



## H3||scr3am

Mickey Padge said:


> Port Royal
> 
> View attachment 2480972
> 
> 
> I have found maxing memory lowers my score, setting mem at 2130 with the fast timings option seems to be the sweet spot!


I find memory gets unstable around there... 2125-2130, and the score starts dropping due to issues/corrections, I assume.


----------



## xR00Tx

danny9428 said:


> I wonder what you can do with the hotspot temp on reference cards.
> The gpu core temp is fine it looks but watching hotspot hits 110C when in MPT 370W+ is still kinda iffy to me
> 
> Anyone tried adding thermal pads at the back of the card covering the core and the backplate or maybe a fan blowing onto it?
> 
> Regarding to PSUs not able to run RDNA2 cards, I've read that these RDNA2 cards have crazy 20ms peak power spikes which outrivals even a power-bios modded 3090, though that appears to only happen when you attempt to fire up these cards in Furmark. So far my AX1000 appears to tank it well with 5950x


Hi,

I had a similar problem. Last week I got my Sapphire Nitro+ 6900 XT, and despite the GPU temperature being in the 60-65C range, the hotspot temperature was reaching 110-115C in benchmarks like Time Spy.

I disassembled the GPU, replaced the original thermal paste with Kryonaut, and now the temperatures are great (max hotspot temp ~85C)!

I am very satisfied with this video card!!! I was able to reach 21,950+ graphics points in Time Spy.
I got a water block from Bykski on AliExpress. I guess it should arrive within 15 days. Maybe I'll get a few more points!










I scored 20 906 in Time Spy

Intel Core i9-10900K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10

www.3dmark.com


----------



## Mickey Padge

H3||scr3am said:


> I find memory gets unstable around there... 2125-2130 and score starts dropping due to issues/corrections I assume..


Thanks for the info! Great score. On air my card was around 10C hotter, and under water it won't go past 2700 core in Port Royal. I have run the core up to 2750 in a few things, but it isn't viable if Port Royal crashes; definitely a good way to find out if you're stable or not.

Overall I'm happy with the modded block on the card. Temps are way lower, but my card is no golden sample, that's for sure! The noise reduction alone makes it a worthwhile effort.

It's also fun seeing my world record 3DMark scores.....

....with my Xeon 1680v2, haha!


----------



## Bart

The Alphacool Eisblock gets my official seal of approval; VERY impressed with this thing! It looks like my core tops out at a max of 2775MHz; 2800MHz is no bueno, even at 385W + 15%. But I'm happy with this. I think there are points to be squeezed out of my CPU, since my core tuning is rudimentary at best, but for having a rig with no Intel and no Nvidia, I'm quite happy with this!

Time Spy:


Time Spy Extreme:


Port Royal:


Oh and according to my UPS, Satan is assisting me while benchmarking. Hail Satan!!


LEDs on this thing are nice and bright too:


----------



## chispy

Mickey Padge said:


> Yes you wouldn't be getting those results otherwise, just really interested to see what the block is like on that card. And if it holds that core speed under RT load
> 
> I have a very unusual setup, retro style x79 with an Xeon 1680v2 CPU
> 
> View attachment 2480969
> 
> 
> My card doesn't like going much over 2700 on the core, it's an Asrock 6900xt Phantom OC, I modified a Corsair 6800/6900 xt water block to fit it. So I backed off the OC to max at 2666 or there abouts. Very happy with the results, seems an average card though, but my rig is set to run very quiet, 320/280 rad, fans hardly over 1000 rpm.
> 
> Also, ctrl + print screen, then ctrl + v into an image app


Hello there  , I have been looking everywhere for a full-cover water block for my ASRock 6900XT Phantom OC. Can you link me where to buy this water block, and how did you modify it to fit this non-reference 6900XT? Thanks.

Kind Regards: Chispy


----------



## H3||scr3am

Bart said:


> The Alphacool Eisblock gets my official seal of approval, VERY impressed with this thing! It looks like my core tops out at a max of 2775mhz core, 2800mhz is no bueno, even at 385W + 15%. But I'm happy with this. I think there's points to be squeezed out of my CPU since my core tuning is rudimentary at best, but for having a rig with no Intel and no Nvidia, I'm quite happy with this!


I'd be careful with those wattages... 385W + 15% is almost 450W (442.75W).

Each 8-pin is capable of delivering 150W and the PCIe slot another 75W by spec, which puts a 2x8-pin card at 375W; where it's pulling the extra ~68W from is anyone's guess, but it could cause issues if you're not careful... granted, the cables can generally provide more (250W+ each)... but still.
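The arithmetic in that warning can be checked in a couple of lines. The 150W/75W figures are the connector-spec limits mentioned above; the connector counts are just examples, not a claim about Bart's specific board:

```python
# Power-limit vs. connector-spec budget sanity check.
# Spec: 150 W per 8-pin PCIe connector, 75 W from the x16 slot.

def spec_budget(eight_pin_count: int) -> int:
    """Total board power allowed by connector spec alone, in watts."""
    return eight_pin_count * 150 + 75

configured = 385 * 1.15      # 385 W MPT limit plus the +15% slider
print(round(configured, 2))  # 442.75
print(spec_budget(2))        # 375 -> a 2x8-pin card exceeds its spec budget
print(spec_budget(3))        # 525 -> a 3x8-pin board has headroom on paper
```

As noted, decent cables carry well beyond 150W in practice; the spec numbers are just where the guarantees stop.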


----------



## newls1

H3||scr3am said:


> I'd be careful with those Wattages...385W + 15% is almost 450W (442.75)
> 
> each 8 Pin is capable of delivering 150W and the PCIE slot another 75W by spec... where it's pulling that extra 75W from is anyone's guess, but could cause issues if you're not careful... generally the cables can provide more (250W+/ea)... but still


another one with this reply  start @ 7:15....


----------



## Mickey Padge

chispy said:


> Hello there  , i have been looking everywhere for a full cover water block for my Asrock 6900xt Phantom OC , can you link me where to buy this water-block and how did you modify this water-block to fit this non-reference 6900xt , thanks.
> 
> Kind Regards: Chispy


Hi, the block is this one:






Hydro X Series XG7 RGB RX-SERIES GPU Water Block (6900 XT, 6800 XT)


The CORSAIR Hydro X Series XG7 RGB RX-Series GPU Water Block is a total conversion solution for your AMD Radeon™ RX-Series graphics card, unlocking the true potential of your GPU.




www.corsair.com





There is a reddit post here:


__
https://www.reddit.com/r/Amd/comments/kzder7

Asrock shares many reference design points, especially around the die, vrms, and memory 

Basically, there are four screws on the outer corner of the block, removing them will remove the black metal shroud.

Under that the block itself is quite small, and you will see the metal block has two small half circle cut outs for the two capacitors. The metal cut outs will not be enough for the Asrock capacitors, as they are longer caps than the reference card, I carefully shaved off some acrylic (closely matching the blocks two half circles) so the caps will fit (not too much, one is close to the gasket)

Then, pretty much all the holes in the block will align to the Asrock card, with the exception of some of the upper most ones (not needed for a tight mount anyway).

So after mounting the block, you will simply need to use the lower side for the tubing in/out, as the upper ports will be covered by the PCB.

It isn't that complicated, my card was a toaster even with the massive heatsink, now it is much, much cooler. See my temps in previous screen shots. My card needs more vcore I think, but I cannot give it any. Happy to run 24/7 at around 2650, quiet with low rpm fans 

Cheers!


----------



## Martin778

Bummer, lesson learned. I can't RMA my card without a PoP... though AMD states the card is fine. After repasting, the die temp dropped a bit, but the hotspot is still 95-100°C. In fact the die temp is amazing at 64°C @ 254W, or 67°C at ~276W.
Looking at how the card heats up from stone cold to 'working temperature', we can easily rule out cooler pressure issues. The behavior is exactly identical to pre-repasting.
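One way to put numbers on mount/paste quality like this is to log a run and look at the die-to-hotspot delta over time. A minimal sketch; the column names you pass in are assumptions that need to match whatever your HWiNFO (or GPU-Z) CSV log actually emits:

```python
# Compute the worst die-to-hotspot delta from a logged sensor CSV.
import csv

def max_delta(csv_path: str, die_col: str, hot_col: str):
    """Return the maximum (hotspot - die) delta seen in the log, or None."""
    deltas = []
    with open(csv_path, newline="", encoding="utf-8", errors="ignore") as f:
        for row in csv.DictReader(f):
            try:
                deltas.append(float(row[hot_col]) - float(row[die_col]))
            except (KeyError, ValueError, TypeError):
                continue  # skip footer/summary rows the logger appends
    return max(deltas) if deltas else None
```

As a rough guide from reports in this thread: a delta around 15-20C under load reads as a good mount, while one pinned above ~30C points at paste or cold-plate contact problems.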


----------



## chispy

Mickey Padge said:


> Hi, the block is this one:
> 
> 
> 
> 
> 
> 
> Hydro X Series XG7 RGB RX-SERIES GPU Water Block (6900 XT, 6800 XT)
> 
> 
> The CORSAIR Hydro X Series XG7 RGB RX-Series GPU Water Block is a total conversion solution for your AMD Radeon™ RX-Series graphics card, unlocking the true potential of your GPU.
> 
> 
> 
> 
> www.corsair.com
> 
> 
> 
> 
> 
> There is a reddit post here:
> 
> 
> __
> https://www.reddit.com/r/Amd/comments/kzder7
> 
> Asrock shares many reference design points, especially around the die, vrms, and memory
> 
> Basically, there are four screws on the outer corner of the block, removing them will remove the black metal shroud.
> 
> Under that the block itself is quite small, and you will see the metal block has two small half circle cut outs for the two capacitors. The metal cut outs will not be enough for the Asrock capacitors, as they are longer caps than the reference card, I carefully shaved off some acrylic (closely matching the blocks two half circles) so the caps will fit (not too much, one is close to the gasket)
> 
> Then, pretty much all the holes in the block will align to the Asrock card, with the exception of some of the upper most ones (not needed for a tight mount anyway).
> 
> So after mounting the block, you will simply need to use the lower side for the tubing in/out, as the upper ports will be covered by the PCB.
> 
> It isn't that complicated, my card was a toaster even with the massive heatsink, now it is much, much cooler. See my temps in previous screen shots. My card needs more vcore I think, but I cannot give it any. Happy to run 24/7 at around 2650, quiet with low rpm fans
> 
> Cheers!


Thank you for the answer, awesome news! It's out of stock everywhere, but I don't mind waiting for it to come back in stock  . Also, it seems EK is doing an EK-FC block for the ASRock 6900XT/6800XT Taichi, which uses the same exact PCB as this one. Will update once I find and install a suitable full-cover water block. Thank you for the help, appreciate it.

Kind Regards: Chispy


----------



## chispy

chispy said:


> Thank you for the answer , awesome news ! Is out of stock everywhere but i dont mind waiting for it to come back in stock  , also it seems ek is doing an ek fc block for the Asrock 6900xt-6800xt Taichi wish uses the same exact pcb as this one. Will update once i find and install a suitable full cover water block. Thank you the help , appreciate it.
> 
> ** ninja edit ** Found one in stock at Amazon , the last lonely one they had  , i just bought it. I will update once i get it installed.
> 
> Kind Regards: Chispy


----------



## newls1

chispy.... does Alphacool offer a block for you by chance? I know in desperate situations we have to get what we can get, but EK's quality and especially customer service have gone WAY DOWN in the past years.... As long as I can help it, I will not be spending my money with them.


----------



## Martin778

Same (bad) experience with EK, including a 360 Predator AIO that flooded a 7980XE rig. I think nowadays I'd trust a Chinese Bykski block more.

By the way, I think I found part of the problem with the temps on reference 6900XTs... they seem to dislike vertical mounting, especially 90°, like in the NZXT H1.
When I flip my case horizontal, the die temp rises but the hotspot decreases. Now the delta is <20°C.

On the left screenshot it had been running Heaven for 2 hours flat. Now even on the full power slider it doesn't touch 100°C hotspot. Wow. Before that, ramping the fans up to 100% wouldn't do anything to the hotspot temp either.

No doubt there is something going on with the vapor chamber when the card is mounted 90° vertically. After stopping the bench, both die and hotspot also decrease their temps much more slowly, and the delta is ~3°C when cooling down.
To be frank, there should be an official statement on this from AMD, as *these cards really, really dislike 90° mounts.*


----------



## Mickey Padge

Do we actually know what the hot spot temp is? My card isn't hot at all under load (at least the back) but something is reading as 70c+ (was 110c+ on stock air)


----------



## Mickey Padge

Alphacool have stated on their forums there will be no asrock block support 

Look forward to seeing your results! I really should have taken a few pics along the way of the block etc, but I kinda get a little one track minded when tinkering with my PC lol


----------



## The EX1

Martin778 said:


> Same (bad) experience with EK, including a 360 predator AiO that flooded a 7980XE rig. I think nowadays I'd trust a chinese Bykski block more.
> 
> By the way, I think I found a part or the problem with the temps on reference 6900XT...they seem to dislike vertical mounting, especially 90 deg. like NZXT H1.
> When I flip my case horizontally, the Die temp rises but hotspot decreases. Now the delta is <20*C.
> 
> On the left screenshot it has been running Heaven for 2 hours flat. Now even on full power slider it doesn't touch 100*C hotspot. Wow. Before that, ramping the fans up to 100% wouldn't do anything to hotspot temp either.
> 
> No doubt there is something going on with the vapor chamber when the card is mounted 90 deg vertically. After stopping the bench, both die and hotspot decrase their temp much slower aswell and the delta is ~3*C when cooling down.
> To be frank, there should be an official statement on this from AMD as *these cards really, really dislike 90 deg mounts.*


93c hotspot on the reference cooler at that low of fan speed is actually really good. Vapor chamber orientation has been an issue for a while. Especially on my Titan Xp.


----------



## Martin778

Yes, it surely is a nice result. The card now also boosts a lot higher! Time to do some 3DMark comparisons, as previously it would get beaten by a 6800XT...
Now we're getting somewhere with MPT and some tweekz. Even when hitting 385W(!) it still manages a sub-105°C hotspot, albeit at 100% fan speed, just for the duration of 3DMark.
Max clock slider set to 2600MHz, memory slider maxed out at 2150MHz with fast timings.
20,435 graphics score in Time Spy (19,441 overall).


----------



## Bart

H3||scr3am said:


> I'd be careful with those Wattages...385W + 15% is almost 450W (442.75)
> 
> each 8 Pin is capable of delivering 150W and the PCIE slot another 75W by spec... where it's pulling that extra 75W from is anyone's guess, but could cause issues if you're not careful... generally the cables can provide more (250W+/ea)... but still


I'm not too worried. Those are just very high ceilings we're setting; we're not forcing the GPU to actually eat that much power. It may draw a lot at certain moments, but those are peaks; overall power draw is well under 400W. Plus it's really only benchmarks that drive those numbers up. Actual gaming is WAY easier on the GPU, and the numbers are WAY lower during, say, the Horizon Zero Dawn benchmark.


----------



## Martin778

It's all fun and games until you start an RTX game like Crysis Remastered; that one ate every watt of my 335W power limit set in MPT. Time Spy and Fire Strike don't even come close 
The RTX 3090 Suprim X that I had before would also eat up every available watt, pulling 450W flat in Quake II RTX.

3DMark is getting long in the tooth, with Time Spy approaching the 5-year mark; I expect more and more games will hit the card harder with all the visual gimmicks.
Especially if you're running a 100Hz+ refresh rate, the card will be pushed to 99% utilization constantly.


----------



## newls1

Has anyone cracked a way to increase voltage yet? I want to pass Time Spy @ 2800MHz but need more voltage.


----------



## CS9K

newls1 said:


> has anyone cracked a way to increase voltage yet? I want to pass timespy @ 2800MHz but need more voltage


Nope. All leads over on Igor's Lab forums have come up fruitless. Hellm is working on it as best they can.


----------



## ZealotKi11er

CS9K said:


> Nope. All leads over on Igor's Lab forums have come up fruitless. Hellm is working on it as best they can.


AMD did such a good job.


----------



## CS9K

ZealotKi11er said:


> AMD did such a good job.


We _could_ have encrypted BIOSes. Pick your poison


----------



## ZealotKi11er

CS9K said:


> We _could_ have encrypted bios'es. Pick your poison


It's like Nvidia's 3060: it's a vBIOS/FW/driver handshake.


----------



## HeLeX63

newls1 said:


> has anyone cracked a way to increase voltage yet? I want to pass timespy @ 2800MHz but need more voltage


Can't even get Time Spy to run at 2700MHz set in the AMD software; even 2640MHz seems to crash. But it's game stable at 2720MHz sustained (2750MHz set in the AMD software).


----------



## danny9428

Martin778 said:


> Same (bad) experience with EK, including a 360 predator AiO that flooded a 7980XE rig. I think nowadays I'd trust a chinese Bykski block more.
> 
> By the way, I think I found a part or the problem with the temps on reference 6900XT...they seem to dislike vertical mounting, especially 90 deg. like NZXT H1.
> When I flip my case horizontally, the Die temp rises but hotspot decreases. Now the delta is <20*C.
> 
> On the left screenshot it has been running Heaven for 2 hours flat. Now even on full power slider it doesn't touch 100*C hotspot. Wow. Before that, ramping the fans up to 100% wouldn't do anything to hotspot temp either.
> 
> View attachment 2481044
> View attachment 2481045
> 
> 
> No doubt there is something going on with the vapor chamber when the card is mounted 90 deg vertically. After stopping the bench, both die and hotspot decrase their temp much slower aswell and the delta is ~3*C when cooling down.
> To be frank, there should be an official statement on this from AMD as *these cards really, really dislike 90 deg mounts.*


I wonder if, the majority of the time the hotspot temp breaches 90 or even 100, it's not actually the GPU die but rather the back of the GPU core, where tons of those tiny little ceramics lie that receive no cooling other than case airflow (at least that is the case with the reference board; the backplate doesn't cover that part).


----------



## Mickey Padge

The back of my 6900XT card isn't hot at all compared to the oven that was my 1080 Ti, even though that was water cooled too!


----------



## Josef997

Hi guys, 

I want to ask about my 6900 XT temps. I changed to water cooling with an EK water block. On the stock cooler it was 63-64C at full speed during games (not sure about the junction temp), but now it's 51-53C and the junction is 64C. Is that normal under water cooling?


----------



## Mickey Padge

Josef997 said:


> Hi Guys,
> 
> I want to ask about the 6900 XT temp i change to water cooling EK water block stock was under full speed during the game 63-64 not sure about the junction temp, but now 51-53c and Junction is 64c is that normal under water cooling ?


Depending on what clocks and settings, ambient temps/rad volume etc, sounds decent enough to me, if those are well loaded tests...


----------



## chispy

newls1 said:


> chispy.... does alphacool offer a block for you by chance? I know in desperate situations we have to get what we can get, but EK's quality and especially customer service have gone WAY DOWN in the past years.... As long as Ican help it, i will not be spending my money with them.


Hi there, no, Alphacool will not be doing a block for the custom-PCB ASRock 6900 XT, sadly. I'm with you and totally agree EK has gone downhill; even Bykski water block quality is better than EK's now.



Mickey Padge said:


> Alphacool have stated on their forums there will be no asrock block support
> 
> Look forward to seeing your results! I really should have taken a few pics along the way of the block etc, but I kinda get a little one track minded when tinkering with my PC lol


Hello, yes I read that on their forums already :/ . Sadly there are no full-cover water blocks for this non-reference ASRock PCB; I guess there aren't many of us, so we have to make do with what we can find. Thank you for helping me find a full-cover block that would fit this card, I appreciate your help a lot and will surely post feedback once I have it installed. Since I had already maxed out this GPU for extreme overclocking at sub-zero temps, with hardware mods and an EVC2 installed, I reversed everything and it's now back to stock configuration; I uninstalled all the hardware mods and the EVC2 and cleaned her up real good, she looks brand new again  . It will be used for gaming only now, no more v-mods, tinkering or overclocking, just pure gaming on it from now on.



newls1 said:


> has anyone cracked a way to increase voltage yet? I want to pass timespy @ 2800MHz but need more voltage


Hi, sadly the only way to add voltage is through hardware volt mods: adding an EVC2 to be able to control all the voltages and increase the power limit. Sadly it will void the warranty, as the soldering points always leave traces; no matter how well you clean them up, it is easy to tell the card has been hardware v-modded.


----------



## Josef997

Mickey Padge said:


> Depending on what clocks and settings, ambient temps/rad volume etc, sounds decent enough to me, if those are well loaded tests...


Usually I OC the GPU to 2730+ depending on the game; some games no more than 2500+, some 2645, etc. I have two rads, a 240mm and a 360mm, with 9 fans at full speed (1900 RPM each), plus dual tanks for extra capacity. Ambient temp is 20-24C, and under heavy load the loop hits 34C. The thing is, I'm not sure about the junction temp being 10C+ above the GPU temp; maybe I missed something when I screwed down the block, or it's fine and these are normal temps.


----------



## Mickey Padge

Looking forward to seeing your results. My loop cools my Xeon 1680v2 and 6900XT, 360/280mm rads with a D5 pump/res, fans run between 900-1100.










These are my bench temps from one of my best Port Royal runs. Seems the VDDC temp is a little high comparatively speaking, maybe because there's a little less pressure in an area where an extra block screw would normally be? Still way lower than air temps.


----------



## Martin778

danny9428 said:


> I wonder if, the majority of the time when the hotspot temp breaches 90 or even 100, it's not actually the GPU die but rather the back of the GPU core, where tons of those tiny little ceramics sit that receive no cooling other than case airflow (at least that is the case with the reference board; the backplate doesn't cover that part).


These caps are already partly discolored on my card, but in a weird way, would you believe. 110*C is by far not enough to cause such discoloration so quickly. I've also had my hand on them and, while they're hot, I measured ~62*C with an IR thermometer; putting my finger on them confirmed it (no sizzling  )


----------



## Mickey Padge

Josef997 said:


> Usually I OC the GPU to 2730+ depending on the game; some games no more than 2500+, some 2645, etc. I have two rads, a 240mm and a 360mm, with 9 fans at full speed (1900 RPM each), plus dual tanks for extra capacity. Ambient temp is 20-24C, and under heavy load the loop hits 34C. The thing is, I'm not sure about the junction temp being 10C+ above the GPU temp; maybe I missed something when I screwed down the block, or it's fine and these are normal temps.


+10c hotspot is great. On air my card was average +35c hotspot, now it's +20c under water. Every card is different. I run mine now with a slight undervolt (1150mV) at a real 2600+- core speed. Keeps temps where I want, and I know it's stable. Pushing higher is pointless for the small % gain in everyday usage. Port royal I have never completed a bench with an average core over 2700, so that was my stability check, lowering clocks until stable, then lowering volts to find the sweet spot 

I'm not running push/pull on my rads, so only five fans plus one case fan, I really want silence more than anything else.


----------



## Josef997

Mickey Padge said:


> +10c hotspot is great. On air my card was average +35c hotspot, now it's +20c under water. Every card is different. I run mine now with a slight undervolt (1150mV) at a real 2600+- core speed. Keeps temps where I want, and I know it's stable. Pushing higher is pointless for the small % gain in everyday usage. Port royal I have never completed a bench with an average core over 2700, so that was my stability check, lowering clocks until stable, then lowering volts to find the sweet spot
> 
> I'm not running push/pull on my rads, so only five fans plus one case fan, I really want silence more than anything else.


Yes, so I guess it's a normal temp and everything is set up as needed; there's nothing more we can do except accept the fact that these cards run really hot at this point. I have a 2080 Ti and it never crossed 42C during games (even after a BIOS update it was 48C), while this card on the stock BIOS is 44-52C lol. The high fan speed is fine; when you're playing you never hear anything. Is there a custom BIOS releasing soon for these cards?


----------



## Mickey Padge

Josef997 said:


> Yes, so I guess it's a normal temp and everything is set up as needed; there's nothing more we can do except accept the fact that these cards run really hot at this point. I have a 2080 Ti and it never crossed 42C during games (even after a BIOS update it was 48C), while this card on the stock BIOS is 44-52C lol. The high fan speed is fine; when you're playing you never hear anything. Is there a custom BIOS releasing soon for these cards?


Haven't seen any reason for BIOS mods, MorePowerTool works for raising the power limit and undervolting, good enough for me. If they have voltage changes that work? Be a game changer for sure


----------



## Josef997

Mickey Padge said:


> Haven't seen any reason for BIOS mods, MorePowerTool works for raising the power limit and undervolting, good enough for me. If they have voltage changes that work? Be a game changer for sure


How do you raise the power limit, and which tool is available to increase it? The wattage is stuck at 300 (293W in reality, lol); I haven't found anything yet.


----------



## Mickey Padge

Josef997 said:


> How do you raise the power limit, and which tool is available to increase it? The wattage is stuck at 300 (293W in reality, lol); I haven't found anything yet.


Use the GPU-Z app to extract your GPU ROM (Navi 21.rom). Load the ROM into the MorePowerTool app, click "Power and Voltage", and change "Power Limit GPU" to 350 or similar. Remember you will add more power % on top with the AMD OC tool 

Not much else works in the MPT app, so I leave well alone. You can undervolt with "Maximum Voltage GFX"


----------



## Josef997

Mickey Padge said:


> Use GPU-Z app to extract your GPU rom (Navi 21.rom). Load rom into MorePowerTool app, click "power and voltage", change "power limit GPU" to 350 or similar. Remember you will add more power % on top with AMD OC tool
> 
> Not much else works in MPT app, I leave well alone. You can undervolt with "Maximum Voltage GFX"


Yeah, how did I miss this part? How can I re-flash the BIOS back to the GPU with the power limit edited? I don't have any issues with the voltage, but increasing the power limit to 350W should give much better performance.


----------



## Mickey Padge

Josef997 said:


> Yeah, how did I miss this part? How can I re-flash the BIOS back to the GPU with the power limit edited? I don't have any issues with the voltage, but increasing the power limit to 350W should give much better performance.


To be clear, you are not editing the BIOS directly on the card; this is done in the Windows registry, so the changes will be removed by a driver update 

To delete your settings, select "Delete SPPT"; to set them in the first place it's "Write SPPT"


----------



## Josef997

Mickey Padge said:


> To be clear, you are not editing the BIOS directly on the card; this is done in the Windows registry, so the changes will be removed by a driver update
> 
> To delete your settings, select "Delete SPPT"; to set them in the first place it's "Write SPPT"


Okay, very clear. So it's just editing the Windows registry, which keeps us away from any real issues in the future, and after each driver update the settings need to be re-applied. I got it; after "Write SPPT" you need to restart.


----------



## Mickey Padge

Josef997 said:


> Okay, very clear. So it's just editing the Windows registry, which keeps us away from any real issues in the future, and after each driver update the settings need to be re-applied. I got it; after "Write SPPT" you need to restart.


Exactly


----------



## Josef997

Mickey Padge said:


> Exactly


I did it and tried it; it reached 350W easily. The temps were nice with the extra 50W, the GPU was 51C, but the junction jumped to 68-72C. I think the backplate with its thermal pad needs something extra; something is missing, not sure what.


----------



## CS9K

Josef997 said:


> I did it and tried it; it reached 350W easily. The temps were nice with the extra 50W, the GPU was 51C, but the junction jumped to 68-72C. I think the backplate with its thermal pad needs something extra; something is missing, not sure what.


I have the EK block and backplate (w/thermal pads) on my reference RX 6800 XT. Under full synthetic load w/350W limit in MPT, during continuous testing, 55C/70C are what my core/hotspot temperatures top out at. Your temperatures are completely normal from my experience, and from what I've seen around the 'net.


----------



## Josef997

CS9K said:


> I have the EK block and backplate (w/thermal pads) on my reference RX 6800 XT. Under full synthetic load w/350W limit in MPT, during continuous testing, 55C/70C are what my core/hotspot temperatures top out at. Your temperatures are completely normal from my experience, and from what I've seen around the 'net.


Then at least I'm not alone in this; it's very clear it's correct then lol. I kept the power limit at 350W with 0% extra power limit since I don't want to exceed it. I know the card can give more, but I will stay within the limit until I see a real effect and benefit in gaming as well. 

Thanks a lot.


----------



## CS9K

Josef997 said:


> Then at least I'm not alone in this; it's very clear it's correct then lol. I kept the power limit at 350W with 0% extra power limit since I don't want to exceed it. I know the card can give more, but I will stay within the limit until I see a real effect and benefit in gaming as well.
> 
> Thanks a lot.


Yes, you are good! For the record, I have MPT set to 300W and 350A. With +15% in MPT, that comes out to 345W total board power.
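For anyone wanting to sanity-check that math: total board power is just the MPT base limit scaled up by the Wattman percentage slider. A minimal sketch (the function name is mine, not an MPT term):

```python
def total_board_power(mpt_limit_w: float, wattman_pct: float) -> float:
    """Board power after the Wattman +% slider is applied on top of the MPT base limit."""
    return mpt_limit_w + mpt_limit_w * wattman_pct / 100

# 300 W set in MPT with +15% in Wattman:
print(total_board_power(300, 15))  # -> 345.0
```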


----------



## Josef997

CS9K said:


> Yes, you are good! For the record, I have MPT set to 300W and 350A. With +15% in MPT, that comes out to 345W total board power.


That's a good idea! I also set the TDC to 370. Maybe I will keep it at 350W; it works out the same way, I think, so there's no need to change to 300W + 15%, it's the same idea.


----------



## alexp247365

Pinning all hopes that this fixes the sporadic crashing issues -


----------






## HeLeX63

Josef997 said:


> Hi Guys,
> 
> I want to ask about 6900 XT temps. I changed to an EK water block; on the stock cooler, with the fans at full speed during games, it was 63-64C (not sure about the junction temp), but now it's 51-53C with a 64C junction. Is that normal under water cooling?


With my Alphacool 6900XT Red Devil Waterblock, My temps hover around 45C and 65C junction. Previous gaming temps under similar conditions with air cooler was around 68C and 95C junction


----------



## Josef997

HeLeX63 said:


> With my Alphacool 6900XT Red Devil Waterblock, My temps hover around 45C and 65C junction. Previous gaming temps under similar conditions with air cooler was around 68C and 95C junction


Yes, it's 45-47C in some games; in other games that also load the CPU ~40%, which adds heat to the loop, it's 49-52C with the junction about the same. But 95C at stock is way too hot; good thing we have water cooling lol.


----------



## skline00

HeLeX63 said:


> With my Alphacool 6900XT Red Devil Waterblock, My temps hover around 45C and 65C junction. Previous gaming temps under similar conditions with air cooler was around 68C and 95C junction


Sounds about right. I have the "little brother" RX 6800 under an EK block and backplate, and at stock, running the AIDA64 GPU stress test for ~20 minutes, I record 43-44C core and 59-61C hotspot.

You are running the Big Boy so those temps sound fine.


----------



## Josef997

One thing I was trying: the extra 50W doesn't seem to be working well. The screen hangs during games and it's not stable with the extra wattage, even though it doesn't even touch 350W in-game; it needs 270W to 315W max, though some games do need 350W. I tried Call of Duty (2019) and it was not stable at all, so I returned it to normal. Is there something else I can do? 

On performance, though, these cards are really amazing. I have a 4K 160Hz monitor, and with this card I can easily hit 180+ FPS in some games with the GPU OC'd from 2600 to 2700, and it's really stable with good performance!


----------



## CS9K

Josef997 said:


> One thing I was trying: the extra 50W doesn't seem to be working well. The screen hangs during games and it's not stable with the extra wattage, even though it doesn't even touch 350W in-game; it needs 270W to 315W max, though some games do need 350W. I tried Call of Duty (2019) and it was not stable at all, so I returned it to normal. Is there something else I can do?
> 
> On performance, though, these cards are really amazing. I have a 4K 160Hz monitor, and with this card I can easily hit 180+ FPS in some games with the GPU OC'd from 2600 to 2700, and it's really stable with good performance!


With the hard-limit of 1.150V/1.175V for the RX 6800 XT and RX 6900 XT, respectively, the core clock can only go SO far. Some games may -seem- stable at a higher core clock, but different games and benchmarks load the GPU differently, so you may not see every game reach the 350W limit. Nothing is wrong with the GPU, so long as clock speeds are going up to what you set.
Some advice when overclocking: Try many different benchmarks. Find the one that crashes at the lowest maximum clock speed. Get -that- benchmark stable, and leave your global maximum clock speed set to that level, and no higher. While some games -seem- stable at faster clock speeds, they WILL crash eventually.
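CS9K's rule above (get the most demanding benchmark stable and let it set the global cap) boils down to taking the minimum over per-benchmark stable clocks. A toy sketch; the benchmark names and MHz values are made up for illustration:

```python
# Hypothetical maximum stable core clocks (MHz), found per benchmark by
# lowering the clock until each one stops crashing.
stable_clocks = {
    "Time Spy": 2650,
    "Port Royal": 2600,   # crashes at the lowest clock, so it governs
    "Superposition": 2700,
}

# The global maximum clock should be the worst case across all of them:
global_max_clock = min(stable_clocks.values())
print(global_max_clock)  # -> 2600
```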


----------



## Josef997

CS9K said:


> With the hard-limit of 1.150V/1.175V for the RX 6800 XT and RX 6900 XT, respectively, the core clock can only go SO far. Some games may -seem- stable at a higher core clock, but different games and benchmarks load the GPU differently, so you may not see every game reach the 350W limit. Nothing is wrong with the GPU, so long as clock speeds are going up to what you set.
> Some advice when overclocking: Try many different benchmarks. Find the one that crashes at the lowest maximum clock speed. Get -that- benchmark stable, and leave your global maximum clock speed set to that level, and no higher. While some games -seem- stable at faster clock speeds, they WILL crash eventually.


Yes, usually the OC is fine within the 2500-2600 range for Call of Duty, for example, and 2700-2800 in other games with no crashes, a stable OC. The issue I was having was when I increased the power limit to 350W; that created issues in games even at the same OC, and I'm not sure why, since the wattage used was the same (sometimes up to 310W). The real reason I'm still not sure about.


----------



## alexp247365

If MPT only adjusts registry settings, has the registry setting for voltage not been found yet? Or, is it hard locked on the card? These cards seem like they could do so much more with a little extra voltage! It appears my card is voltage limited at ~ 2750mhz in some benchmarks. Getting real world stable 2750mhz would be nice, though.


----------



## CS9K

alexp247365 said:


> If MPT only adjusts registry settings, has the registry setting for voltage not been found yet? Or, is it hard locked on the card? These cards seem like they could do so much more with a little extra voltage! It appears my card is voltage limited at ~ 2750mhz in some benchmarks. Getting real world stable 2750mhz would be nice, though.


It's more complicated than that. Much like Nvidia's "new" 3-way handshake, AMD implemented something similar with the 5000 series. 

For my 5600 XT to get around the clock and voltage limits:

- Someone had to write a BIOS that circumvented the BIOS signature check built into the board
- Someone had to ensure MPT and RBE could modify the BIOS itself
- Someone had to figure out how to trick the drivers into thinking the RX 5600 XT was an RX 5700, so that the clock/voltage limits could be raised.

There are a lot of moving parts, and so far the good folks over at Igor's Lab haven't found a way around the limits for the 6000 series... yet.


----------



## alexp247365

Thanks for the detailed response, CS9K. + Rep!


----------



## newls1

alexp247365 said:


> If MPT only adjusts registry settings, has the registry setting for voltage not been found yet? Or, is it hard locked on the card? These cards seem like they could do so much more with a little extra voltage! It appears my card is voltage limited at ~ 2750mhz in some benchmarks. Getting real world stable 2750mhz would be nice, though.


this 100%


----------



## H3||scr3am

So, I put my build into the case (Lian Li O11 Dynamic Mini) and have had nothing but issues since... my previously stable card OC settings are crashing left and right, same with the RAM timings... so I backed off the RAM timings, and that at least allowed the benchmarks to run again... but many more Wattman crashes later, I've seen in HWiNFO that I'm tripping current throttling on the GPU. I hadn't seen/noticed this before; any suggestions on finding what's tripping it?

I also removed the backplate, and have the entire system mounted Vertically now, so the GPU is Horizontal... might be an airflow/heat dissipation issue? I've gotten it going again with a bit more voltage (I had it at 1090mv, so I've ramped it up to 1100 for now and it's completing timespy now)

has anyone else had issues like this?


----------



## Martin778

Reference design? Check your hotspot temperatures!
They do not tolerate inverted mounting, and maybe vertical mounting on a riser as well. My card was doing exactly the same when mounted with the video outputs on top.


----------



## H3||scr3am

Martin778 said:


> Reference design? Check your hotspot temperatures!
> They do not tolerate inverted mounting, maybe vertical mount on a riser aswell. My card was doing exactly the same when mounted with video outputs on top.


Basically the entire build was on an open bench with the motherboard horizontal, and now I've moved it to the case and it's vertical... nothing is upside down, it's your typical tower build in terms of hardware mounting. It'll run Port Royal at my old clock settings, but not Time Spy any longer... odd. According to HWiNFO, after a Port Royal run the hotspot maxed at 103*C; looking back at an older HWBot submission from when it was on the bench, the hotspot was reported as 100*C after a Port Royal run. So I've picked up a couple of degrees between changing the orientation and removing the backplate... guess I'll throw the backplate back on and see if I can pull through a Time Spy at my old stable clocks...


----------



## Bart

H3||scr3am said:


> I've seen in HWInfo that I'm tripping Current throttling on the GPU, I haven't seen/noticed this before, any suggestions on finding whats tripping it?


I saw the same thing when I was at stock. You can only OC so much before you see that current throttling. I _think_ you start seeing it when your clocks get high enough to need more power. Not sure if it's exactly that simple, but the stock limit of 272W is very limiting clock-wise.


----------



## Martin778

Placed thermal pads (standard-quality blue ones) on all 4 sides around the memory and PCB, just making cutouts for the big caps.
These are the results. Please note these are NOT stock settings; the PPT is set to 335W @ 2550MHz. The fan was set to a fixed ~1830RPM with a 1000RPM Noctua blowing over the card.

The first screen is the situation before; as you can see, the results are in fact worse... except for a 2*C better die temp. Yes, the pads do make contact, as the backplate heats up significantly now.
Don't pay too much attention to the current GPU-Z temperatures, as Heaven uses different scenes that result in slightly different load at a given moment.
My conclusion would be that the increased PPT + OC saturates the cooler to the point where it starts heating up other components.


----------



## majestynl

H3||scr3am said:


> So, I put my build into the case (Lian Li O11-Dynamic Mini) and have had nothing but issues since... my previously stable Card OC settings, are crashing left and right, same with RAM timings... so I backed off the RAM timings, and that at least allowed the benchmarks to run again... but many more wattman crashes later... I've seen in HWInfo that I'm tripping Current throttling on the GPU, I haven't seen/noticed this before, any suggestions on finding whats tripping it?
> 
> I also removed the backplate, and have the entire system mounted Vertically now, so the GPU is Horizontal... might be an airflow/heat dissipation issue? I've gotten it going again with a bit more voltage (I had it at 1090mv, so I've ramped it up to 1100 for now and it's completing timespy now)
> 
> has anyone else had issues like this?


You are not using riser cables, right? I had many issues in the first days after I installed my card, because the mobo's PCIe "auto configuration" set it to PCIe Gen4, and if your cable isn't compatible you get weird issues! I needed to manually set it to PCIe Gen3 to work properly.

I'm now using a PCIe Gen4 compatible riser cable without any issue. It was 80 euro 

Anyway, even if you are not using a riser cable, just try what happens when you manually set it to PCIe Gen3.


----------



## Josef997

majestynl said:


> You are not using riser cables right ? I had many issues when I installed my card the first days because of the Mobo PCIe "auto configuration" sets it to PCIe Gen4. And if your cable isn't compatible you get weird issues. ! I needed to manually set it to PCIe Gen3 to work properly.
> 
> I'm now using a PCIe Gen4 compatible riser cable without any issue. It was 80 euro
> 
> Anyways. If you are not using a riser cable again just try what happens when manually set to PCIe Gen3.


That's a good idea. I'm now using a Gen3 riser cable; I will replace it with a Gen4 one, but very few Gen4 cables are available. LINKUP, I think, is the name of the company; I need to look into it more. And yeah, the price depends on the country and taxes; here it's 40 euro when I convert the currency lol.

Thanks for the idea; it could be the reason it's not taking the extra 50W from 300 to 350W.


----------



## delgon

1. "so I backed off the RAM timings, and that at least allowed the benchmarks to run again..." — this is concerning. I mean, if everything is stable, it should not matter. Are you sure your RAM OC is stable? Anything below 400% MemTest coverage is not reliable (400% is still too low for me personally, but well).

Can you show where it says it is current throttling? Remember that you have 2 main current "eaters" that can be seen in HWiNFO: Core and SOC. I had no problem with SOC current (unless you are mining, then it can easily hit the SOC current limit), but you can easily solve it by reducing the SOC max voltage in MPT; it should not impact performance that much. To reach the default core current limit of 320A you would need to draw about 376W (320A * 1.175V) on the core alone.
You are much more likely to hit the power limit than the current limit by far.

Edit: It is even more than that, as the power limit sliders also increase the current limits accordingly.
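delgon's figure follows from P = I x V: the stock 320 A core TDC at the 1.175 V cap corresponds to roughly 376 W on the core rail alone. A quick sketch of that sanity check (the function name is mine):

```python
def power_at_tdc(tdc_a: float, vmax_v: float) -> float:
    """Core-rail power (W) needed to actually reach the current (TDC) limit."""
    return tdc_a * vmax_v

# Stock RX 6900 XT core TDC of 320 A at the 1.175 V hard voltage cap:
print(round(power_at_tdc(320, 1.175), 1))  # -> 376.0
```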


----------



## majestynl

Josef997 said:


> That's a good idea. I'm now using a Gen3 riser cable; I will replace it with a Gen4 one, but very few Gen4 cables are available. LINKUP, I think, is the name of the company; I need to look into it more. And yeah, the price depends on the country and taxes; here it's 40 euro when I convert the currency lol.
> 
> Thanks for the idea; it could be the reason it's not taking the extra 50W from 300 to 350W.


Yeah, I bought that LINKUP one. Working great. But to rule it out, you can try manually setting it to PCIe Gen3 for the time being.


----------



## H3||scr3am

majestynl said:


> You are not using riser cables right ? I had many issues when I installed my card the first days because of the Mobo PCIe "auto configuration" sets it to PCIe Gen4. And if your cable isn't compatible you get weird issues. ! I needed to manually set it to PCIe Gen3 to work properly.
> 
> I'm now using a PCIe Gen4 compatible riser cable without any issue. It was 80 euro
> 
> Anyways. If you are not using a riser cable again just try what happens when manually set to PCIe Gen3.


No riser currently; will try that, thanks... it could also maybe be that it needs a support bracket or something, I was thinking, too... idk... little gremlin... I will check whether laying the case down (same orientation as on the bench) fixes the issue... that'd be a new one...



delgon said:


> 1. "so I backed off the RAM timings, and that at least allowed the benchmarks to run again.. " this is concerning? I mean, if everything is stable, it should not matter. Are you sure your RAM OC is stable? Anything below 400% MemTest is not reliable (400% is still too low for me personally but well).
> 
> Can you show when it says it is current throttling? Remember that you have 2 main current "eaters" that can be seen in the HWInfo, Core, and SOC. I had no problem with SOC current (unless you are mining, then it can easily hit SOC current limit) but you can easily solve it by reducing the SOC max voltage in MPT. It should not impact the performance that much. To reach the default Core current of 320A you would need to draw like 376W (320A * 1.175v) on Core alone.
> You are much more likely to hit the Power limit than the Current by far.
> 
> Edit: It is even more as Power Limit sliders also increase the Current limits accordingly.


For the RAM OC I mostly just threw voltage at it to try and run it quicker... the kit is only validated for 3200MHz (16-20-20-20) at 1.35V. I had it up at 3600 16-20-20-20 with 1.35V but that was wonky, and then found it'd do 3566 at that voltage-ish... never did any RAM benches though; I've mostly focused on the GPU at this point, as I'm waiting for a 5950X CPU...

my MPT settings are set for 335W + 10% will try to get a screeny of the issue when I catch it next, just need to find time to do some more benching...


----------



## CS9K

H3||scr3am said:


> the kit is only validated for 3200Mhz (16-20-20-20) at 1.35v I had it up at _3600_ 16-_20-20-20_ _with 1.35_


Right there is a possible source of your issues... 3200 16-20-20 and 18-22-22 kits are the lower-end bins of whatever chips made it onto the boards. I wouldn't be surprised if tRCD 20 and tRP 20 threw memtest errors at anything above 3200. Of the three timings, tCL is the only one that scales with voltage; tRCD and tRP scale with speed. Worth a run through memtest to see what comes up.


----------



## delgon

Ugh, just switching from 3200 to 3600 on low-end RAM is "bad".
Please run at least 100-400% of HCI MemTest so you know it is not unreasonably unstable. Unstable RAM can lead to many instabilities, and in that case you do not really know whether it is the CPU, CPU cache, RAM, GPU, or even something else. That is why you either leave it as-is or spend a loooooot of time (a looot more than with CPU overclocking) validating and testing it.


----------



## kx11

I should be a member of this club in few days with this model









ASRock Radeon RX 6900 XT Phantom Gaming D 16G OC


Clock (GPU / Memory): Boost Clock up to 2340 MHz / 16 Gbps, Game Clock up to 2105 MHz / 16 Gbps, Base Clock 1925 MHz / 16 Gbps. Key specifications: 7nm AMD Radeon™ RX 6900 XT graphics, 16GB 256-bit GDDR6, AMD RDNA™ 2 architecture, hardware ray tracing, PCI® Express 4.0...




www.asrock.com






Has anyone tried it and got tips for me? No one has reviewed it AFAIK.


----------



## H3||scr3am

CS9K said:


> Right there, is a possible source of your issues... 3200 16-20-20 and 18-22-22 kits are the lower end bins of whatever chips made it onto the boards. I wouldn't be surprised if tRCD 20 and tRP 20 threw memtest errors at any/everything above 3200. Of the three timings, tCL is the only one that scales with voltage; tRCD and tRP scale with speed. Worth a run through memtest to see what comes up.


Yeah, I've reset the clock to XMP profile settings... but still hitting issues with Timespy crashing on runs...



delgon said:


> ugh, just switching from 3200 to 3600 on low end RAM is "bad".
> Please run at least 100-400% of hci memtest to know it is not unreasonably unstable. Unstable RAM can lead to many instabilities and you do not really know in that case if it is CPU, CPU Cache, RAM, GPU or even something else. That is why you either leave it as is or spend a loooooot of time (a looot more than CPU overclocking) to validate and test it.


Yeah, I've reset the clock to XMP profile settings... but still hitting issues with Timespy crashing on runs...

See below for where I'm seeing the Current Throttling issues reported


----------



## ZealotKi11er

Sorry, I did not read much into all your posts. Is that Time Spy run with the GPU at stock?


----------



## delgon

Yeah, like I thought, it is the SOC that is hitting it. It shows as 96.5% in your screenshot; it can be a little more depending on your polling frequency.
You can play with the maximum SOC voltage; I have mine 100% stable at 975mV, and that did help with the SOC current limits. Another thing you can do is increase the TDC of the SOC in MPT, though I would not recommend bumping it too much; try lowering the voltage first. The new beta of HWiNFO also shows your PPT, core TDC, and SOC TDC limits. I can see that you did not increase the power limit, as you are "throttling" at 55A, the default; the +15% power limit in Wattman also increases the SOC TDC, making it 63.25A.
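The 63.25A figure is the same proportional scaling the slider applies to the power limit, just applied to the SOC TDC. A minimal check, assuming (per delgon) the slider scales the current limits linearly:

```python
def scaled_tdc(stock_tdc_a: float, power_limit_pct: float) -> float:
    """Current (TDC) limit after the Wattman power-limit slider scales it proportionally."""
    return stock_tdc_a + stock_tdc_a * power_limit_pct / 100

# Default 55 A SOC TDC with the +15% Wattman power limit:
print(scaled_tdc(55, 15))  # -> 63.25
```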


----------



## H3||scr3am

So I did complete a run, still hitting the current throttle according to the HWiNFO beta... backing down the max SOC voltage, 1100mV now from the 1150mV default; it seems to have at least completed the bench, so fingers crossed you're onto something, delgon. What kind of clocks are you running at that voltage?


----------



## delgon

I'm at 2150 memory with fast timings and 2575-2675 @ 1163mV stable. Passed about 1h of Time Spy, 1h of Port Royal, and many games in the weeks since I got it 
In MPT I increased the PPT to 340, changed GFX max voltage to 1162mV (it has to be done that way; just changing it in Wattman does not help at those speeds), and set the SOC voltage to 975mV.

Here are some benches at 2610-2710 that were not really stable on my card, but some scores to compare to nonetheless.


----------



## H3||scr3am

delgon said:


> I'm at 2150 Memory with Fast Timings and 2575-2675 @ 1163mv stable. Passed like 1h of Timespy, 1h Port Royal and many games past those weeks I got it
> In MPT I increased the PPT to 340, changed GFX max voltage to 1162mv (to make it that way, just changing it in the wattman does not help at those speeds), and SoC voltage at 975mv
> 
> [Official] AMD Radeon RX 6900 XT Owner's Club Here are some benches on 2610-2710 that were not really stable on my card but some scores to compare to nonetheless.


Well, I knocked the GPU voltage down to 1170mv, and it's a new PB for Timespy... also upped the RAM to 3400 again... appears stable enough lol...









guess I'll keep playing with SOC max voltage and see if I can squeeze some more performance out of it...

I'm jealous of your PR score... that whomps mine  haven't gotten 11K yet... getting close though...

EDIT: guess I'm in the 11K club now...


----------



## ZealotKi11er

Very interested in what that current limit means. You should only be power limited or temperature limited, and once at 1.175v, the voltage limit should always be your limit. 
There are the core current and SOC current limits. Since we don't touch the SOC, we should be fine with just the 15% extra we get. The limit we don't see is the electrical design current (EDC). That one is a hardware limit for the VRM, which can spike at any point in the test and drop the GFX core clock for a very short time.


----------



## Martin778

What would be the most efficient way to AiO a 6900XT? EK's 240mm AIO kit is extremely expensive, but I haven't seen many rad+pump combos.


----------



## ZealotKi11er

Martin778 said:


> What would be the most efficient way to AiO an 6900XT? EK's 240mm AIO kit is extremely expensive but I haven't seen many rad+pump combo's.


Custom loop?


----------



## danny9428

Martin778 said:


> What would be the most efficient way to AiO an 6900XT? EK's 240mm AIO kit is extremely expensive but I haven't seen many rad+pump combo's.


Considering Asus and Sapphire would charge about the same as putting EK's AIO on a reference card, I guess there isn't much difference in price for AIO-cooling a 6900XT.
Maybe the Asus one would be a better pick if you're into AIO water cooling, as the board design would be better than reference, but frankly it's more about which card comes in stock first atm.


----------



## Martin778

That's exactly it @danny9428, any AiO options are vaporware at the moment and I already have a reference one. I doubt an AIB board would be that much better since we can make the power limit almost infinite in MPT.


----------



## H3||scr3am

Martin778 said:


> What would be the most efficient way to AiO an 6900XT? EK's 240mm AIO kit is extremely expensive but I haven't seen many rad+pump combo's.


just ghetto rig a CPU AIO cooler onto it if you don't have clearance issues  



 zipties can work... or custom make a retention bracket, or drill some custom holes in one...


----------



## delgon

Well, it works, but can behave worse in some instances. Remember that nowadays your VRAM also needs active cooling, and in memory-heavy applications even a stock cooler that makes contact with all components can be barely enough. If you also want to increase the PPT, your VRM will definitely need more cooling. I went with a custom loop for just this reason: to cool everything.


----------



## danny9428

I think I've seen someone using the NZXT AIO for a GPU,
not exactly sure if that works on RX 6000-series cards.

AMD's decision to use pads on the GPU die kinda complicates things with regard to mounting pressure, so I probably wouldn't go for the zip-tie method : P


----------



## muffins

I picked up a 6900 xt reference and want to add thermal pads to the backplate. I currently have 1.5mm on me. Anyone know the pad thickness needed for the back of vram and black capacitors?


----------



## delgon

From what I can measure you would need like 2mm pads at least. That is the thickness you would need to get to the bumps for the screws (the standoffs). So I would use either 2 or 2.5mm pads.


----------



## marcoschaap

I finally got my Sapphire Nitro+ 6900XT TimeSpy stable. I could always get Superposition to run, but with the following settings the TimeSpy benchmark doesn't crash anymore like it did before.

Settings MPT:

Wattage: 375
EDC: 370

GFX Max Voltage: 1150
SoC Max Voltage: 1012

Settings Radeon driver:

Clock speed: 500 - 2660
Memory speed: 2150 & Fast Timing enabled
Zero-RPM mode disabled and increased fan curve ~2800RPM @ 375w load
Power settings: disabled

Timespy bench:










URL to result: I scored 18 896 in Time Spy

Overall I'm pretty happy with the result. I'm currently planning a custom loop, which hopefully will yield a little more performance and a somewhat more agreeable acoustic level.


----------



## muffins

delgon said:


> From what I can measure you would need like 2mm pads at least. That is the thickness you would need to get to the bumps for the screws (the standoffs). So I would use either 2 or 2.5mm pads.


ty very much  i'll pick up some 2mm pads then.


----------



## kx11

So I tried recording a video of my gameplay without a mic, and ReLive captured the video without audio. Anyone got tips?? Looks like a widespread problem.


----------



## majestynl

muffins said:


> I picked up a 6900 xt reference and want to add thermal pads to the backplate. I currently have 1.5mm on me. Anyone know the pad thickness needed for the back of vram and black capacitors?





delgon said:


> From what I can measure you would need like 2mm pads at least. That is the thickness you would need to get to the bumps for the screws (the standoffs). So I would use either 2 or 2.5mm pads.


I used the EK backplate, and the image below shows the pad thickness in each place. I didn't use the crap pads from EK;
I always use better-quality ones, most of the time Alphacool Eisschicht 14W/mK! Definitely seeing the difference!


----------



## kx11

Trying to get an idea of how fast this card is 










I scored 8 817 in Time Spy Extreme


AMD Ryzen 9 3900XT, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## muffins

Got it all padded up! Used some Chinese-branded pads I picked up from Amazon, labeled as 6W/mK and 2mm thick, to bridge the PCB to the backplate, and used 1mm Arctic pads I had left over for the black capacitors. Both the 1mm and 2mm fit perfectly.
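As a rough sanity check on how much pad conductivity matters, here is a back-of-the-envelope one-dimensional conduction model. The 6W/mK and 2mm figures come from the post above; the 14W/mK value is from the pads recommended elsewhere in the thread:

```python
# Conductive resistance per unit area of a thermal pad: R = thickness / conductivity.
# Lower is better: at the same thickness, a higher-conductivity pad moves
# proportionally more heat for the same temperature difference.
def pad_resistance(thickness_m: float, conductivity_w_per_mk: float) -> float:
    """Area-specific thermal resistance in m^2*K/W."""
    return thickness_m / conductivity_w_per_mk

budget = pad_resistance(0.002, 6.0)    # 2 mm, 6 W/mK pad
premium = pad_resistance(0.002, 14.0)  # 2 mm, 14 W/mK pad
print(f"6 W/mK pad:  {budget:.2e} m^2*K/W")
print(f"14 W/mK pad: {premium:.2e} m^2*K/W  ({budget / premium:.1f}x lower)")
```

This ignores contact resistance at the pad surfaces, so real-world deltas will be smaller than the ratio suggests.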


----------



## H3||scr3am

kx11 said:


> Trying to get an idea of how fast this card is
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 8 817 in Time Spy Extreme
> 
> 
> AMD Ryzen 9 3900XT, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


you've probably still got more room in that card if you really start playing with it 


I should probably rerun extreme and make sure it passes... but: I scored 8 424 in Time Spy Extreme

Your CPU outshines mine due to core count, but my GPU settings eked out more points; I think we're both on air, and your memory is higher clocked.


----------



## kx11

H3||scr3am said:


> you've probably still got more room in that card if you really start playing with it
> 
> 
> I should probably rerun extreme and make sure it passes... but: I scored 8 424 in Time Spy Extreme
> 
> your CPU outshines me due to core count, but my GPU settings eeked out more points, and I think we're both on air, and your memory is higher clocked


Using MSI AB, not sure how far I should OC it. Yes, mine is air-cooled, it's the ASRock Phantom Gaming D.


----------



## kx11

ReLive still won't capture audio after a number of fixes I tried; looks like I'm selling this crap for good.


----------



## ZealotKi11er

kx11 said:


> So i tried recording a video of my gameplay without mic and Relive captured the video without audio, anyone got tips?? looks like a widespread problem


In-game audio?


----------



## kx11

ZealotKi11er said:


> In-game audio?


Yes


----------



## ZealotKi11er

kx11 said:


> Yes


I just tried to record and I can hear audio. I'm using a DP cable and playing sound via DP to my LG 4K monitor.


----------



## kx11

Yeah it's too late, back to 3090 for me, this card is getting sold soon


----------



## ZealotKi11er

kx11 said:


> Yeah it's too late, back to 3090 for me, this card is getting sold soon











[Official] NVIDIA RTX 3090 Owner's Club






www.overclock.net


----------



## Vesimas

Hi, a couple of hours ago I built the new rig in my signature (atm still on air). In normal use I didn't notice anything, but when I launched the Heaven benchmark this crazy coil whine started  Will it get better with time, considering also that I'll watercool it, or...


----------



## CS9K

Vesimas said:


> Hi, couple of hours ago i built the new rig in signature (atm still on air), on normal use didn't notice anything but when i launched heaven benchmark this crazy coil whine started  will it get better with time considering also that i'll watercool it or...


It will get slightly better over time, but yes, the RDNA2 cards coil whine like champs. It won't get better with water, because you'll have no GPU fan noise to drown it out. It stinks, but it is what it is.


----------



## ZealotKi11er

Vesimas said:


> Hi, couple of hours ago i built the new rig in signature (atm still on air), on normal use didn't notice anything but when i launched heaven benchmark this crazy coil whine started  will it get better with time considering also that i'll watercool it or...


It was pretty bad for me but I don't hear it anymore.


----------



## Vesimas

Good to know, ty both


----------



## weleh

My 6800XT coil whines a lot too, but to be honest, I haven't had any GPU that didn't do it, AMD or NVIDIA.

My earlier 1080 Ti FTW3 did it, an RTX 3070 did it, and now the 6800 XT does it.

I'm inclined to blame the PSU; it's a Seasonic FOCUS GX 750W Gold that might be the culprit.


----------



## H3||scr3am

if you want some crazy coil whine try something like the night raid benchmark! but yeah they whine for sure.


----------



## drnilly007

How’s amd drivers this time around?


----------



## ZealotKi11er

drnilly007 said:


> How’s amd drivers this time around?


Better?


----------



## marcoschaap

drnilly007 said:


> How’s amd drivers this time around?


Glad they fixed a BSOD issue I had with my Reverb G2. I feel the drivers are pretty stable. No weird crashes and in the games I play mostly (DCS and ACC) the GPU performs perfectly.


----------



## drnilly007

ZealotKi11er said:


> Better?


Not exactly comforting...lol. I have a 3080 FE on water and of course Nvidia drivers seem to be very solid. Got a chance to get a 6900XT for msrp so thinking of making the switch, but I want to be able to use the card without issues.


----------



## ZealotKi11er

drnilly007 said:


> Not exactly comforting...lol. I have a 3080 FE on water and of course Nvidia drivers seem to be very solid. Got a chance to get a 6900XT for msrp so thinking of making the switch, but I want to be able to use the card without issues.


It's hard to say really. I have both and don't have problems.


----------



## Martin778

Hmm, I don't see any point in thermal-padding the solid-state caps on the PCB; why would that help? These caps have near-zero ESR and shouldn't get warm by themselves?


----------



## 6u4rdi4n

Drivers have been fine on my end. No crashes or anything.


----------



## koji

Can confirm, I was a bit concerned about the AMD drivers but everything has been running smooth on my end.


----------



## Bart

I find the drivers for the 6900XT to be really good so far, _knock on wooden skull_. Even when I've pushed it too far during Time Spy score chasing, the driver seems to recover nicely in most cases. In the worst case, it just zaps the manual OC settings, which take 10 seconds to put back. I did notice some stuttering and FPS dips in Witcher 3, but considering I ran that game with a 2080ti for most of the time and only recently replaced it with the 6900XT, I wonder if re-installing the game fresh would help.


----------



## majestynl

The drivers are really in good shape; AMD is doing a good job. Some people get confused about what's the driver and what's companion software/utilities. People think Adrenalin is the driver.


----------



## newls1

Bart said:


> I find the drivers for the 6900XT to be really good so far, _knock on wooden skull_. Even when I've pushed it too far during Time Spy score chasing, the driver seems to recover nicely in most cases. In the worst case, it just zaps the manual OC settings, which take 10 second to put back. I did notice some stuttering and FPS dips in Witcher 3, but considering I ran that game with a 2080ti for most of the time and only recently replaced it with the 6900XT, I wonder if re-installing the game 'fresh' would help.


hows the waterblock so far? been a minute since i read up in this thread.....


----------



## Bart

newls1 said:


> hows the waterblock so far? been a minute since i read up in this thread.....


No complaints at all, but I've been working so much that I haven't done much gaming. It's kinda silly for a really kick-butt all-AMD rig with a 5950x/6900XT, but the game I've played the most lately is Total Annihilation, which is practically a DOS game from the 90s, not much stress on the GPU with that old dog, LOL.


----------



## nyk20z3

ZealotKi11er said:


> It was pretty bad for me but I don't hear it anymore.


I've noticed mine being quite loud as well just when I start gaming, sometimes worse than others.


----------



## newls1

Bart said:


> No complaints at all, but I've been working so much that I haven't done much gaming. It's kinda silly for a really kick-butt all-AMD rig with a 5950x/6900XT, but the game I've played the most lately is Total Annihilation, which is practically a DOS game from the 90s, not much stress on the GPU with that old dog, LOL.


Same here brother, FC5 is my addiction.... terribly optimized game engine, but can't get enough of the "Arcade" mode....


----------



## ZealotKi11er

nyk20z3 said:


> Ive noticed mine being quite loud as well just when i start gaming, sometimes worse then others.


Usually during game menus the FPS is very high, which is one of the causes of coil whine.


----------



## HyperC

Well, I finally got my mod, now I just gotta figure out the best time to blow up my card.... Question about Wattman crashing: I've noticed that once I clear the crash log, my clocks default to 2745. I don't even have a saved profile with those clocks, nor am I doing anything else besides clearing it and clicking Tuning. Maybe I need to DDU?


----------



## alexp247365

Can anyone recommend good paste/pads for this card? I've got my Red Devil LE under water, and for the most part it's fine temp-wise. However, in The Division 2 it can hit a constant 345W, which has brought the temps up to 70°C core / 100°C hotspot - which seems way too high for a watercooled card.

Two 360mm EK radiators for the 6900XT and a non-overclocked 5900X.


----------



## CS9K

alexp247365 said:


> Can anyone recommend a good paste/pads for this card. I've got my Red Devil LE under water and for the most part it is fine temp wise. However, for The Division two game, it can hit a constant 345w, which has brought the temps up to 70c/100c - which seems way too high for a watercooled card.
> 
> 2 360 ek radiators for the 6900xt and a non overclocked 5900x.


What do your temps in Furmark look like compared to The Division?


----------



## ZealotKi11er

That seems like a bad mount.


----------



## alexp247365

Furmark was 50°C/70°C @ 323W for a minute or two. Any other suggestions for testing max temps?


----------



## kratosatlante

alexp247365 said:


> Furmark was 50c/70c @ 323w for a minute or two. Any other suggestions to test max temps?


I have the 6900XT ASRock Phantom Gaming, 3x8-pin connectors, but I don't have issues with temps. No mods yet; thinking of adding a waterblock if I can get one for this card. Temps at stock (2440MHz max): core around 55-63°C, junction 75-78°C, hotspot 80-85°C. In ASRock's OC mode (2620MHz): core 60-67°C, junction 78-83°C, hotspot 80-87°C, cooler at 68%. Most models of this card don't have thermal pads under the backplate, so buy some good thermal pads, 2mm with 12W/mK+ conductivity: Amazon.com: Fujipoly/mod/smart Ultra Extreme XR-m Thermal Pad - 60 x 50 x 1.5 - Thermal Conductivity 17.0 W/mK: Computers & Accessories








Try running without the backplate; see this:


----------



## xTesla1856

Joining late, as per usual, but I managed to pick up an AsRock 6900XT Phantom Gaming at near-MSRP


----------



## kratosatlante

picks from mine



















and the supply



















In Horizon Zero Dawn at 4K Ultimate I only get 68fps in the benchmark; don't know if it's a limitation of my PCIe 3.0 x8 (my NVMe RAID limits my PCIe lanes).
In the other games I tested I get similar or more fps than reviewers on YouTube. Running RAM at 3800 CL14-8-15-14-22-34-221, no bandwidth limit.


----------



## xTesla1856

Fellow AsRock owner! The card is hilarious, great cooler, almost too big to even fit my Enthoo Luxe. As for performance, it destroys pretty much any game I throw at it in 1440p.


----------



## majestynl

alexp247365 said:


> Can anyone recommend a good paste/pads for this card. I've got my Red Devil LE under water and for the most part it is fine temp wise. However, for The Division two game, it can hit a constant 345w, which has brought the temps up to 70c/100c - which seems way too high for a watercooled card.
> 
> 2 360 ek radiators for the 6900xt and a non overclocked 5900x.



Alphacool Eisschicht 14 or 17W/mk
Fujipoly 14 or 17W/mk

And I found a new one on Amazon that does a really good job and is cheaper than the ones above!!
It's a German company.

- Extreme Cool 360 / Silver and up (14W/mk and up)

But your temps are way too high for 345W and a WB. I would first check your mounting!


----------



## alexp247365

majestynl said:


> But your temps are way to high for 345w and a WB. I would first check your mounting.!!!


It's only that one game. I don't know what's so special about it that pushes the card harder in some areas than benchmarks do.

Also, I'm not the best at draining/refilling the loop without making a mess. The two refill bottles I've ordered off Amazon both leak.


----------



## delgon

alexp247365 said:


> It is only that one game. I don't know what is so special about it that pushes the card in some areas harder than it is pushed by benchmarks.
> 
> Also, I'm not best at draining/refilling the loop without making a mess. The two refill bottles I've ordered off Amazon both leak .


Here are my temps. This one pulls around 370W in Port Royal and gets a little hot too. I was also considering replacing the paste with LM; this 30°C difference between edge and hotspot is kinda driving me crazy. I have an EK 360 normal and a thick 280 in my loop, with a not-really-OC'd 9900K. It's true that I run my fans at around 900RPM, but still, even at max it does not get that much cooler.


----------



## CS9K

delgon said:


> [Official] AMD Radeon RX 6900 XT Owner's Club Here are my temps. This one pulls around 370W in Port Royal and gets a little hot too. I was also considering replacing the paste with LM. This 30C difference between edge and hotspot is kinda driving me crazy. I have EK 360 normal and 280 thick boys in my loop with not rly OCd 9900k. It is true that I run my fans at around 900RPM but still, even on max it does not get that much cooler


I'm curious what thermal compound each of you used on the die, cc @alexp247365 . My hotspot doesn't get anywhere close to 90C on similar loads/test-lengths, at 2600MHz, 1150mV, 2100mem, 345W limit. I used Gelid GC-Extreme thermal compound on my RX 6800 XT and it has performed admirably so far. I've topped 75C hotspot before, but normally hang around the 60's gaming and in short benchmark runs.


----------



## alexp247365

CS9K said:


> I'm curious what thermal compound each of you used on the die


Whatever came in the Alphacool waterblock package. My card is voltage-hungry and doesn't seem stable at anything less than 1175mv.

In the opening Timespy scene, I've seen it pull as much as 402W, but it crashes shortly after.


----------



## CS9K

alexp247365 said:


> Whatever came in the alphacool water-block package. My card is voltage hungry, and doesn't seem stable at anything less than 1175.
> 
> In the opening timespy scene, I've seen it pull as much as 402w, but crashes shortly after


Hopefully they sent you Alphacool XPX-1, and not "Alphacool Silver Grease"


----------



## Medusa666

Question: Does anyone know what kind of fan bearing Asrock uses for their 6800 (XT) and 6900 XT cards? There is no information to be found on this, my experience is that the fans are normally one of the first components to fail on a graphics card.


----------



## alexp247365

Medusa666 said:


> There is no information to be found on this


The underside of the fan would probably have a model number for the fan on the sticker that you could use to cross-reference. But, you'll probably have to do a little disassembly to get that information. However, you might be able to just order the whole fan component itself.


----------



## xR00Tx

Just enjoying my Sapphire Nitro+ 6900 XT!
I don't regret at all leaving nvidia!

I scored 20 953 in Time Spy


----------



## HyperC

alexp247365 said:


> Can anyone recommend a good paste/pads for this card. I've got my Red Devil LE under water and for the most part it is fine temp wise. However, for The Division two game, it can hit a constant 345w, which has brought the temps up to 70c/100c - which seems way too high for a watercooled card.
> 
> 2 360 ek radiators for the 6900xt and a non overclocked 5900x.


I used KPX for the first time on the GPU die, no issues; haven't gone over 41°C die temp, not sure how good that is. Stock Alphacool block pads; I'm sure 14W/mK pads would take another 5°C off the board, but with AMD's limits it really doesn't matter at this point.


----------



## 99belle99

Just got one of these cards and got around to testing it today on GTA V, Mafia III, and Watch Dogs 2 at 4K with the FPS limit set to 80. All were run at ultra settings, except GTA V was at x4 instead of x8 MSAA and had the advanced graphics settings column off. No need to OC these cards, as mine was mostly hitting 2500MHz on the core and around 2000MHz on the memory. It ran a bit hot on stock fans, so I pushed the RPM up to cool it more and it's still pretty quiet. I have a reference model.


----------



## Flexarius

xR00Tx said:


> Just enjoying my Sapphire Nitro+ 6900 XT!
> I don't regret at all leaving nvidia!
> 
> I scored 20 953 in Time Spy
> 
> 
> 
> View attachment 2482911
> View attachment 2482912


Hi,

can you post your Settings in MPT and Wattman please?


----------



## newls1

Hows this timespy run? Just installed new driver.










*EDIT*

DAMN IT!!! That score puts me between 8th and 9th fastest on 3DMark for the 6900XT (single card), but it won't validate the online results because it says "driver not recognized" and throws a warning... I think I have some pretty good 6900XT silicon on this card.


----------



## cfranko

Hey everyone, I just picked up a 6900 XT Phantom D. I set its power to 325 watts with MorePowerTool. It's boosting really nicely with this amount of power; however, the junction temp is around 90-95°C in 3DMark. Will this temperature cause an issue in the long term? I have to be careful because the card has no warranty.


----------



## weleh

Wait a few more days until driver becomes valid and then you can hide/show the score and it will put you on the leaderboard.


----------



## newls1

weleh said:


> Wait a few more days until driver becomes valid and then you can hide/show the score and it will put you on the leaderboard.


Will do, thanks


----------



## coelacanth

cfranko said:


> Hey everyone, I just picked up a 6900 XT Phantom D. I set its power to 325 watts from morepowertool. Its boosting really nice with this amount of power however, the junction temp is around 90-95 in 3D Mark. Will this temperature cause an issue in the long term? I have to be careful because the card has no warranty.


According to AMD those junction temps are fine. Are you using a custom fan curve? That can help a lot.


----------



## Vesimas

If I'm not wrong, Adrenalin is not compatible with MSI Afterburner, so the question is: is there any overlay software for FPS/GPU temps etc.?


----------



## newls1

Ctrl Shift O ......


----------



## Vesimas

newls1 said:


> Ctrl Shift O ......


Lol i didn't know


----------



## newls1

Vesimas said:


> Lol i didn't know


thats why forums exist, for help.....


----------



## newls1

weleh said:


> Wait a few more days until driver becomes valid and then you can hide/show the score and it will put you on the leaderboard.


Am I going to have to re-run the benchmark, or will it just validate once 3DMark (UL) recognizes the driver?


----------



## cfranko

Are my settings currently good? Can I potentially increase something else? I don't generally know much about overclocking but know a bit.


----------



## weleh

newls1 said:


> am i going to have to re-run the benchmark, or will it just validate as 3dmark (UL) recognizes the driver?



No need to rerun; just hide and show the results on your run and it will show up on the leaderboard again.

However, bear in mind the new drivers boosted everyone's cards.

My own 6800XT just did a 21.2K Graphic Score run and currently #1 5800X/6800XT on Time Spy at 2700 Mhz average.









I scored 19 644 in Time Spy


AMD Ryzen 7 5800X, AMD Radeon RX 6800 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## newls1

weleh said:


> No need to rerun, just hide and show resuls on your run and it will show up on leaderboard again.
> 
> However, bear in mind the new drivers boosted everyone's cards.
> 
> My own 6800XT just did a 21.2K Graphic Score run and currently #1 5800X/6800XT on Time Spy at 2700 Mhz average.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 19 644 in Time Spy
> 
> 
> AMD Ryzen 7 5800X, AMD Radeon RX 6800 XT x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


congrats on your run! this might be a decent driver. So are you saying just click "hide" then show and it will reset?


----------



## newls1

cfranko said:


> Are my settings currently good? Can I potentially increase something else? I don't generally know much about overclocking but know a bit.
> View attachment 2483053


Many places you can tweak here:
I'm guessing you are not under water, correct? If not, set up a fan profile to keep "hotspot" temps below 100°C and the core below 75°C. Max out your memory slider and set the timing to "fast". I'd raise your min slider to 2630 and your max slider to 2750 (for whatever reason I've had the best results keeping around a 150MHz difference between the two).
Now on to the MPT tweaks. I have my 6900XT set to the following (but bear in mind I'm watercooled, so be cautious when adjusting the power slider %):

Watts: 340 (W)
TDC: 390 (A)
TDC SoC: 58 (A)

Write that to the PPT and WATCH OUT FOR YOUR TEMPS. I'd start with the power slider at 5%, then work your way up until you either hit a temp limit or a CTD.

*EDIT* Make certain you are powering this GPU with 2 dedicated 8-pin PCIe cables and NOT 1 cable with 2 8-pins!! That is crucial.
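For what it's worth, the step-up routine described above can be sketched as a loop. This is purely illustrative; `read_hotspot_c`, `read_core_c`, and `run_stress_test` are hypothetical stand-ins for watching HWiNFO and launching a Time Spy pass by hand:

```python
# Illustrative sketch of the advice above: raise the power slider in small
# steps and back off at the first crash or temperature-limit violation.
# The limits mirror the targets quoted in the post; the helpers are hypothetical.
HOTSPOT_LIMIT_C = 100  # keep hotspot below 100C
CORE_LIMIT_C = 75      # keep core below 75C

def tune_power_slider(read_hotspot_c, read_core_c, run_stress_test):
    """Return the highest power-slider % that passed without overheating."""
    last_good = 0
    for power_pct in range(5, 16, 5):        # +5%, +10%, +15%
        stable = run_stress_test(power_pct)  # e.g. a Time Spy run
        too_hot = (read_hotspot_c() >= HOTSPOT_LIMIT_C
                   or read_core_c() >= CORE_LIMIT_C)
        if not stable or too_hot:            # CTD or temp limit: stop here
            return last_good
        last_good = power_pct
    return last_good
```

The point of the sketch is the ordering: validate stability and temperatures at each step before committing to the next one, rather than jumping straight to the maximum.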


----------



## weleh

newls1 said:


> congrats on your run! this might be a decent driver. So are you saying just click "hide" then show and it will reset?


Once the driver gets approved, for your result to show on leaderboard you need to hide and show because otherwise it doesn't get posted automatically I believe.


----------



## cfranko

newls1 said:


> many places you can tweak here:
> im guessing you are not under water correct? if not setup a fan profile to keep "hotspot" temps below 100c and core 75c. max at your mem slider and set timing to "fast" Id raise your min slider to 2630 and max slider to 2750 (for whatever reason ive had best results keeping around a 150Mhz difference between the 2)
> now on to MPT tweaks..... I have my 6900XT set to the following (but bear in mind im watercooled so be cautious when adjusting the % of power slider:
> 
> watts 340 (w)
> TDC 390 (a)
> TDC SoC 58 (a)
> 
> write that to the PPT and WATCH OUT FOR YOUR TEMPS. Id start with 5% Power slider, then work your way up until you either hit a temp limit or a CTD
> 
> *EDIT* make certain you are powering this gpu with 2 dedicated 8pin PCIe cables and NOT 1 cable with 2 8pins!! That is crucial


Thanks for the reply. I have an ASRock Phantom D, which has 3x8 power pins all on separate cables, so no daisy-chaining and no issues with power. However, at 325 watts my junction temp sits at 90-95°C, and sometimes even 100°C, in 3DMark Timespy, and a little cooler in games. Since I'm air-cooled and my card doesn't have any kind of warranty, 95-100°C at the hotspot got me worried. So right now I've changed the TDP to 300W. I'm assuming the configuration you gave with 2630 min and 2750 max won't work with 300 watts. Air cooling is really limiting on these cards, man.


----------



## koji

kratosatlante said:


> i have 6900xt asrok phantom gaming, 3x8 conector but don have isues with temp, not mod yet, thinkin add water block if i get for this card. temp core around 55-63 in stock 2440mhz max , juntion 75-78. hotspot 80-85, mode oc asrock c 60-67 2620mhz j 78-83, h 80-87. cooler 68% , most models of this card dont have termal pads in the blacke plate, buy some good thermal pad, some 12w +conductivity 2mm Amazon.com: Fujipoly/mod/smart Ultra Extreme XR-m Thermal Pad - 60 x 50 x 1.5 - Thermal Conductivity 17.0 W/mK: Computers & Accessories
> 
> 
> 
> 
> 
> 
> 
> 
> try run with out blackplate, see this





xTesla1856 said:


> Joining late, as per usual, but I managed to pick up an AsRock 6900XT Phantom Gaming at near-MSRP
> View attachment 2482596
> View attachment 2482597


Any of you guys take the backplate off yet to see if there are thermal pads installed? I have one myself, but I'd rather not mess with it if I can avoid it; current GPU prices are a *****...


----------



## newls1

cfranko said:


> Thanks for the reply. I have a ASROCK Phantom D which has 3x8 power pins with all seperate cables so no daisy chaining and no issues with power. However, at 325 watts, my junction temp sits at 90-95 and sometimes even 100 in 3D Mark timespy. And a little cooler in games. And since I am aircooled and my card doesnt have any kind of warranty 95-100 in the hotspot got me worried. So right now I changed the TDP to 300W. Im assuming the configuration you gave with 2630 min and 2750 max wont work with 300 watts. Air cooling is really limiting on these cars man


A 100°C hotspot is allowable, just not favorable; you have to adjust your fans. Set my MPT settings, adjust your fans, and see what performance improvement you get.


----------



## cfranko

newls1 said:


> 100C hotspot is allowable just not favorable. you have to adjust your fans. set my MPT settings and adjust your fans and see what performance improvement you get.


I can actually afford to go on water, but I have no idea how.
This is my current setup: my case is the TD500 Mesh and I have an AIO on my CPU. If I only want to water cool my GPU, can I do it? How can I get information about this?


----------



## skline00

koji, I "only" have the RX6800, a Gigabyte stock card, but when I went to install the EK waterblock and backplate, the first thing I noticed was no thermal pads on the rear of the PCB when the original backplate was removed.

The EK backplate included directions and plenty of thermal pads for the rear of the PCB.


----------



## skline00

cfranko, Google on how to build a custom water loop. There are tons of videos.

I have done a number of custom loops through the years. They are not works of art since I favor function over appearance. 

I use flexible tubing. I use D5 pumps. I make sure I have plenty of radiator capacity and I always try to buy a case big enough to house all the additional parts.

In my signature below you can see the parts I used in the build.


----------



## cfranko

skline00 said:


> cfranko, Google on how to build a custom water loop. There are tons of videos.
> 
> I have done a number of custom loops through the years. They are not works of art since I favor function over appearance.
> 
> I use flexible tubing. I use D5 pumps. I make sure I have plenty of radiator capacity and I always try to buy a case big enough to house all the additional parts.
> 
> In my signature below you can see the parts I used in the build.


How do I find a waterblock that is compatible with my GPU?

Also, can't I just mount a CPU AIO on the core of the GPU? Do I have to build a custom loop in order to water-cool the GPU?
I will look up the details on Google now. Thanks for the advice.


----------



## newls1

I made the top 10!!! W00T..... the nerd inside of me is so excited! I'm 9th fastest currently! Thank you Alphacool for the waterblock for this GPU


----------



## koji

skline00 said:


> koji, I "only" have the RX6800, a Gigabyte stock card, but when I went to install the EK waterblock and backplate, the first thing I noticed was no thermal pads on the rear of the PCB when the original backplate was removed.
> 
> The EK backplate included directions and plenty of thermal pads for the rear of the PCB.


Thanks for the reply, man. Yeah, I should just check it out myself; removing the backplate isn't too much of a hassle. It would have been nice to find a teardown of the card somewhere, but the Phantom D is relatively obscure when it comes to internet coverage, it seems.

Nevermind:















Looks like they are installed.


----------



## xTesla1856

Is there a quick way to check the card's ASIC quality? Like you used to be able to do in GPUZ?


----------



## ZealotKi11er

xTesla1856 said:


> Is there a quick way to check the card's ASIC quality? Like you used to be able to do in GPUZ?


Out-of-the-box clock speed is one indicator.


----------



## xR00Tx

Flexarius said:


> Hi,
> 
> can you post your Settings in MPT and Wattman please?


Hey Flexarius,

For benchmarks I'm running MPT @ 400w and Wattman 2800 max / 2150 FT / 1.175v / +15% (I know it's dangerous since I only have 2 8-pin power connectors; that's why it's only for quick benches)

But for daily usage (gaming) I use a more conservative configuration: 2600 @ 1050mv (360w cap).

New Time Spy score: 22008
I scored 22 008 in Time Spy

CPU: 16585
GPU: 23357
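For context on why 400 W through two 8-pins is called dangerous above: the PCIe spec rates each 8-pin connector at 150 W and the slot at 75 W, so the in-spec budget works out like this (a sketch; real connectors and PSUs usually tolerate more than spec):

```python
# In-spec power delivery budget per the PCIe spec:
# 150 W per 8-pin connector plus 75 W from the slot.
EIGHT_PIN_W = 150
SLOT_W = 75

def spec_budget_w(n_eight_pin: int) -> int:
    """Total in-spec board power for a card with n 8-pin connectors."""
    return n_eight_pin * EIGHT_PIN_W + SLOT_W

print(spec_budget_w(2))  # 375 W -- under the 400 W MPT limit used for benching
print(spec_budget_w(3))  # 525 W -- why 3x8-pin boards have more headroom
```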


----------



## mismatchedyes

Does anyone find that the hardest non-RT benchmark is Time Spy GT2? I can run about 200 MHz higher in Fire Strike Ultra vs what I can in Time Spy (standard). Just wondered if that's common or not?


----------



## mismatchedyes

xR00Tx said:


> Hey Flexarius,
> 
> For benchmarks I'm running MPT @ 400w and wattman 2800 max / 2150 FT / 1.175v / +15% (know it's dangerous since I have only 2 8pin pwr connectors, thats why it's only for fast benchs)
> 
> But for daily usage (gaming) I use a more conservative configuration: 2600 @ 1050mv (360w cap).
> 
> New Time Spy score: 22008
> I scored 22 008 in Time Spy
> 
> CPU: 16585
> GPU: 23357


Amazing scores. What power supply do you have out of interest ?


----------



## xR00Tx

mismatchedyes said:


> Amazing scores. What power supply do you have out of interest ?


Ty

It's a hx1200i


----------



## mismatchedyes

What do you get on Fire Strike Ultra? I can get a really good score there but am miles behind you on Time Spy.


----------



## ZealotKi11er

mismatchedyes said:


> What do you get on firestike ultra? I can get a really good score there but am miles behind you on timespy.


There seem to be some tricks you can't apply when you run the Extreme/Ultra versions but can with normal Time Spy and Fire Strike. My 6900 XT scores 18.6K in Time Spy with the stock cooler + 15% power. Even at 2800 MHz, for me to get a 23K GPU score I'd need to be getting at least 20K+ stock.
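That estimate assumes the graphics score scales roughly linearly with average core clock (an approximation; real scaling is usually a bit worse than linear). A quick check of the arithmetic:

```python
# Assumption: graphics score roughly proportional to average core clock.
def scaled_score(score: float, clock_from_mhz: float, clock_to_mhz: float) -> float:
    return score * clock_to_mhz / clock_from_mhz

# Back-project a ~23.3K GPU score at 2800 MHz to a ~2400 MHz stock clock:
print(round(scaled_score(23357, 2800, 2400)))  # 20020 -> "at least 20K+ stock"
```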


----------



## mismatchedyes

ZealotKi11er said:


> There seem to be some tricks you cant apply when you run the Extra/Ultra version but you can do with normal TimeSpy and FireStrike. My 6900 XT score 18.6K in TimeSpy with stock cooler + 15% power. Even with 2800MHz it would mean to get 23K GPU score I need to be getting at least 20K+ stock.


Ah I see, thanks. The best graphics score I can get on Time Spy is about 21600, which seems well behind the best scores. Anything over a 2700 target clock and it will crash the benchmark. I wasn't sure if the crashing was due to the power supply as I only have a 750w, though I ran an RTX 3090 with a 520w BIOS on this PSU OK, so I am not sure.


----------



## weleh

I think you can exploit LOD/tessellation tweaks on some 3DMark tests but not others. Or even doing it time and time again will trigger it. There are posts about it here.
I haven't done it personally, but a 6900 XT doing 23k+ on TS does sound like either very cold temps or some sort of messing about.


----------



## 99belle99

My settings will not change after changing settings in MorePowerTool. I restart the PC after clicking write.

I have no idea why. I used to use it no problem with my old 5700 XT. I even updated to the latest version of MPT.


----------



## newls1

99belle99 said:


> My settings will not change after changing settings in MorePowerTool. I restart the PC after clicking write.
> 
> I have no idea why. I used to use it no problem with my old 5700 XT. I even updated to the latest version of MPT.


do you have the little window come up saying "write successful" after clicking "Write SPPT"?


----------



## CS9K

mismatchedyes said:


> Does anyone find that the hardest non-RT benchmark is Time Spy GT2? I can run about 200 MHz higher in Fire Strike Ultra vs what I can in Time Spy (standard). Just wondered if that's common or not?


I feel like you're running into power/thermal limits in TS Ultra, which isn't letting your core clock up as high as it could. Meanwhile, the lower resolution in TS standard lets your core run as fast as it can.


----------



## 99belle99

newls1 said:


> do you have the little window come up saying "write successful" after clicking "Write SPPT"?


No, never knew that was a thing.

Wonder why that's not happening for me?


----------



## newls1

99belle99 said:


> No, never knew that was a thing.
> 
> Wonder why that's not happening for me?


So for me, I have to do this IIRC (going off memory, so I might miss a step); if I don't get the little window that pops up, my settings don't apply either:
Open MPT with admin rights
Click "Load" and load the BIOS file you extracted from GPU-Z
Then go to the drop-down box and select your GPU
Once you do that, all fields will populate
Click the tab where we adjust Power (W) and (A) and SoC (A)
Make your adjustments
I click "Save" at this point and save a file to my desktop
Then I click "Write SPPT" and a very small box pops up saying "write successful"
THEN REBOOT, all is done


----------



## xR00Tx

mismatchedyes said:


> What do you get on firestike ultra? I can get a really good score there but am miles behind you on timespy.


On Fire Strike Ultra I just got: 16,477 (graphics)

I scored 16 077 in Fire Strike Ultra


----------



## ZealotKi11er

xR00Tx said:


> On Fire Strike Ultra I just got: 16.477 (graphics)
> 
> I scored 16 077 in Fire Strike Ultra


Damn, is your clock at 2900 MHz?


----------



## xR00Tx

ZealotKi11er said:


> Dam is ur clock at 2900Mhz?


Yep, for Fire Strike Ultra I was able to run it @ 2940!


----------



## HyperC

xR00Tx said:


> Yep, for Fire Strike Ultra I was able to run it @ 2940!


holy moley! brb going to test and crash my drivers!


----------



## newls1

xR00Tx said:


> Yep, for Fire Strike Ultra I was able to run it @ 2940!


what sub ambient cooling are you using?


----------



## cfranko

@newls1 Hey newls, why is my Time Spy score bad? My MPT settings are 325 W and 345 A









I scored 16 504 in Time Spy


AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## xR00Tx

newls1 said:


> what sub ambient cooling are you using?


Lol
Yesterday I installed a bykski water block on it!


----------



## newls1

cfranko said:


> @newls1 Hey newls, Why is my score timespy score bad? My MPT settings are 325 w and 345 a
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 504 in Time Spy
> 
> 
> AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


You are almost at 20k for a GPU score, which seems close to right to me; your CPU score is exactly half of what mine is, so maybe that is bringing you down. You can easily break 20k for the GPU score though, increase that OC... your average clock speed was 24xx... bring that up to 26xx


----------



## cfranko

newls1 said:


> you are almost 20k for a gpu score, seems close to right for me, your cpu score is exactly 1/2 of what mine is so maybe that is bringing you done. YOu can easily break 20k for gpu score tho, increase that OC... your average clock speed was 24xx.... bring that up to 26xx


Alright, thanks. Also, is the Bykski 6900 XT waterblock compatible with the ASRock Phantom D? Here is the link to it: https://m.tr.aliexpress.com/item/10...fa8d449e0a3262534cc1f1978w.jpg_640x640Q90.jpg

How can I know if it's compatible or not?


----------



## newls1

No, you have a custom PCB... find a block for your specific GPU model. GOOD LUCK


----------



## MickeyPadge

cfranko said:


> Alright, Thanks. Also is the Byski 6900 XT Waterblock compatible with the Asrock Phantom D? Here is the link to it https://m.tr.aliexpress.com/item/1005002297435653.html?spm=a2g0n.productlist.0.0.3331BnH2BnH2yW&browser_id=d1d94a45b99946578f478f2c75d9d404&aff_trace_key=4048e8a59aba45a5b2f998d3be16212c-1615128827607-01402-UneMJZVf&aff_platform=msite&m_page_id=17853cb50fd9739f9d61da55f7527be4a691eec367&gclid=&_imgsrc_=ae01.alicdn.com/kf/H6d4e15bfa8d449e0a3262534cc1f1978w.jpg_640x640Q90.jpg
> 
> How can I know if its compatible or not?


I use the Corsair block on my Phantom; you must remove the metal shroud and shave the plastic so the two longer capacitors will fit. Works great


----------



## MickeyPadge

chispy said:


> Thank you for the answer , awesome news ! Is out of stock everywhere but i dont mind waiting for it to come back in stock  , also it seems ek is doing an ek fc block for the Asrock 6900xt-6800xt Taichi wish uses the same exact pcb as this one. Will update once i find and install a suitable full cover water block. Thank you the help , appreciate it.
> 
> Kind Regards: Chispy


Did you get the block? I'd be interested to see some results temp-wise. I've upgraded to a Ryzen 5800X now too; the Xeon 1680 v2 is my backup


----------



## Aaq

This is my score, see below. I only adjusted the Power Limit to 15% and made a custom fan curve. 6900 XT Reference.









I scored 18 007 in Time Spy


Intel Core i7-10700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## xR00Tx

Aaq said:


> This is my score, see below. I only adjusted the Power Limit to 15% and made a custom fan curve. 6900 XT Reference.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 18 007 in Time Spy
> 
> 
> Intel Core i7-10700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 


Have you already installed driver 21.3.1 ? It will improve your TS score.


----------



## cfranko

xR00Tx said:


> Have you already installed driver 21.3.1 ? It will improve your TS score.


I just got this score








I scored 16 710 in Time Spy


AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com





Is it good for an air-cooled GPU?


----------



## Aaq

xR00Tx said:


> Have you already installed driver 21.3.1 ? It will improve your TS score.


Yes, I have the latest driver installed. I gained a few extra FPS in some games. Very happy overall with the performance of the 6900 XT. I used to be very Nvidia-sided, but after trying out this product I can say that AMD is also very good despite what people say about driver stability issues. I have yet to experience a driver problem. Previously I had an RTX 3080, but I returned it due to very bad coil whine and bad performance. I had a 5900X with the RTX 3080, but my system would sometimes have mini-freezes, like 0.5 sec hiccups. I tested all the parts but could not find the culprit, so I returned everything and rebuilt with Intel. Going i9 11900K soon.


----------



## xR00Tx

Aaq said:


> Yes I have the latest driver installed. I gained a few extra FPS in some games. Very happy overall with the perf of 6900 XT. I used to be very Nvidia sided but after trying out this product I can say that AMD is also very good despite what people say about driver stability issues. I have yet to experience a driver problem. Previously I had RTX 3080 but I returned due very bad coil whine and bad performance. I had 5900x with RTX 3080 combination but my system would sometimes have mini-freezes like 0.5 sec hiccups. I tested all parts but I could not find culprit so I returned everything and rebuild with Intel. Going i9 11900k soon.


Have you tried to increase your card's TDP through MPT (More Power Tool)? I'm also waiting for a 11900k! =]


----------



## Aaq

xR00Tx said:


> Have you tried to increase your card's TDP through MPT (More Power Tool)?


I have, but I don't think the extra perf (a few fps) warrants the heat that comes with it. With the reference cooler I hit 91 Celsius on the hotspot after a few Time Spy Extreme runs.










I'm not sure if I want to take the card apart because I might go with 3080 Ti if I ever can get my hands on one.

Update:
I've been looking at this thread a bit more and I'm getting very excited with all the overclocks you guys are doing. I'm considering going with watercooling for the GPU. I have my eye on an Alphacool Eisblock and would need a reservoir, pump and radiator/fans to go with it. I've never done watercooling before; can someone recommend some decent parts to cool a 6900 XT? I have the Meshify 2 Compact case.

This is the one right? 








Alphacool Eisblock Aurora Acryl GPX-A Radeon RX 6800/6800XT/6900XT Reference mit Backplate


The Alphacool Eisblock Aurora GPX-A Radeon 6800(XT) combines style with performance and extensive digital RGB lighting. More than 17 years of experience have gone into this graphics card water block...




www.alphacool.com


----------



## xR00Tx

Aaq said:


> I have but I don't think the extra perf (few fps) warrants the heat that comes with it. With reference cooler I hit 91 Celsius Hotspot with a few Time Spy Extreme runs.
> 
> 
> 
> I'm not sure if I want to take the card apart because I might go with 3080 Ti if I ever can get my hands on one.
> 
> Update:
> I've been looking at this thread a bit more and I'm getting very excited with all the overclocks you guys are doing. I'm considering going with watercooling for the GPU. I have my eyes on an Alphacool Eisblock and would need a reservoir, pump and radiator/fans going with it. I've never done watercooling before, can someone recommend me some decent parts to cool 6900 XT? I have the Meshify 2 Compact case.
> 
> This is the one right?
> 
> 
> 
> 
> 
> 
> 
> 
> Alphacool Eisblock Aurora Acryl GPX-A Radeon RX 6800/6800XT/6900XT Reference mit Backplate
> 
> 
> The Alphacool Eisblock Aurora GPX-A Radeon 6800(XT) combines style with performance and extensive digital RGB lighting. More than 17 years of experience have gone into this graphics card water block...
> 
> 
> 
> 
> www.alphacool.com


I got a chinese gpu water block for my Sapphire Nitro+ 6900 XT and it is working perfectly!

It is a Bykski A-SP6900XT-X:
R$818.99, 10% off | Bykski water block for Sapphire Radeon RX 6800XT/RX6900XT Nitro+ GPU card / full cover, copper radiator / RGB light | Fans & Coolers - AliExpress


----------



## 99belle99

How are people running 2600-2700MHz and higher. In Timespy my average is around 2400MHz. If I use MPT to change settings Timespy will crash at launch. I have a reference 6900 XT.


----------



## xR00Tx

99belle99 said:


> How are people running 2600-2700MHz and higher. In Timespy my average is around 2400MHz. If I use MPT to change settings Timespy will crash at launch. I have a reference 6900 XT.


Use MPT to change only the "POWER LIMIT GPU (W)" option. Set it to 335.


----------



## CS9K

xR00Tx said:


> Use MPT to change only the "POWER LIMIT GPU (W)" option. Set it to 335.


This. On the "Power and Voltage" tab, I have "Power Limit GPU (W)" set to 300W, and "TDC Limit GFX (A)" set to "350A" on my RX 6800 XT reference (on water). Those two values are the only two you can change currently in MorePowerTool.
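The interplay between those two fields can be sketched as follows: the card throttles at whichever cap it hits first, and the TDC (A) cap acts as a second power cap once multiplied by core voltage. A rough illustration (the 1.175 V figure is just the Wattman voltage mentioned earlier in the thread, not a measured value):

```python
# Which MPT cap binds first: Power Limit GPU (W) or TDC Limit GFX (A)?
# TDC in amps times core voltage gives an equivalent wattage cap.
def binding_limit(power_w: float, tdc_a: float, vcore_v: float) -> str:
    tdc_as_watts = tdc_a * vcore_v
    return "power (W)" if power_w < tdc_as_watts else "TDC (A)"

# 350 A * 1.175 V is about 411 W, so a 300 W power limit is the binding cap:
print(binding_limit(300, 350, 1.175))  # power (W)
```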


----------



## Nighthog

Just bought the RX 6900XT Liquid Devil from PowerColor.

Should arrive in a couple of days. A batch of various GPUs came in to my go-to retailer and they didn't vanish in seconds as they have done for the past weeks.
A little on the expensive side, but good value compared to the other 6900XT models currently available. I panicked a little when I saw it pop up for purchase.


----------



## Nighthog

I had just read the following review, and the card came in stock for purchase a moment later.









PowerColor RX 6900XT Liquid Devil Review - Well cooled is half won! | igor'sLAB


In the meantime, in addition to the already extensively tested reference cards of AMD's RX-6000 series, there are also various board partner cards – not for sale – which include the truly exceptional…




www.igorslab.de





Perfect timing, several still in stock. (Sweden)


----------



## Aaq

This is the best I can get on air.









I scored 18 914 in Time Spy


Intel Core i7-10700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## xR00Tx

I've just run Superposition benchmark (1080p Extreme & 4k) and here are the results:


----------



## drnilly007

Aaq said:


> This is the best I can get on air.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 18 914 in Time Spy
> 
> 
> Intel Core i7-10700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 


What are your OC settings?


----------



## drnilly007

xR00Tx said:


> Use MPT to change only the "POWER LIMIT GPU (W)" option. Set it to 335.


Do you do anything in the Radeon software when applying this? I have a ref 6900xt with the Alphacool waterblock on it


----------



## LtMatt

xR00Tx said:


> Use MPT to change only the "POWER LIMIT GPU (W)" option. Set it to 335.


Do you set the power limit to +15% also?


----------



## xR00Tx

drnilly007 said:


> Do you do anything in the radeon software when applying this. I have a ref 6900xt with the alphacool waterblock on it





LtMatt said:


> Do you set the power limit to +15% also?


For Superposition 1080p Extreme / 4k I have Radeon software set to:

Clock 2840/2940
VRam 2150 / FT
Power +15%


----------



## newls1

xR00Tx said:


> For Superposition 1080p Extreme / 4k I have Radeon software set to:
> 
> Clock 2840/2940
> VRam 2150 / FT
> Power +15%


would you be kind enough to share your MPT settings? I would like to see if I could meet those scores


----------



## xR00Tx

newls1 said:


> would you be kind enough to share your MPT settings? I would like to see if I could meet those scores


Sure!
For Time Spy and Fire Strike Ultra I have only increased the Power Limit to 400w.


----------



## Nighthog

It has arrived:


----------



## newls1

This is my 4K Optimized Superposition run @ ~2830MHz.... Is it a decent score??


----------



## Aaq

Does anyone have any experience with EK-Quantum Reaction AIO?


----------



## newls1

Aaq said:


> Does anyone have any experience with EK-Quantum Reaction AIO?


EK for any watercooling part would be my absolute LAST resort


----------



## CS9K

newls1 said:


> This is my 4k optimized super position @ ~2830Mhz.... Is it a decent score??


Great googly moogly. Just over 15k is the best I can pull with my 6800 XT @ 2600MHz.


----------



## Aaq

newls1 said:


> EK for any watercooling part would be my absolute LAST resort


Ok so which parts would you recommend instead?


----------



## newls1

2830MHz is as far as my 6900xt will go until we can increase voltage. A few MHz more than 2830 and the driver crashes; in fully loaded benchmarks, that is. I was pulling nearly 400 watts in the 4K Superposition benchy, and 2830MHz is the top this core can do with that kind of load. I can game @ 2900MHz, but GPU usage is only like 50-75% most of the time


----------



## mismatchedyes

It was 6C here today, so I opened the window and put the PC next to it. Got the room down to 15C and got a good score for Fire Strike. It is the #1 Radeon for the moment, but I don't think it will last very long. The PC is in a closed case too, which I think doesn't help.

Clock was set at 2900mhz









I scored 16 286 in Fire Strike Ultra


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## 6u4rdi4n

My water block arrived a few days ago and I finally got around to testing my card. Performance doesn't seem too bad, the system is very silent, and the card has little to no coil whine. I do need to do some investigating: could be lots of air trapped in the system, could be a bad mount, could be a combination of things.









I scored 17 780 in Time Spy


Intel Core i9-9900K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com





Temp graph:









This is what I ran for initial testing with a room temperature of ~22°C


----------



## Nighthog

I've installed the RX 6900XT Liquid Devil in my system, but I noted I had considerable trouble getting good flow. It easily gets hot; I already knew beforehand that I didn't have optimal flow, but quickly rebuilding the loop for this card pointed it out once more: I need to improve my loop.
Anyway, the card is in there and working as it should, I presume, but it is underperforming paired with a 4650G, and the restricted water flow is not making things better as temperatures get hot with a bit of testing.

Getting 55C core & 66C hotspot with an Ethereum test load at stock settings isn't a good sign.

Time Spy returned only ~15500 points for the moment.

But the coil-whine... *unbelievable*! No card has screamed as much as this one did running the 3DMark benchmark.


----------



## CS9K

Nighthog said:


> But the coil-whine... *unbelievable*! No card has screamed as much as this one did running the 3dmark benchmark.


I have an EK block and backplate on my reference RX 6800 XT. With the stock cooler, my card had average coil whine, but once the block and backplate went on you could, no exaggeration, hear the coil whine across the house when running games and benchmarks.

In my case, it was the backplate that was transmitting the sound.

I removed all of the backplate screws, then, one by one, tightened each screw ONLY until the volume of the coil whine increased, then backed the screw out until the volume went down. I did each screw one by one, then went over each screw again, tightening until the volume increased.

It is not ideal, I admit. While the backplate won't come off, it isn't screwed on tightly, either.


----------



## kazukun

*Alphacool Unveils the Eisblock Aurora Acetal GPX-A Water Block for AMD Big Navi*








Alphacool Unveils the Eisblock Aurora Acetal GPX-A Water Block for AMD Big Navi


During the development of the Eisblock Aurora RADEON RX 6800/6900/XT graphic card GPU block, we wanted to further increase the performance. The first step was to move the cooler closer to the individual components by reducing the thermal pads to a thickness of 1 mm. Next, we reduced the...




www.techpowerup.com


----------



## Aaq

kazukun said:


> *Alphacool Unveils the Eisblock Aurora Acetal GPX-A Water Block for AMD Big Navi*
> 
> 
> 
> 
> 
> 
> 
> 
> Alphacool Unveils the Eisblock Aurora Acetal GPX-A Water Block for AMD Big Navi
> 
> 
> During the development of the Eisblock Aurora RADEON RX 6800/6900/XT graphic card GPU block, we wanted to further increase the performance. The first step was to move the cooler closer to the individual components by reducing the thermal pads to a thickness of 1 mm. Next, we reduced the...
> 
> 
> 
> 
> www.techpowerup.com


I just bought it!


----------



## cfranko

Which waterblock on Aliexpress would be compatible with my 6900 XT Phantom D? I can only buy from aliexpress and there are waterblocks only for nitro+, gigabyte, merc319, gaming x and reference. Does the Phantom D have the same PCB as the ones I listed?


----------



## Nighthog

After checking the EK website for the Liquid Devil, I realized I had mounted my flow in reverse for my card. There were no instructions or markings on the card or in the manual.
Something to fix later, because temperatures are sky high in games at the moment.


----------



## 6u4rdi4n

cfranko said:


> Which waterblock on Aliexpress would be compatible with my 6900 XT Phantom D? I can only buy from aliexpress and there are waterblocks only for nitro+, gigabyte, merc319, gaming x and reference. Does the Phantom D have the same PCB as the ones I listed?


No. Phantom D uses Asrock's own custom design. 



Nighthog said:


> After checking the EK website for the Liquid Devil I realized I had mounted my flow in reverse for my card. There were no instruction or markings on the card/manual itself.
> Something to fix later, Because temperatures are sky high in games at the moment.


What temperatures are you getting? I feel like EK's been bragging a lot about how their blocks perform almost the same regardless of flow direction.


----------



## Nighthog

6u4rdi4n said:


> What temperatures are you getting? I feel like EK's been bragging a lot about how their blocks perform almost the same regardless of flow direction.


I have seen HWiNFO report a 90C hotspot after a game, with a 70C core temperature. But I also have bad flow in my loop; it isn't as high as it should be.


----------



## 6u4rdi4n

That's a bit on the high side. How bad is your flow? I believe it has to be really bad to impact temperatures by more than a couple of degrees.

I feel like the hotspot should be closer to the other temp, but I could be wrong.
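For what it's worth, a common rule of thumb (not an official AMD figure) is that on water an edge-to-hotspot delta much beyond about 20C points at mount pressure or paste problems; as a trivial sketch, with the threshold being an assumption:

```python
# Assumed rule of thumb: flag water-cooled mounts whose hotspot runs
# more than ~20 C above the edge (GPU) temperature.
DELTA_WARN_C = 20.0

def mount_looks_ok(edge_c: float, hotspot_c: float) -> bool:
    return (hotspot_c - edge_c) <= DELTA_WARN_C

print(mount_looks_ok(45, 55))   # 10 C delta -> True
print(mount_looks_ok(70, 90))   # 20 C delta -> True, but borderline
print(mount_looks_ok(60, 100))  # 40 C delta -> False, remount/repaste
```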


----------



## newls1

Something is seriously F'ed up in your loop. You need to find the clog/flow issue and start over. Running a 400+ watt load through my 6900xt, my hotspot temps hover in the 55C range with the core hitting 45-48C... 400W load temps! Typical game temps are 31-33C core / 40-43C hotspot... This is with an Alphacool block


----------



## 99belle99

My hotspot reaches over 100c while running Timespy. And the standard temperature is only around 60c.


----------



## newls1

99belle99 said:


> My hotspot reaches over 100c while running Timespy. And the standard temperature is only around 60c.


And.......... I don't understand the point of this reply? That is a normal temp for an air-cooled card that's either maxed-out OC'd or has a terrible fan curve set. When I was air-cooled, I'd see 107/108C; anything more than that and I was guaranteed a CTD.


----------



## ZealotKi11er

My watercooled card hit 55C/75C in a 450W Time Spy run.


----------



## 99belle99

newls1 said:


> and.......... dont understand the point of this reply? That is a normal temp for a aircooled card either max'd OC'd or terrible fan curve set. When I was aircooled, id see 107/108c then anything more then that I was guaranteed a CTD.


I wasn't replying to your post, I just thought I'd mention that my card reaches over 100C in a short benchmark. I was really asking why that happens, but you seem to have answered it in your post: it's because the card is on air.


----------



## MickeyPadge

cfranko said:


> Which waterblock on Aliexpress would be compatible with my 6900 XT Phantom D? I can only buy from aliexpress and there are waterblocks only for nitro+, gigabyte, merc319, gaming x and reference. Does the Phantom D have the same PCB as the ones I listed?


You can use the corsair block, if you don't mind removing the metal shroud and shaving off some acrylic to let the two (longer than reference) capacitors fit


----------



## 6u4rdi4n

Remounted my water block and got some improvements on my end at least. Topped out at 41°C/61°C after about a 15-minute stress run at ~320W. Maybe it could be better and bring GPU temp and hotspot closer together, but at least the hotspot isn't moving up into the high 90s anymore.


----------



## cfranko

I scored 16 809 in Time Spy


AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com





How is my score? Air cooled.


----------



## cfranko

MickeyPadge said:


> You can use the corsair block, if you don't mind removing the metal shroud and shaving off some acrylic to let the two (longer than reference) capacitors fit


How exactly do I shave off the acrylic? With a knife or something? I can't understand the idea of modding waterblocks. Wouldn't it leak if you cut part of a waterblock?


----------



## MickeyPadge

cfranko said:


> How do I exactly shave off acrylic? With a knife or something? I can’t understand the idea in modding waterblocks. Wouldn’t it leak if you cut a part of a waterblock?


The acrylic that needs to be removed does not affect the block itself, unless you make a mistake. You would need a Dremel tool, or patience with a rounded hand file.










You can see the red circles? The metal block is shaped to let the two capacitors fit for the reference card. The ASRock capacitors are longer, so the plastic/acrylic must be carefully shaved away to match the metal part; then it will fit the ASRock card no problem. It must be done with care, as one part is close to the gasket, but it is not so hard.

The black metal shroud is removed via four screws located in each corner of the block. Also, you will only have access to the block ports on one side; the PCB blocks the other side. Easy to overcome.

I will try to take pics of my card, but I am waiting for a chipset block and other items before I redo my loop.

It is the only solution so far for non-reference ASRock cards; I think maybe only a few have done this mod. It does work well, as all the important screws that attach the block to the card line up perfectly


----------



## Nighthog

newls1 said:


> Something is seriously F'ed up in your loop. You need to find the clog/flow issue and start over. Running 400+Watt load through my 6900xt, my hotspot temps hover in the 55c range with core hitting 45-48c... 400w load temps! Typical game temps are 31-33c core/40-43c hotspot... This is with an alphacool block


Yeah, it hasn't performed as well as I'd wanted from the start, but I put that down to my old Vega 64's water block having bad contact. Since the new one has the same issue in the loop, it's more likely the whole loop is the problem rather than the GPU blocks.
Starting from scratch seems advised, but what a hassle to redo everything once more. I've redone the loop several times because of the bad flow/temperatures, but nothing fixed it in the end; cleaning blockages out of the parts only brought marginal improvements.
Might need two pumps rather than only one DDC.


----------



## No-one-no1

Does anyone here know if there has been progress on RBE side of things?
Haven't really been paying attention for the last month or so. Couldn't find any info on igor's site directly.
(mostly interested in raising the max voltage above 1175mV without having to use a separate SPI/I2C controller)


----------



## drnilly007

newls1 said:


> Something is seriously F'ed up in your loop. You need to find the clog/flow issue and start over. Running 400+Watt load through my 6900xt, my hotspot temps hover in the 55c range with core hitting 45-48c... 400w load temps! Typical game temps are 31-33c core/40-43c hotspot... This is with an alphacool block


Yeah, those are good temps, that's about what I get with mine. Also Alphacool block.

Also, with MPT I tried to increase SoC amperage and couldn't get the memory over the 2138-2140 mark; it seems stuck at that max speed. However, if you use the card to mine ETH, I actually lowered it to 53A (2A less than stock) and got about 0.65MH/s more with the same performance, and can still watch Twitch full screen with it in the background. That's the only SoC setting I tested, but maybe it can go lower for more gains and still hit the same memory clocks.

Might be good for gaming too, still gotta test. Maybe it also gives more headroom to core overclocks. Just a thought.


----------



## xR00Tx

99belle99 said:


> My hotspot reaches over 100c while running Timespy. And the standard temperature is only around 60c.



I had a similar problem when I got my Sapphire 6900 XT. While the normal temperature was around 70C, the hotspot was reaching 100-110C.

I decided to replace its thermal paste, and after that the difference between the two temperatures was no more than 20-25C at most.


----------



## No-one-no1

cfranko said:


> Which waterblock on Aliexpress would be compatible with my 6900 XT Phantom D? I can only buy from aliexpress and there are waterblocks only for nitro+, gigabyte, merc319, gaming x and reference. Does the Phantom D have the same PCB as the ones I listed?


Edit: prototype testing


----------



## newls1

interesting


----------



## HyperC

No-one-no1 said:


> Does anyone here know if there has been progress on RBE side of things?
> Haven't really been paying attention for the last month or so. Couldn't find any info on igor's site directly.
> (mostly interested in raising the max voltage above 1175mV without having to use a separate SPI/I2C controller)


Nothing from what I have read, still waiting for the right time to mod mine.


----------



## MickeyPadge

No-one-no1 said:


> Edit: prototype testing
> View attachment 2484316


That looks so cool, no more info?


----------



## TexasAVI03

Optional Update 3/29/21:


https://www.amd.com/en/support/kb/release-notes/rn-rad-win-21-3-2



Article Number
RN-RAD-WIN-21-3-2
*Radeon Software Adrenalin 2020 Edition 21.3.2 Highlights*
*Support For*

Outriders™
Evil Genius 2: World Domination™
DIRT 5™ Update 4.0
DirectX® Raytracing

*Fixed Issues*

Radeon RX 6700 series graphics products may report incorrect core clock values in performance tuning and/or the system graphics hardware information tab.
Shadows may exhibit corruption in Insurgency: Sandstorm™ on Radeon RX 6000 series graphics products.
On a limited number of displays, the preferred desktop resolution in Windows® may change when the display is power cycled.
The start and cancel buttons in the performance tuning stress test may disappear when Radeon Software is resized to be small.
A black screen may occur on a limited number of displays when Radeon FreeSync is enabled and playing a game set to use borderless fullscreen on Radeon RX 6000 series graphics products.
*Known Issues*

Connecting two displays with large differences in resolution/refresh rates may cause flickering on Radeon RX Vega series graphics products.
Enabling vsync in Rocket League and setting the game to use borderless fullscreen may cause stuttering or ghosting.
Radeon RX 400 and 500 series graphics products may experience a TDR during extended periods of video playback.
Brightness flickering may intermittently occur in some games or applications when Radeon™ FreeSync is enabled, and the game is set to use borderless fullscreen.
Enhanced Sync may cause a black screen to occur when enabled on some games and system configurations. Any users who may be experiencing issues with Enhanced Sync enabled should disable it as a temporary workaround.


----------



## Nighthog

A combo of clogged CPU block & bad pump was the main cause.
My DDC 1T Plus no longer wants to start up after I added some EK Cryofuel to my loop.
Changed to my regular DDC 1T, and its performance is better than the 1T Plus even though it should be the worse one.

My Liquid Devil wanted that extra flow, but it still isn't as good as I had hoped. On a quick retest, Time Spy would reach *58C* for the core and *83C* for the hotspot. Much lower temperatures, but I was expecting another ~10C off the final result; I'll just leave it as is.
I removed some radiators to see if I had too many restrictions, but it was mostly the pump's fault in the end.

Will need to invest in new pumps, as these are no longer reliable it seems. Even the DDC 1T had issues with starting, which is why I got the DDC 1T Plus to begin with (but the 1T Plus has been more unreliable overall in the end).


----------



## drnilly007

Time Spy 19784. I lowered the SoC with MPT to 50A and it gave me ~200 more points, and it seems stable; haven't done any long testing. Mem clk stayed at 2138-2140, so from this one test there's no memory clock loss either.
I see other people raising the SoC with MPT, which may not be necessary.
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## demitrisln

Hey all, I've got a 6900XT, 5800X, 32GB RAM, playing on a 49" 1440p 120Hz monitor (I only play WoW on one side of the screen). My frames in World of Warcraft on setting 7 go from around 140 standing still to below 30 for a second in a 20-person raid. Is there anything I'm missing? I would think that with this combo I'd be able to hit 100FPS regardless of what's on screen in a game like World of Warcraft.


----------



## No-one-no1

MickeyPadge said:


> That looks so cool, no more info?


Just a test piece to see how our cooling solution works on GPUs. (and want to run a 3000MHz setting for "daily driver")
Will update once we have some performance data. Waiting on some other parts now to be able to slap the test bench back together.


----------



## HeLeX63

I have the 6900XT with an Alphacool waterblock, currently with Thermal Grizzly Kryonaut paste on it. What is the best paste for these GPUs to bring the junction and GPU temps as close as possible? Currently sitting at about 20C above GPU, so 40C GPU = 60C junction.


----------



## drnilly007

HeLeX63 said:


> I have the 6900XT with Alphacool Waterblock. I currently have thermal grizzly Kryonaut paste on it. What is the best paste to work with these GPU's to bring the junction and GPU temps as close as possible. Currently sitting at about 20C above GPU. So 40C GPU = 60C Junction.


What are you doing to get the temps up?


----------



## HeLeX63

drnilly007 said:


> What are you doing to get the temps up?


Just a general gaming load. It's the Red Devil 6900XT OC Limited Edition. On air, temps hovered around 70C with the junction at 90 to 95. Since the WB: 40 to 45C, with the junction from 60 to around 68C.


----------



## Bart

I'm not sure you _can_ bring the junction temp closer to the GPU core temp. Especially if you're pushing higher clocks.


----------



## ZealotKi11er

The higher the power, the higher the delta between normal temp and junction temp. I've also noticed that if you have a high-leakage part you will have a higher delta.


----------



## HeLeX63

ZealotKi11er said:


> The higher the power the higher the delta between normal temp and junction temp. Also noticed if you have a high leakage part you will have higher delta.


What do you mean by a high-leakage part?


----------



## 6u4rdi4n

HeLeX63 said:


> I have the 6900XT with Alphacool Waterblock. I currently have thermal grizzly Kryonaut paste on it. What is the best paste to work with these GPU's to bring the junction and GPU temps as close as possible. Currently sitting at about 20C above GPU. So 40C GPU = 60C Junction.


~20°C between GPU and GPU junction seems about normal according to my own findings, but I'm sure you could bring them closer together. These chips can be very picky about mounting so things like paste distribution and how tight the screws are can affect a lot.

And like ZealotKi11er mentioned, power affects the delta. After a long gaming session yesterday, the max temperature of my 6900 XT with Bykski wb and Kryonaut was 51°C GPU and 74°C junction. While idling, the delta is 4-5°C.


----------



## kairi_zeroblade

Can anybody here give advice regarding HZD? On my RTX 3070 I only get "Optimizing Shaders" when I update drivers. Yesterday I slapped in my RX 6800 XT, did a clean install of Win10 and all drivers (using the latest WHQL drivers for the GPU), ran HZD from GOG, let it update the shader cache, and played a bit to see how the new GPU was doing. I exited and downloaded the rest of my games from Steam, then decided to play HZD again, and it started optimizing all over again.

TLDR: does anybody know if this is a known issue with the RX 6000 series in HZD, or is there a fix? I tried running the game with admin privileges and still no dice; every time I run the game it has to optimize over and over again, which I never experienced before.


----------



## 6u4rdi4n

Some morning benchmarks for those interested in that sort of stuff. Increased the "stock" power limit to 350W with MPT and then dragged the slider to +15% in Radeon Software, giving the card a theoretical power limit of 402.5W. Glancing at GPU-Z during the benchmarks, GPU Chip Power Draw was mostly around 350W (sustained for more than a split second), with lows around 325W and highs around 370W. According to GPU-Z, the card peaked at 406W.

I don't think I got the best chip out there, but so far a good day!
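For anyone who wants to sanity-check the 402.5W figure above: the Radeon Software slider just scales whatever base limit MPT wrote, so the arithmetic is trivial. A throwaway Python sketch (the function name is my own, not from any tool):

```python
# Illustrative arithmetic only: how the Radeon Software power slider
# scales the base power limit written with MorePowerTool (MPT).
def effective_power_limit(mpt_base_watts: float, slider_percent: float) -> float:
    """Return the theoretical board power limit in watts."""
    return mpt_base_watts * (1 + slider_percent / 100)

print(effective_power_limit(350, 15))  # 402.5, matching the post above
```

Note this is only the firmware's target; as the GPU-Z readings show, momentary spikes can still briefly exceed it.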


----------



## ZealotKi11er

That's a pretty fast chip. You are doing 2800MHz.


----------



## 6u4rdi4n

Average varies from 2500+ to 2700+ depending on benchmark. Might be CPU bound at the lower end, so it clocks down. Port Royal reported an average of 2703MHz.


----------



## MickeyPadge

Here is a couple of pics of how I modified the corsair reference block to fit the asrock phantom 6900xt:

 

Not so complicated, but you need a steady hand and ideally a dremel style tool. Delicately shave off the acrylic, not too heavy handed, you don't want to melt it, as one is near the gasket.

Removing the black metal shroud is easy too (four screws in the corners of the block). After that, all but two of the block mounting holes line up perfectly with the card. And the two that don't? Not really relevant for block fitment anyway.


----------



## Nighthog

I'm not getting any great results with my 6900XT.

Time Spy test1: 16039 (I scored 16 039 in Time Spy)
Time Spy test2: 16177 (I scored 16177 in Time Spy)

Fire Strike test2: 32580 (I scored 32580 in Fire Strike)

Test1: +15% Power and 2650MHz Max Frequency target with 105% VRAM.
Test2: 106% VRAM

2700MHz crashed even though it ran just below a 2650MHz average on the core.
And memory would cause some kind of driver timeout at 2100+ MHz static settings, so I tried the % approach to see if it would work instead, which seemed to do it.

GPU core: 59-60C 
GPU Hotspot: 84C

Water temperature between 28-34C when benching at the moment.

I haven't changed the pump to the new one that I ordered as it is working for the moment. (DDC 1T) 
Have a EK DDC 3,2 (12V PWM) as replacement if it should fail.

EDIT: I see my memory clocks are being reported wrong... They spike on occasion to unreasonable clocks.


----------



## c0rny42

Hey guys, I'm looking to water-cool my 6900XT Red Devil, but the Alphacool block's estimated delivery may be the end of May.
I don't want to wait that long. Do you guys know which block could possibly fit, maybe with a small amount of work on the block?


----------



## Nighthog

c0rny42 said:


> Hey Guys, im looking forward to watercool my 6900XT Red Devil but the Alphacool Block estimated delivery will maybe By the end of may.
> But i dont want to wait that long. Do you guys know which Block could possible fit, maybe with small work on the Block.


I would expect the Liquid Devil has the same PCB as the Red Devil? If it does, then EK should have a block for it, as they make the block for the Liquid Devil version.
Though I'm not having a great experience with mine.


----------



## c0rny42

Nighthog said:


> I would expect the Liquid Devil has the same PCB as the Red Devil? If it does then EK should have a block for it as they make the block for the Liquid Devil version.
> Though I'm not having a great experience on it though.


Yes, the PCB should be the same, but I think it's not released yet. There's no block listed on the EK website either. Guess I have to wait for Alphacool..


----------



## newls1

c0rny42 said:


> Yes the PCB should be same but its not released yet i think. On the EK Website is no block listed 2. Guess i have to wait for alphacool..


Good, you don't want EK, trust me... wait for the Alphacool. Far better quality.


----------



## c0rny42

newls1 said:


> Good, you dont want EK, trust me... wait for the Alphacool. Far better quality


It's actually my first custom loop. I don't have any experience with EK or Alphacool. Got my radiator, D5 pump (older one, used from eBay), fittings and tubes from EK. They make a solid impression on me, but I don't know much 😅


----------



## newls1

They just rebrand the D5 pump, so you're good there, and their fittings and tubing are solid, no issues there.... it's just their blocks that are troublesome when it comes to their nickel coating.


----------



## blackzaru

newls1 said:


> they just rebrand the D5 pump, so your good there, their fittings sand tubing are solid, no issues there.... just there blocks are troublesome when it comes to their nickel coating.


Honestly, they had one horrible batch of blocks, which dates back to Maxwell (GTX 9XX) if I recall correctly. Ever since, they don't really seem to have had anything out of the ordinary (like, not an entire batch being pure **** like their infamous one). This is like people ****ting on AMD for their "hot CPUs" when, in fact, that dates back to the Bulldozer architecture. Their blocks are not the best, basically entry-level blocks like Barrow and Bykski, but they do the trick for the vast majority of people, given you properly maintain your loop. Alphacool is pretty great in quality, but the top notch truly is Watercool and their Heatkiller lineup. (Some might say Optimus PC, but given their inability to fulfill orders, their blocks might be considered vaporware at this point.)


----------



## newls1

blackzaru said:


> Honestly, they had one horrible batch of blocks, which dates back to Maxwell (GTX 9XX), if I recall well. Ever since, they don't really seems to have anything out of the ordinary (like, not an entire batch being pure **** like their infamous batch). This is like people ****ting on AMD for their "hot cpu", when, in fact, it dates back to the bulldozer architecture. Their blocks are not the best, basically entry level blocks like barrow and bykski, but they do the trick for the vast majority of people, given you properly maintain your loop. Alphacool is pretty great in quality, but, the top notch truly is watercool and their heatkiller lineup. (some might say optimuspc, but, given their inability to fulfill orders, their blocks might be considered vaporware at this point.).


You are incorrect in saying "1 batch". I have EK CPU blocks from 2011ish where the coating completely flaked off, and blocks bought 2 years ago with the exact same issue. So unless their "1 batch" covers nearly a 10-year span, please stop spewing your info. Each and every time I applied for RMA on my EK products, they ALWAYS came back saying it was my fluid and I must have had mixed metals. I always used EK fluid, and when all my receipts prove the blocks in the loop were EK and the fluid was EK, guess where my RMA went?......... Still denied.. So F*ck EK and their TERRIBLE TERRIBLE CUSTOMER SERVICE. They gave me the good old F-You too many times.


----------



## 6u4rdi4n

I've never had any issues with their customer service, but their products are becoming too expensive for what they are. If you pick up an EK block and a Barrow or Bykski block, they will probably feel the same. Quality, finish, machining marks, solutions etc. A Bykski block with backplate costs like what, half of an EK block without backplate? 

Price wise there are better alternatives. Want something good, but as cheap as possible? Buy Alphacool, Barrow or Bykski. 

Want everything? Like excellent performance, design, finishing etc. Buy Aqua Computer or Watercool.


----------



## blackzaru

newls1 said:


> You are incorrect about saying "1 batch" I have EK cpu blocks from 2011ish that the coating completely flaked off, and blocks bought 2 years ago with exact same issue. So unless their "1 batch" covers nearly a 10 year span, please stop spewing your info. Each and everytime i applied RMA for my EK products they ALWAYS came back saying it was my fluid and I must have had mixed metals. I always used EK fluid, and when all my receipts proving blocks in loop are EK, fluid was EK, guess where my RMA went too?......... Still denied.. so F*ck EK and there TERRIBLE TERRIBLE CUSTOMER SERVICE. They gave me the good old F-You to many times.


And this is called a personal experience. Don't get me wrong, EK is basically entry-level tier in the watercooling world, and you basically pay a premium because of their marketing. However, claiming all EK blocks flake because you had 2 blocks do that is exactly what I said: a false point made from personal experience. It's sad that it happened to you, but it isn't a representative picture of the company and their lineup. I wouldn't recommend EK to someone unless they need the fastest block available to them, because that's the only thing EK has going for them: an insanely fast turnaround between your order and the block showing up at your door. Otherwise, Bykski literally has blocks that perform as well but cost half the price; the problem is that you need to wait 1 to 3 months to get them from China. I personally had one bad experience with a Bykski block. Does that mean I need to assume all of their blocks are that way? No.

Basically, in terms of "pecking order" for waterblocks:

Fastest block available, Cheap quality, High Price for what it's worth: EKWB, Corsair
Cheapest block available: Bykski, Barrow or Bitspower
Good quality: Alphacool
Great Quality: Aquacomputer or Watercool
Insane quality, but some blocks have yet to be delivered 2 years later: OptimusPC
After that, no matter the company, you can have problems. For example: OptimusPC, which basically uses balls-to-the-wall methods to ensure the highest quality of their products, had a bad batch of CPU blocks and pump/reservoirs in their nickel-plated lineup, despite using the "best" way to do it. This happens; it doesn't mean it will define all of their blocks for the next 10 years.

All in all, even a "cheap" block will do the trick for almost all users, but a great-quality block can bring the temps further down. For example, I switched from a Barrow block to a Watercool block on my previous card (RTX 2000 series), and my temp under load dropped by an additional 4 degrees. However, it dropped from 48 to 44 degrees (if I recall correctly), which honestly doesn't make much difference at that point. So, unless a block is awfully designed, pretty much any block will be able to handle your card. (Unless you do some crazy **** like flashing a custom unlimited-power BIOS and pushing 600+ watts through your card. At that point, some cheap blocks won't be able to handle it, obviously.)


----------



## newls1

blackzaru said:


> And this is called a personal experience. Don't get me wrong, EK is basically entry-level tier in the watercooling world, and you basically a premium basically because of their marketing. However, claiming all EK blocks flake because you had 2 blocks doing that is exactly like I said: a false point made from personal experience. It's sad that it happened to you, but isn't a representative picture of the company and their lineup. I wouldn't recommend EK to someone, unless they need the fastest block available to them, because, that's the only thing EK has going for them: insanely fast shipping time between your order and the block showing up to your door. Otherwise, Bykski literally has blocks that perform as well, but cost half the price, the problem is that you need to wait 1 to 3 months before you get them from China. I personally had one bad experience with a Bykski block, does that mean I need to assume all of their blocks are that way? No.
> 
> Basically, in terms of "pecking order" for waterblocks:
> 
> Fastest block available, Cheap quality, High Price for what it's worth: EKWB, Corsair
> Cheapest block available: Bykski, Barrow or Bitspower
> Good quality: Alphacool
> Great Quality: Aquacomputer or Watercool
> Insane quality, but some blocks have yet to be delivered 2 years later: OptimusPC
> After that, no matter the company, you can have problems. For example: OptimusPC, which basically use balls-to the wall methods to ensure the highest quality of their products, had a bad batch of CPU blocks and pump/reservoir in their Nickle plated lineup, despite using the "best" way to do it. This happens, it doesn't mean that it will define all of their blocks for the next 10 years.
> 
> All in all, however, even a "cheap" block will do the trick for almost all users, however, a great quality block can bring the temps a lot further down. For example, I switched from a Barrow block to a Watercool block on my previous card (RTX 2000 serie), and my temp, under load, dropped by an additional 4 degrees. However, it dropped from 48 to 44 degrees (if I recall well), which, honestly, doesn't make much difference at that point. So, unless a block is awfully designed, pretty much any block will be able to handle your card. (Unless you do some crazy **** like flashing a custom unlimited power bios, and push 600+Watts through your card. At that point, some cheap blocks won't be able to handle that, obviously.)


And this is called you didn't read my thread. First, I never said 2 blocks..... I have at least 10-12 blocks with nickel flaking, and it's my fault for continuing to buy into that brand. I certainly don't need you to explain the levels of quality in brands to me. I'm quite sure I've been watercooling PCs since the 90's and am not new to this game. I'm familiar with Optimus PC and own their blocks as well, so unless you are trying to throw in a shameless plug for that company and all the others, the run-off is not needed. I'm super stoked you haven't had an EKWB block flake on you, CONGRATS, you are lucky. If I hadn't thrown out the majority of my graveyard EK blocks, I'd gladly post up the pics for everyone to see. However, if you'd like to take 5 minutes out of your day and just google "ekwb nickel flaking", you'd see plenty of pics on various forums and see that this is still a LARGE issue. Thanks for your input though.

*EDIT* I did the search for you and here is a convenient link
LMGTFY - Let Me Google That For You


----------



## ZealotKi11er

I am happy with Bykski block for both my Radeon 7 and 6900 XT.


----------



## Nighthog

I like my Alphacool parts; I only chose the wrong block for my RX Vega 64. There were three blocks from Alphacool to choose from, and I picked the worst one they had available.
It works at stock, but you can't OC on that one unless you manage a perfect pressure fit, and even then it wasn't optimal. (The issues were fixed on the other blocks I could have bought.)

There my new RX 6900XT with EK block is better but was expecting more from reading other results around here.

And I have a bad bin. My sample can't do 2100MHz MEM stable as far as I can tell.

Anyone mining Ethereum on their waterblocked cards? Getting 48-50C core and ~55C hotspot for that with stock settings, for reference. (34C water, only 480x45mm of radiator space.)
Meaning a 15C delta to core at ~215 watts.
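The delta-per-watt above can be turned into a rough core-to-water figure of merit. Just back-of-envelope Python, not a calibrated measurement (the function name is made up for illustration):

```python
# Rough figure of merit: core-to-water thermal resistance in °C per watt.
# Numbers are from the post above; useful only for comparing mounts/blocks
# on the same card, since "power" here is reported board power, not die heat.
def thermal_resistance(core_c: float, water_c: float, power_w: float) -> float:
    return (core_c - water_c) / power_w

r = thermal_resistance(core_c=49, water_c=34, power_w=215)
print(round(r, 3))  # ~0.07 °C/W core-to-water
```

A lower number after a repaste or remount means the block contact improved; the hotspot delta tracks the same way but with a larger constant.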


----------



## No-one-no1

No-one-no1 said:


> Just a test piece to see how our cooling solution works on GPUs. (and want to run a 3000MHz setting for "daily driver")
> Will update once we have some performance data. Waiting on some other parts now to be able to slap the test bench back together.


Some quick initial testing yesterday shows we're not getting the flow it's designed to run at. The pump probably isn't capable of driving GPU + CPU flow at the same time.
Did have it up at 2800MHz on a very quick Haven test, but that's nothing to write home about; only about what it did on the stock cooler, I think.
More testing though! And I will be ordering in more pump capacity!


----------



## lucker#1

c0rny42 said:


> Yes the PCB should be same but its not released yet i think. On the EK Website is no block listed 2. Guess i have to wait for alphacool..


The Alphacool block is currently in stock at alternate.de. It's only listed as RX 6800, but Alphacool lists it as 6800 and 6900 compatible. The product number that alternate lists is also the same, so it should work, I guess. I've already ordered mine; it should arrive along with the 6900XT.


https://www.alternate.de/Alphacool/Eisblock-Aurora-Acryl-GPX-A-Radeon-RX-6800-6800XT-Reference-mit-Backplate-Wasserk%FChlung/html/product/1700557


----------



## No-one-no1

Nighthog said:


> Anyone mine Etherium on their Waterblocked cards?


You can turn down the core clock to 1200MHz, and turn down the cooling to maintain about 80c junction. That will keep max mining performance, and minimize power draw. While keeping the core nice and cool.


----------



## No-one-no1

No-one-no1 said:


> Some quick initial testing yesterday shows we're not getting the flow it's designed to run at. Pump probably isn't capable of running gpu + cpu flow at the same time.
> Did have it up at 2800MHz on a very quick haven test. But that's nothing to write about. Only about what it did on the stock cooler I think.
> More testing though! And will be ordering in some more pump capacity!


Did some flow testing. Will need about 2-3x more flow, or a redesign of the cooler for lower flow, so we're operating far from optimal conditions.
Some quick testing just for the lolz anyway. Not fully tweaked by any means, but some reference. 320W with MPT and the +15% slider. Stock voltage.
*Fire Strike* graphics score: *64447*
*Port Royal*: *11062*


----------



## newls1

Just DL'd Doom Eternal, and the game is capped at 144fps. How do I remove that cap? I'd like to see what MAX fps I'm actually getting, then cap it again, as 144fps is more than enough and keeps my 6900XT @ 2760MHz with a max temp of 35C and hotspot maxed at 38C.


----------



## NDS322

Since the early 2021 drivers, my RX 6900 XT memory clock always runs at 2000 MHz. Is that normal?


----------



## ZealotKi11er

NDS322 said:


> Since early driver 2021, My RX 6900 XT Memory clock always runs 2000 MHz is that normal ?


What is your monitor resolution?


----------



## drnilly007

Nighthog said:


> I like my Alphacool parts, only choose the wrong block for my RX Vega 64. There were 3 choices and I picked the worst one they had available (3 blocks from Alphacool).
> It works stock but you can't OC on that one unless you manage a perfect pressure fit and even then it wasn't optimal. (issues were fixed on the other blocks I could have bought)
> 
> There my new RX 6900XT with EK block is better but was expecting more from reading other results around here.
> 
> And I have a bad bin. My sample can't do 2100Mhz MEM stable as far as I can tell.
> 
> Anyone mine Etherium on their Waterblocked cards? Getting 48-50C core and ~55C Hotspot for that with stock settings for a reference. (34C water) (only 480x45mm radiator space)
> Meaning a 15C Delta to core with ~215Watts.


Mine is 34-46C, but you're mining at too high a wattage. You should be shooting for 144W: memory maxed out with fast timings, core speed set to 1200 and core voltage to 900mV. That is good and stable for video and web browsing. If you create profiles in the Radeon software and tie the miner to a specific setting, it will switch automatically, as long as you also create a profile for any game you play; you just have to close the previous application or the settings won't change per application.


----------



## LtMatt

@SLAY3R8888 

Let's give some love to the air cooled 6900 XTs with all these impressive water cooled scores. 

Merc 6900 XT running Cod Black Ops 25-30% faster than a stock 3090 FE at the same settings. 

2725Mhz core clock in game, power draw below 350W. 

6900 XT Merc Overclocked | Call of Duty Black Ops Cold War - YouTube 

9:50 - 2160p Ultra - DLSS OFF


----------



## NDS322

ZealotKi11er said:


> What is your monitor resolution?


MI 3440x1440 144Hz


----------



## weleh

Hey guys,

Just ordered a Sapphire 6900XT Toxic LE from a shock drop locally (Portugal).
Probably the only in Portugal.

Going to be replacing my Suprim X 3080 that I will sell to get my money back.

Had a 6800XT Nitro+ SE and was very pleased with its performance, and ever since I got the 3080 I've been a bit let down by how much slower it is compared to my old golden-sample 6800XT.

Will post pics once I get it.


----------



## ZealotKi11er

NDS322 said:


> MI 3440x1440 144Hz


That is normal. Not a driver problem.


----------



## jimpsar

LtMatt said:


> @SLAY3R8888
> 
> Let's give some love to the air cooled 6900 XTs with all these impressive water cooled scores.
> 
> Merc 6900 XT running Cod Black Ops 25-30% faster than a stock 3090 FE at the same settings.
> 
> 2725Mhz core clock in game, power draw below 350W.
> 
> 6900 XT Merc Overclocked | Call of Duty Black Ops Cold War - YouTube
> 
> 9:50 - 2160p Ultra - DLSS OFF


Great OC mate!
Nice vid.
I have the same GPU as yours. Would you mind sharing your Wattman / MPT settings? Thank you!


----------



## LtMatt

jimpsar said:


> Great OC mate!
> Nice vid .
> Have the same gpu as yours, Could you mind sharing the wattman / mptooll settings of yours? Thank you!


Yep responded to your comment on YT.


----------



## MickeyPadge

newls1 said:


> just DL'd Doom Eternal, and game is caped at 144fps, how do i remove that cap? would like to see what MAX fps im actually getting, then cap it again as 144fps is more then enough and keeps my 6900XT @ 2760MHz with max temp of 35c and Hotspot maxed at 38c


Turn off vsync?


----------



## newls1

I did


----------



## LtMatt

newls1 said:


> i did


Check the video settings, as it is possible to run with an uncapped frame rate. I don't have the game installed at the moment, but I do recall getting something silly like 200FPS at points at 4K on a 120Hz OLED.


----------



## jimpsar

LtMatt said:


> Yep responded to your comment on YT.


Yep, thank you! They work very well atm.


----------



## cfranko

Hey guys, I have a 6900 XT Phantom D. I set the power limit to 350W and 400 EDC in MPT and also added an additional 15% power limit in Wattman, but any Maximum Core Clock above 2630 just crashes in Timespy. Is this normal considering the power limit? It sometimes reaches 400W during the tests, so it should be able to do 2700 MHz, but it is absolutely hard stuck at 2630. Is this a silicon lottery thing or what?

Here is my result at 350W MPT + 15% Wattman:








I scored 17 093 in Time Spy


AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## 99belle99

cfranko said:


> Hey guys, I have a 6900 XT Phantom D and I set the power limit to 350W and 400 EDC on MPT and also added an additional 15% power limit in wattman. But any Maximum Core Clock above 2630 would just crash while doing timespy. Is this normal considering the power limit? It sometimes reaches 400W while doing the tests so I mean it should be able to do 2700 MHz but it is absolutely hard stuck at 2630. Is this a silicon lottery thing or what?
> 
> Here is my result at 350W MPT + 15% Wattman:
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 17 093 in Time Spy
> 
> 
> AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


I have a reference model and I seem to have the same problem as you. I haven't tested it much, but if I set 2750 as max it crashes Timespy. That would probably be 2650-2700MHz in real terms.


----------



## HeLeX63

cfranko said:


> Hey guys, I have a 6900 XT Phantom D and I set the power limit to 350W and 400 EDC on MPT and also added an additional 15% power limit in wattman. But any Maximum Core Clock above 2630 would just crash while doing timespy. Is this normal considering the power limit? It sometimes reaches 400W while doing the tests so I mean it should be able to do 2700 MHz but it is absolutely hard stuck at 2630. Is this a silicon lottery thing or what?
> 
> Here is my result at 350W MPT + 15% Wattman:
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 17 093 in Time Spy
> 
> 
> AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


I can't even run Timespy above ~2600MHz, yet I am game stable at 2720MHz sustained. So a good 120MHz deficit just for Timespy; it always seems to crash even with a mild OC. This is a watercooled 6900XT RED DEVIL.


----------



## ZealotKi11er

HeLeX63 said:


> I cant even run timespy at like above 2600MHz. Yet I am game stable at 2720Mhz sustained. So a good 120mhz deficit just for timespy. Always seems to crash even with an OC. This is a watercooled 6900XT RED DEVIL.


Yeah TimeSpy is a killer for 6800/6900.


----------



## LtMatt

HeLeX63 said:


> I cant even run timespy at like above 2600MHz. Yet I am game stable at 2720Mhz sustained. So a good 120mhz deficit just for timespy. Always seems to crash even with an OC. This is a watercooled 6900XT RED DEVIL.


Yes, I see similar, and I think Timespy Extreme is even slightly worse.


----------



## cfranko

HeLeX63 said:


> I cant even run timespy at like above 2600MHz. Yet I am game stable at 2720Mhz sustained. So a good 120mhz deficit just for timespy. Always seems to crash even with an OC. This is a watercooled 6900XT RED DEVIL.


But I want to use a config that is 100% stable, and I think Timespy is great for testing stability, if I am not wrong. Game stability can change from game to game, and I don't want my Call of Duty to crash mid-game because of an overclock. That would be really frustrating.


----------



## HeLeX63

cfranko said:


> But I want to use a config that is 100% stable. And I think timespy is great to test stability If I am not wrong. Because game stability can change from game to game. I don’t want my call of duty to crash mid game because of an overclock. It would be really frustrating.


True that. But when you leave a good 3 or 5 fps on the table for something that could be pushed more, that's also a waste. I haven't had the card crash at all, and I don't run ray tracing in my games; at 2700 to 2730MHz sustained it's perfectly fine. I don't want to lower my core clock just so a stressful benchmark can pass, when it doesn't crash in 99% of other applications.


----------



## Ipak

NDS322 said:


> MI 3440x1440 144Hz


If you change to a 120Hz refresh rate, the memory will downclock at idle.


----------



## newls1

For my watercooled 6900XT, what I've found running TS is that hotspot temp is key. If I can keep hotspot around the 55c mark, I can complete the bench in the 27xx range (most of the bench stays in the 26xx range), and this is with MPT set to 375W. As soon as hotspot hits 60c, it CTDs every time. On air I couldn't run TS any faster than the 24xx range, with hotspot around 105c and fans @ 100%. Watercooling obviously makes a huge difference, but that benchmark is so brutal on the GPU that at 26/27xx MHz and hotspot @ 55c... there's a lot of energy going on inside that core!


----------



## Spawnyspawn

I've been a 6900XT reference owner for 2 days now, but I feel something isn't right.
Here's a snapshot from a 3DMark Timespy benchmark I did. The GPU clock frequency line looks very strange to me (I was expecting more of a straight line) and my graphics score is lower than what I was expecting.









Any ideas what's going wrong (if anything is going wrong) and what could be the cause? I have to add I'm running it with a 760W PSU (Fractal Ion 760p platinum), so it might not get enough power.


----------



## ZealotKi11er

You have to check the temp. Looks like it's throttling hard.


----------



## Spawnyspawn

ZealotKi11er said:


> You have to check the temp. Looks like it's throttling hard.


Thanks. I'll see if I can get the temps down, see what that does, and report back.


----------



## 6u4rdi4n

Spawnyspawn said:


> Thanks. I'll see if I can get the temps down, see what that does, and report back.


Just set the fans to 100% and do a run to see how it behaves then.


----------



## Spawnyspawn

6u4rdi4n said:


> Just set the fans to 100% and do a run to see how it behaves then.


I did just that. Case fans and GPU fans at 100% (idle temps went from 61 to 30 degrees). During the benchmark highest sensor temps never went past 72 degrees. Average temps maxed at 60.
Here's the results with higher scores, but still the same erratic lines:


----------



## ZealotKi11er

Thanks for posting the graphs. Is there anything wrong with the CPU? Looks like its under 2GHz.


----------



## Spawnyspawn

ZealotKi11er said:


> Thanks for posting the graphs. Is there anything wrong with the CPU? Looks like its under 2GHz.


Not that I'm aware of. I noticed that too; it only goes up to 4.8k in the last bit, but a friend told me he had the same thing after the latest chipset update (?).


----------



## CS9K

Welp, my hand slipped last week and I accidentally an RX 6900 XT reference model. Boy howdy, I did NOT appreciate how bad of a bin (for water) that my RX 6800 XT was.

New card feels stable at 1065mV on the slider in non-RT benches from 2400-2700MHz on the Max Frequency slider. I'm still working on RT stability in Port Royal, though I'm mostly limited by my keeping the hotspot under 90C while the card is on air still. I'm eager to get my water block onto the card, but waiting on a few more fittings since I had to get a new pump; old one died.

I read through the entire thread over on the OC UK forums, and will re-read through this thread again over the course of this week to fill out the list further. Here is the data that I've gathered so far. I'm sure @LtMatt will find it interesting:

What I've found so far for "out of the box" Max Frequency values. Most of these are from over on the Overclockers UK forum:
2577 - (What ZealotKi11er theorizes is the max possible out-of-the-box)
2545 - Friend of mine, reference model, 2 weeks old
2534 - ZealotKi11er
2524 - Me, reference model, 2 days old
2504 - Stag
2504 - Chispy
2499 - helis4life
2499 - LtMatt
2489 - st4rky
2469 - Daemaz
2440 - someone on OCUK


----------



## 6u4rdi4n

That's weird. The CPU score is maybe a bit on the low side, but nothing too bad. The GPU frequency curve almost matches the GPU load curve, so that's something... There's definitely something weird going on if you look at the GPU load. It should be more or less at 100% load except during the CPU test at the end.

Have you tried cleaning out the drivers and software with DDU and reinstalling?


----------



## Spawnyspawn

6u4rdi4n said:


> That's weird. CPU score is maybe a bit on the low side but nothing too bad. The GPU frequency curve almost match the GPU load curve, so that's something... There's definitely something weird going on if you look at the GPU load. It should be more or less at 100% load except for during the CPU test at the end.
> 
> Have you tried cleaning out the drivers and software with DDU and reinstalling?


I'll give that a go again when I get home. Maybe try the optional drivers, see if that works.
I'll do a Cinebench run too, to see how the CPU loads with that. If it goes to 100% load, there are no problems there, and the low loads on the CPU must be related to Timespy instead.
I have the same issue with dropping GPU loads in Heaven benchmarks too, btw, so I'm pretty sure it is not a Timespy-specific issue.


----------



## LtMatt

CS9K said:


> Welp, my hand slipped last week and I accidentally an RX 6900 XT reference model. Boy howdy, I did NOT appreciate how bad of a bin (for water) that my RX 6800 XT was.
> 
> New card feels stable at 1065mV on the slider in non-RT benches from 2400-2700MHz on the Max Frequency slider. I'm still working on RT stability in Port Royal, though I'm mostly limited by my keeping the hotspot under 90C while the card is on air still. I'm eager to get my water block onto the card, but waiting on a few more fittings since I had to get a new pump; old one died.
> 
> I read through the entire thread over on the OC UK forums, and will re-read through this thread again over the course of this week to fill out the list further. Here is the data that I've gathered so far. I'm sure @LtMatt will find it interesting:
> 
> What I've found so far for "out of the box" Max Frequency values. Most of these are from over on the Overclockers UK forum:
> 2577 - (What ZealotKi11er theorizes is the max possible out-of-the-box)
> 2545 - Friend of mine, reference model, 2 weeks old
> 2534 -ZealotKi11er
> 2524 - Me, reference model, 2 days old
> 2504 - Stag
> 2504 - Chispy
> 2499 - helis4life
> 2499 - LtMatt
> 2489 - st4rky
> 2469 - Daemaz
> 2440 - LtMatt someone on ocuk


Good work collecting all that information my friend. 

Funnily enough, of the three 6900 XTs I've tried, the one with by far the lowest core clock (2479Mhz), has been the best overclocker. It also appears to love an undervolt, I am able to run 2350Mhz at only 0.950-0.975v. I am fairly sure my current 6900 XT would be one of the best underwater, but alas it will live its life under the Merc air cooler.


----------



## 6u4rdi4n

Spawnyspawn said:


> I'll give that a go again when I get home. Maybe try the optional drivers, see if that works.
> I'll run a cinebench run too, to see how the CPU loads with that. If it goes to 100% load no problems there, the low loads on the cpu must be related to TimeSpy instead.
> I have the same issue with dropping GPU loads in Heaven benchmarks too btw. So I'm pretty sure that is not a TimeSpy specific issue.


Give Time Spy Extreme a try.


----------



## ZealotKi11er

LtMatt said:


> Good work collecting all that information my friend.
> 
> Funnily enough, of the three 6900 XTs I've tried, the one with by far the lowest core clock (2479Mhz), has been the best overclocker. It also appears to love an undervolt, I am able to run 2350Mhz at only 0.950-0.975v. I am fairly sure my current 6900 XT would be one of the best underwater, but alas it will live its life under the Merc air cooler.


The ones with higher clk out of the box will run best under water.


----------



## CS9K

LtMatt said:


> Funnily enough, of the three 6900 XTs I've tried, the one with by far the lowest core clock (2479Mhz), has been the best overclocker.





ZealotKi11er said:


> The ones with higher clk out of the box will run best under water.


Directly conflicting information whoo~

I have the rare opportunity to potentially choose one from three reference RX 6900 XT's. I have one, my friend has one, and the same friend just received a second that's destined for a different friend. I offered him a not small amount of money to go through the effort of helping me bin all three, and if he had the best of three, I'd swap with him to put the best one in my PC under water.

I didn't think the "default Max Frequency" number was an absolute, more of just an indicator. It's neat seeing the numbers all there in one place.

I plan to test different frequency points (2400, 2500, 2600, and maybe 2700 w/MPT's help if his case can keep 350W cool like mine can) and see how much of an undervolt each card will take in Time Spy, Port Royal, and Heaven; basically, recreating the best curve each card can do, in hopes of extrapolating each card's curve to find the card with the best curve up at the top-end.

I'll let yall know how it goes this weekend :3
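
If it helps anyone doing the same binning exercise, here's a rough sketch of the extrapolation idea in Python. All the numbers are made up for illustration; the approach is just to fit a simple V(f) curve through each card's measured stable points and compare the extrapolated voltage needed at the top end:

```python
# Hypothetical sketch: given three stable (clock MHz, voltage mV) points per card,
# fit an exact quadratic V(f) through them and extrapolate to a target clock.
def fit_quadratic(points):
    (x0, y0), (x1, y1), (x2, y2) = points

    def v(f):
        # Lagrange interpolation through the three measured points
        return (y0 * (f - x1) * (f - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (f - x0) * (f - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (f - x0) * (f - x1) / ((x2 - x0) * (x2 - x1)))

    return v

# Made-up example card: lowest stable voltage found at 2400/2500/2600 MHz
card_a = [(2400, 950), (2500, 1000), (2600, 1075)]
v_a = fit_quadratic(card_a)
print(round(v_a(2700)))  # extrapolated voltage needed at 2700 MHz -> 1175
```

The card whose curve demands the least voltage at the target clock would, in theory, be the best bin for water, though extrapolating past your highest tested point is always a guess.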


----------



## ZealotKi11er

CS9K said:


> Directly conflicting information whoo~
> 
> I have the rare opportunity to potentially choose one from three reference RX 6900 XT's. I have one, my friend has one, and the same friend just received a second that's destined for a different friend. I offered him a not small amount of money to go through the effort of helping me bin all three, and if he had the best of three, I'd swap with him to put the best one in my PC under water.
> 
> I didn't think the "default Max Frequency" number was an absolute, more of just an indicator. It's neat seeing the numbers all there in one place.
> 
> I plan to test different frequency points (2400, 2500, 2600, and maybe 2700 w/MPT's help if his case can keep 350W cool like mine can) and see how much of an undervolt each card will take in Time Spy, Port Royal, and Heaven; basically, recreating the best curve each card can do, in hopes of extrapolating each card's curve to find the card with the best curve up at the top-end.
> 
> I'll let yall know how it goes this weekend :3


The higher clk is based on AMD data. They believe the part can run a higher frequency at a certain voltage. Not all parts will behave the same, but there is a trend. Also, it depends on silicon quality.


----------



## Spawnyspawn

6u4rdi4n said:


> Give Time Spy Extreme a try.


Don't own the full version. Yet.



ZealotKi11er said:


> Thanks for posting the graphs. Is there anything wrong with the CPU? Looks like its under 2GHz.





6u4rdi4n said:


> That's weird. CPU score is maybe a bit on the low side but nothing too bad. The GPU frequency curve almost match the GPU load curve, so that's something... There's definitely something weird going on if you look at the GPU load. It should be more or less at 100% load except for during the CPU test at the end.
> 
> Have you tried cleaning out the drivers and software with DDU and reinstalling?


Apparently, my PC switches to energy-saving power settings after a reboot. After turning that off, I got this with all fans @ 100%, power boost max and mem clock boost max:









Seems much more reasonable. Max GPU clocks of about 2300MHz or something. Still not entirely stable.
Then I went back to stock settings and tried to UV to 1100. The stress test crashed after a few seconds. Did a smaller UV of 1125, crashed again. Back to stock settings, tried +100 GPU clock; it didn't crash, but it went to 0% GPU usage for a while during a benchmark.
So my deduction would be: power issues. The PSU is probably insufficient for anything but stock settings. Thoughts?


----------



## ZealotKi11er

Yeah at 23xx, that score is about right.


----------



## CS9K

ZealotKi11er said:


> The higher clk is based on AMD data. They believe that the part can run higher freq with a certain voltage. Not all parts will behave the same but there is a trend. Also, it depends on silicon quality,.


Aye, it's a fun thing that they gave us a starting point. I'm excited to see what each of the three cards will do, and how those results stack up to the default clock speed.


----------



## Spawnyspawn

ZealotKi11er said:


> Yeah at 23xx, that score is about right.


Think I need to replace the PSU to get any form of OC actually stable?


----------



## 6u4rdi4n

Spawnyspawn said:


> Think I need to replace the PSU to get any form of OC actually stable?


What PSU do you have again? My guess would be to get temperatures lower. How's the hot spot/junction temperature looking?


----------



## Spawnyspawn

6u4rdi4n said:


> What PSU do you have again? My guess would be to get temperatures lower. How's the hot spot/junction temperature looking?


I have a Fractal ion 760p platinum. Hot spot temps with fans on full blast hover between 70 and 80 degrees.


----------



## cfranko

I scored 17 236 in Time Spy


AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com





Is this score good on air?


----------



## Spawnyspawn

cfranko said:


> I scored 17 236 in Time Spy
> 
> 
> AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Is this score good on air?


About 1500 points more than I have managed so far with a reference model. So I'd say so.


----------



## psychomantium

Thought I'd share my results here.

I'm running a 6900XT with the EK waterblock in my loop with a 5950X (running stock right now, no PBO or CO, except RAM).
Used MorePowerTool to increase it to 350W.
Timespy - a bit lower clocks - ran using my daily/current settings posted below.
Firestrike Extreme - highest I could push with 1175mV.

Temps while gaming usually hover around 63c now instead of 53c after increasing the power. Average temp on the Timespy stress test was 61c.

Any tips? Increase voltage?










Also here's my zentimings.


----------



## newls1

psychomantium said:


> Thought I'd share my results here.
> 
> I'm running a 6900XT with the EK waterblock on in my loop with a 5950X (running stock right now, no PBO or CO, except RAM).
> used Morepowertool to increase it to 350W.
> TimeSpy  - a bit lower clocks. - Ran using my daily/current settings posted below.
> Firestrike Extreme - highest I could push with 1175mv
> 
> Temps while gaming usually hovers around 63c now instead of 53c after increasing the power. Average temp on TimeSpy stress test was 61c.
> 
> Any tips? Increase voltage?
> 
> View attachment 2485839
> 
> 
> Also here's my zentimings.
> View attachment 2485838


Your temps are in the 60s with a FCWB and that minor OC, or is that the "hotspot" temp you are giving us? If that is your average core temp, then that's awful... you need to fix that before proceeding any further.


----------



## 6u4rdi4n

Spawnyspawn said:


> I have a Fractal ion 760p platinum. Hot spot temps with fans on full blast hover between 70 and 80 degrees.


PSU might be a bit small for my taste, but it should definitely be enough. Isn't 70°C like the magic number to stay below on these chips? I honestly can't remember right now.


----------



## psychomantium

newls1 said:


> your temps are in the 60s with a FCWB and that minor OC, or is that "hotspot" temp you are giving us?? If that is your average core temp, then thats aweful....... you need to fix that before proceeding any furthere


Yeah, full cover water block and backplate from EKWB. 350W with morepowertool.
This is after multiple hours of gaming, not after running one TimeSpy or anything.

Fans aren't really running that fast and airflow is not amazing. 

So, what should I fix?


----------



## 6u4rdi4n

psychomantium said:


> Yeah, full cover water block and backplate from EKWB. 350W with morepowertool.
> This is after multiple hours of gaming, not after running one TimeSpy or anything.
> 
> Fans aren't really running that fast and airflow is not amazing.
> 
> So, what should i fix?


Which temperature were you referring to earlier? GPU temp or GPU Temp hotspot/Junction? How much radiator space do you have? These chips are really picky about mounting, so making sure the thermal paste is properly spread and mounting pressure is tight and even is something to look into.


----------



## psychomantium

6u4rdi4n said:


> Which temperature were you referring to earlier? GPU temp or GPU Temp hotspot/Junction? How much radiator space do you have? These chips are really picky about mounting, so making sure the thermal paste is properly spread and mounting pressure is tight and even is something to look into.


GPU temp after about 2 hours of load at 350W. Running Timespy results in "Average temperature 53 °C". EK PE 360 and an EK SE 240. Maybe I'll reapply some other thermal paste and/or swap the block. What temps are you guys getting, if anyone has a similar setup?


----------



## newls1

psychomantium said:


> Yeah, full cover water block and backplate from EKWB. 350W with morepowertool.
> This is after multiple hours of gaming, not after running one TimeSpy or anything.
> 
> Fans aren't really running that fast and airflow is not amazing.
> 
> So, what should i fix?


I can game for HOURS @ 2760-2800MHz and my GPU temp is no hotter than 38c, and hotspot never exceeds 46/47c... this is with MPT settings @ 385W. Your flow, rad surface area, block mount, or whatever else is terrible and needs attention.


----------



## weleh

New baby arrived.


----------



## 99belle99

Cé mhéad? (How much)?


----------



## weleh

1900€

Went from 3070 to 6800XT to 3080, and now this: my last card and the one I wanted.


----------



## psychomantium

newls1 said:


> i can game for HOURS @ 2760-2800MHz and my gpu temp is no hotter then 38c and hotspot never exceeds 46/47c.... This is with MPT settings @ 385w. Your flow, rad surface area, block mount, or whatever else is terrible and needs attention


Alright, I'll have a look at some point. 
What's your setup?


----------



## newls1

psychomantium said:


> Alright, I'll have a look at some point.
> What's your setup?


My GPU is isolated in its own loop using a 420mm x 60mm EK rad in push/pull, a D5, and an Alphacool GPU block for the ASUS TUF (11955)... that's it.


----------



## Spawnyspawn

What are good numbers to go for on air? I don't feel like running the fans full blast all the time, but even with a small OC, either my drivers crash or the temps go to 80+.
That's using +15% power limit, 2150 mem clock, 2574 core clock and 1100mV.
Reading your numbers, my OC capabilities seem rather lackluster because of the high temps and instability.
I read something about the lack of thermal pads between the components on the PCB and the backplate. Would that be a useful fix to decrease temps a bit?


----------



## Bistun

weleh said:


> New baby arrived.
> 
> View attachment 2485853


Where did you buy it from ?


----------



## weleh




----------



## Nighthog

What a nice accessory the Sapphire Toxic arrived with!

Powercolor sends only some dinky water colours with the Liquid Devil.


----------



## CS9K

Here's an updated list, @LtMatt:
2577 - (What ZealotKi11er theorizes is the max possible out-of-the-box)
2545 - Friend of mine, reference model, 2 weeks old
2534 -ZealotKi11er
2524 - Me, reference model, 2 days old
2509 - SLAY3R8888 - 2600mhz @ 1175, no higher
2504 - Stag
2504 - Chispy
2499 - helis4life
2499 - LtMatt
2489 - st4rky
2470 - MyShadow
2469 - Daemaz
2469 - The EX1
2440 - Kratosatlante

Semi-related, here's one for yall: I've got my RX 6900 XT reference card under water and in my system now. Temps are nice and cool, but...

At 2750, 2800, and 2850 on the max clock slider (with a 380W power limit via MPT), according to the latest stable HWINFO64 (7.02), regardless of where the power slider goes, HWINFO shows the card maxing out at 1115mV. That's enough voltage for 2750 and 2800 in Heaven, but not enough for 2850. Is HWINFO wrong, or is this some artifact of MPT that I'm unaware of? I've only changed the card wattage level and core amperage level in MPT. 21.3.2 drivers, fresh install w/DDU in safe mode. 2800MHz and everything below is working fine with 1065mV on the voltage slider; a pretty sweet undervolt if you ask me.

It turns out HWINFO64 7.02-4430 isn't reading core voltage correctly for some reason. The Adrenalin control panel says I'm at 1175mV, HWINFO says 1115mV. I'll believe Adrenalin on this one.


----------



## Spawnyspawn

CS9K said:


> Here's an updated list, @LtMatt:
> 2577 - (What ZealotKi11er theorizes is the max possible out-of-the-box)
> 2545 - Friend of mine, reference model, 2 weeks old
> 2534 -ZealotKi11er
> 2524 - Me, reference model, 2 days old
> 2509 - SLAY3R8888 - 2600mhz @ 1175, no higher
> 2504 - Stag
> 2504 - Chispy
> 2499 - helis4life
> 2499 - LtMatt
> 2489 - st4rky
> 2470 - MyShadow
> 2469 - Daemaz
> 2469 - The EX1
> 2440 - Kratosatlante


Is the "out of the box" speed the number Adrenalin uses as stock speed?

Using the +15% power boost, I can't seem to go below 1095mV at the stock core clock (2504MHz). If I use the Adrenalin auto OC for the core clock (2574MHz) I need to use 1100mV or it crashes during Timespy.
I see others using much lower undervolts with much higher core clocks. What determines how low you can undervolt at a certain clock speed? I haven't figured that one out yet.


----------



## weleh

So first impressions about the Toxic...

First of all, the card looks like it was opened already; the screws are scuffed and the card has light use marks.
Probably a review unit that gets refurbished when sent back to Sapphire and then resold as new.

The box itself is scratched as well, so it looks like it has traveled a lot.

Second of all, the fan on the GPU hits the shroud. I have no idea how this is even possible. If I push the fan inward it stays good for a bit, but after some minutes it starts rubbing on the shroud again and makes grinding noises against the fin design of the shroud.

Temp wise, it's very good, probably comparable to a dedicated loop. Core wise, it's not as strong as my Nitro+ SE 6800XT; so far I've been able to stabilize at 2680MHz effective clock speed. Haven't had the time to test further. Memory can be maxed out with fast timings too.

There is some strange behaviour though: without MPT, with the PL maxed out, the card doesn't exceed 330W or so, and even with MPT at 400W the card doesn't seem to want to go that far.


Anyway, I've contacted Sapphire about the lack of QC; let's see their response, because I cba to send my card back to China to be fixed.


----------



## kratosatlante

CS9K said:


> Here's an updated list, @LtMatt:
> 2577 - (What ZealotKi11er theorizes is the max possible out-of-the-box)
> 2545 - Friend of mine, reference model, 2 weeks old
> 2534 -ZealotKi11er
> 2524 - Me, reference model, 2 days old
> 2509 - SLAY3R8888 - 2600mhz @ 1175, no higher
> 2504 - Stag
> 2504 - Chispy
> 2499 - helis4life
> 2499 - LtMatt
> 2489 - st4rky
> 2470 - MyShadow
> 2469 - Daemaz
> 2469 - The EX1
> 2440 - Kratosatlante
> 
> Semi-related, here's one for yall: I've got my RX 6900 XT reference card under water and in my system now. Temps are nice and cool, but...
> 
> At 2750, 2800, and 2850 on the max clock slider (with 380W power limit via MPT), according to the latest stable HWINFO64, 7.02, regardless of where the power slider goes, HWINFO shows the card maxing out at 1115mV. That's enough voltage for 2750 and 2800 in Heaven, but not enough for 2850. Is HWINFO wrong or is this some artifact of MPT that I'm unaware of. I've only changed the card wattage level and core amperage level in MPT. 21.3.2 drivers, did a fresh install of them w/DDU in safe mode. 2800MHz and everything below is working fine with 1065mV on the voltage slider; a pretty sweet undervolt if you ask me.
> 
> It turns out HWINFO64 7.02-4430 isn't reading core voltage correctly for some reason. The Adrenalin control panel says I'm at 1175mV, HWINFO says 1115mV. I'll believe Adrenalin on this one.


Drivers 21.3.2 at default: when I click manual it gives me 2469MHz.













Without touching MPT the limit is 2700MHz, but it crashes running Unigine Heaven; 2675 runs well. With this MPT config it runs 2725 while gaming; 2750 gave me artifacts, 2725 crashes Heaven, and 2715 runs well. The core in games runs 2666 on average, power max 349W.










some numbers



Spoiler: sort4k
































Min voltage 650mV for mining.


Spoiler: 650mv mining


----------



## CS9K

TL;DR: LtMatt was curious about the maximum clock speed that the Adrenalin control panel shows when you first install the card and enable overclocking. The theory is, that the number that shows up is a rough gauge of how good/bad of a bin you have. So, I compiled the numbers.

The numbers below are _that_ number:

2577 - (What ZealotKi11er theorizes is the max possible out-of-the-box)
2545 - Friend of mine, ref.
2534 - ZealotKi11er
2524 - CS9K (me) - 2775MHz @ 1175mV 400W
2519 - majestynl- 2840MHz @ 1175mV 400W+15%
2509 - SLAY3R8888 - 2600MHz @ 1175mV
2504 - Spawnyspawn
2504 - Stag
2504 - Chispy
2499 - helis4life
2499 - LtMatt
2489 - st4rky
2484 - Friend of a Friend's ref. 
2470 - MyShadow
2469 - kratosatlante - 2700MHz @ 1175mV
2469 - Daemaz
2469 - The EX1
2440 - Someone, somewhere
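
For anyone curious where their card lands, a quick way to crunch the list above (a throwaway helper, with the values copied from the list and the 2577 theoretical ceiling excluded):

```python
# "Out of the box" max clocks reported so far (MHz), from the list above.
clocks = [2545, 2534, 2524, 2519, 2509, 2504, 2504, 2504,
          2499, 2499, 2489, 2484, 2470, 2469, 2469, 2469, 2440]

def percentile_rank(value, sample):
    """Percentage of reported cards strictly below the given value."""
    below = sum(1 for c in sample if c < value)
    return round(100 * below / len(sample))

print(round(sum(clocks) / len(clocks), 1))  # mean of reported cards -> 2495.9
print(percentile_rank(2524, clocks))        # e.g. a 2524 card -> 82
```

So a 2524 card sits above roughly four fifths of the cards reported so far, for whatever the out-of-the-box number is actually worth.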


----------



## majestynl

CS9K said:


> Here's another updated list:
> 2577 - (What ZealotKi11er theorizes is the max possible out-of-the-box)
> 2545 - Friend of mine, reference model, 2 weeks old
> 2534 -ZealotKi11er
> 2524 - CS9K (me) - 2750-2775, still feeling it out, @ 1175mV 400W
> 2509 - SLAY3R8888 - 2600MHz @ 1175, no higher
> 2504 - Spawnyspawn
> 2504 - Stag
> 2504 - Chispy
> 2499 - helis4life
> 2499 - LtMatt
> 2489 - st4rky
> 2484 - Friend of a Friend's ref.
> 2470 - MyShadow
> 2469 - kratosatlante - 2700MHz @ 1175
> 2469 - Daemaz
> 2469 - The EX1
> 2440 - Kratosatlante


Which MHz are you referring to? Actual gaming/benching clocks or the Radeon setting?


----------



## CS9K

majestynl said:


> What's the MHz your are referring to ? Actual Gaming / Benching clocks or Radeon Setting ?


I updated my original post. Stock maximum clock speeds "out of the box"


----------



## weleh

Is that effective clocks measured on HWinfo?

My toxic seems to handle 2680 Mhz effective easily.


----------



## ZealotKi11er

kratosatlante said:


> drivers 21.3.2 in default when click manual gave me 2469 mhz,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> no touching mpt limit its 2700mhz , but runing ungine heaven cash, run well 2675, with this config mpt run gaming 2725, 2750 gave me artifacts, 2725 cash heaven, 2715 run well, core in games run 2666 average power max 349w
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> some numbers
> 
> 
> 
> Spoiler: sort4k
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> min voltaje 650mv for mining
> 
> 
> Spoiler: 650mv mining


Impressive that you set 650mV for mining. I only have 750mV. Maybe I should give it a try.


----------



## kratosatlante

ZealotKi11er said:


> Impressive that u set 650mV for mining. I only have 750mV. Maybe I should give it a try.


With 825mV it gave me this. Almost the same temperatures, but 10c less on memory; 134W, peak 140W.







the tip is from :
Team red miner:

Navi21 (6800/6800XT/6900XT)
===========================
TRM v0.8.1 added basic support for Big Navi cards (Navi21). This section will
be expanded as we do more work for this gpu generation. For now, the suggested
tuning process is quite simple:

1) Big Navis should run in A-mode (it's chosen by default). While the B-mode is
available, the value of the 128MB cache is degraded with a larger memory
footprint.

2) Windows is preferred over linux for now since you can choose the "fast timings"
under win which adds 1.0-1.5 MH/s. Testing was made on Adrenalin 20.12.1.
The AMD recommended driver (20.11.3) had bugs for handling clocks on our 6800.

3) To be able to lower voltage properly, you need to modify the powerplay table
with e.g. MorePowerTool. Install GPU-Z and MorePowerTool. Save the bios using
GPU-Z. Open MorePowerTool, select your Big Navi gpu in the dropdown list. If
necessary, load the bios to get a baseline configuration. Lower the "Power
and Voltage" -> "Minimum Voltage GFX (mV)" limit to the voltage you wish to
run. Lower than 625mV is rarely stable, although we've been able to run at 612mV.

NOTE FOR 6800 USERS:
6800 users might also want to increase the SOC TDC limit in the same tab. It
was 30A for our test 6800 which caused throttling. Increase it to 32 or 33A.
Before doing so, you should verify in HWiNFO64 that you're really being
throttled though by checking the "GPU SOC TDC Limit" row. If it hits 100%,
you need to increase the power limit.

4) Clock suggestions as a starting point for further tuning (Windows with "Fast
Timing" selected; Linux can choose similar but will see lower hashrates):

6800: 1235 MHz core, 2124 MHz mem, 625 mV, SOC TDC +10% -> 62.6-62.7 MH/s.
6900XT: 1200 MHz core, 2138 MHz mem, 650 mV, 63.2 MH/s

Others using lower voltages get less power draw, around 100W.

Does anyone have a trick to increase the VRAM frequency? If I set 2068+, performance drops when gaming and mining; at 2128 I get flashes, and above 2128 the Adrenalin driver crashes.

First I tried MorePowerTool with the SoC TDC limit at 60A (55A stock): gaming was stable, but mining lost performance. At a lower VRAM frequency of 2050MHz I get around 58MH/s. Has anyone tried a SoC TDC below 50A?

HWiNFO reports the same frequency: 2700MHz set, 2662 effective.
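For comparing these mining configs, hashrate per watt is the number that matters. A minimal sketch in Python; the 63.2 MH/s and ~95-100 W figures are the ones reported in this thread (GPU power as read by the tools, not wall draw):

```python
# Hashrate-per-watt efficiency for the figures quoted above.
# Treat the inputs as thread-reported values, not our own measurements.

def efficiency(mh_s: float, watts: float) -> float:
    """Return mining efficiency in MH/s per watt."""
    return mh_s / watts

# 6900 XT at 650 mV: ~63.2 MH/s at roughly 95-100 W reported GPU power
for watts in (95, 100):
    print(f"{watts} W -> {efficiency(63.2, watts):.3f} MH/s per W")
```

Handy when weighing, say, a higher VRAM clock that adds 1 MH/s against an extra 10 W of draw.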


----------



## ZealotKi11er

kratosatlante said:


> With 825mV it gave me this: almost the same core temperatures, but the memory runs 10°C cooler; 134W, with a 140W peak.
> 
> [...]


I am getting 95W with 650mV.
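The savings from these undervolts roughly track the quadratic voltage dependence of dynamic power. A back-of-the-envelope sketch; the 134 W / 825 mV figure is from the posts above, and the quadratic model is a rule of thumb, not AMD's actual power model:

```python
# Rough dynamic-power scaling: P ~ C * V^2 * f. Static and VRAM power
# don't scale this way, so this under-predicts real draw at the low
# end; it's only a sanity check on the reported numbers.

def scaled_power(p_ref: float, v_ref: float, v_new: float,
                 f_ratio: float = 1.0) -> float:
    """Estimate power after a voltage (and optional clock) change."""
    return p_ref * (v_new / v_ref) ** 2 * f_ratio

# From 134 W at 825 mV down to 650 mV at the same clocks:
est = scaled_power(134.0, 0.825, 0.650)
print(f"estimated: {est:.0f} W")
```

The estimate lands around 83 W versus the ~95-100 W people report, the gap being the voltage-independent (static plus memory) portion of the card's draw.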


----------



## newls1

Playing with OC settings again and trying to max out my card. Still haven't maxed it, but I got this TS score; whatcha think? The leaderboard has changed quite a bit since my last benching session about 3 weeks ago, when I got 9th place. Seems a ton of people have been hitting the leaderboards! I took the 29th spot, quite happy about it!



















*pretty much a steady 2735MHz run. Temps got to about 41c core 55c hotspot*


----------



## newls1

is there a way to monitor memory temps on our 6900's?


----------



## TexasAVI03

newls1 said:


> is there a way to monitor memory temps on our 6900's?


Does that work for you?


----------



## majestynl

CS9K said:


> I updated my original post. Stock maximum clock speeds "out of the box"


No I mean the second MHz. Like yours stating "2750-2775"


----------



## CS9K

majestynl said:


> No I mean the second MHz. Like yours stating "2750-2775"


OH, I'm sorry D: 2775MHz is what the max core clock slider is set at currently. It seems stable there so far in benches and games.


----------



## geriatricpollywog

I have a NIB reference 6800 and am thinking about building an HTPC/mining SFF PC around it with watercooling. Has anybody flashed a 6900XT BIOS to the 6800? If not, how much performance (%) can you get with PowerPlay tables?


----------



## majestynl

CS9K said:


> OH, I'm sorry D: 2775MHz is what the max core clock slider is set at currently. It seems stable there so far in benches and games.


Aha, okay.. no worries. 

You can write mine down:

2519MHz out of the box.

Slider set at 2840MHz! Game- and bench-stable with 400W +15%.


----------



## 6u4rdi4n

Default max slider for my 6900 XT is 2539 MHz. 

Best Time Spy run so far had the frequency ranging from 2664 to 2754. From a quick look at the run, I'd say the average is about 2730 over the graphics tests. 3DMark does give its own average on the website, but that just doesn't look right to me. Does it include the more or less idle frequency from the CPU test?

I've set the power limit of my card to 400W with the help of MPT, with the slider at +15%. Seems to be enough for now, with it mostly hovering around 330W. It does look like I need to take another look at the water block mount though.
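On whether the site's average is dragged down by the near-idle CPU-test portion: averaging a frequency log with and without an idle cutoff shows the effect. A toy sketch; the sample values below are invented for illustration, not real telemetry:

```python
# Toy illustration: clock samples (MHz), some taken during the graphics
# tests and some near-idle during the CPU test. Values are made up.

samples = [2730, 2740, 2725, 2735, 500, 520, 2728, 2731]

naive_avg = sum(samples) / len(samples)
loaded = [s for s in samples if s > 2000]   # crude "under load" filter
loaded_avg = sum(loaded) / len(loaded)

print(f"naive: {naive_avg:.0f} MHz, load-only: {loaded_avg:.0f} MHz")
```

The naive average comes out hundreds of MHz below what the GPU actually held under load, which would explain a suspiciously low number on the results page.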


----------



## newls1

TexasAVI03 said:


> Does that work for you?
> 
> View attachment 2486022


No sir, my ASUS TUF doesn't have that reading, sadly...


----------



## newls1

Starkinsaur said:


> Edit: To be clear. By default, there are two memory timing options available in the Radeon Software - "Default" and "Fast Timing".
> 
> So it looks like we can unlock some more memory timings "Fast Timing Level2" by setting Memory Timing Control to 2 in MorePowerTool.
> Mine crashed Unigine Superposition 4k at 2000mhz with this setting.
> What else is hiding in there...?
> 
> View attachment 2471100
> 
> 
> View attachment 2471101


whatever happened with this? Anyone further test this?


----------



## c0rny42

lucker#1 said:


> The alphacool block is currently in stock at alternate.de - It's only listed as RX 6800, but alphacool lists it as 6800 and 6900 compatible. The product number that alternate lists is also the same, so it should work i guess. I've got mine already ordered - should arrive along with the 6900XT.
> 
> 
> https://www.alternate.de/Alphacool/Eisblock-Aurora-Acryl-GPX-A-Radeon-RX-6800-6800XT-Reference-mit-Backplate-Wasserk%FChlung/html/product/1700557


But there is one problem: the 6900XT Red Devil has 3x 8-pins. The block you ordered would probably fit, but is there space for the third 8-pin cable, or do you think it will still fit?


----------



## iTTT

I got an AMD RX 6900 XT with a Barrow waterblock and tested it last night.
Ambient 26°C.


----------



## kratosatlante

newls1 said:


> whatever happened with this? Anyone further test this?


Trying Fast Timing Level 2 at 2000MHz, I get a lot of artifacts.


----------



## weleh

iTTT said:


> i got one AMD RX 6900 XT with Barrow waterblock and tested lastnight.
> Ambient 26c


These temps are worse than my Toxic at 400W.
Keep in mind only the core on the Toxic is watercooled, the rest is air cooled...


----------



## newls1

yes, those temps are TERRIBLE


----------



## weleh

2680 effective (hwinfo) at 379W.


----------



## 6u4rdi4n

New personal best. Think I need to get myself a new CPU


----------



## weleh

What drivers are you guys on?

I'm on the WHQL ones, not the latest. Are the latest better?


----------



## 6u4rdi4n

I'm using the latest WHQL release.


----------



## Emmett

The MSI 6900XT Gaming X Trio is a custom board; will it ever get a water block? I only see them for the reference design.


----------



## rotasmmm

Hello friends, I got a Red Devil 6900XT. Judging by my first test result, I think it's a good chip.

I'm running the Red Devil 6900XT in a Cooler Master NR200P SFF ITX case.


----------



## HyperC

So I finally got my new monitor, and now I'm having issues. I removed the old 1080p, swapped my 1440p 165Hz over, and put the new Acer 270Hz in its place. Does AMD not like having two fast monitors on at the same time? I made sure both are detected, set the main display, and set the other to extended. After a reboot, the main display went over to display 2, and the best part is my mouse cursor is stuck on the other display, so at the moment I'm forced to run with only one on.


----------



## CS9K

CS9K said:


> Semi-related, here's one for yall: I've got my RX 6900 XT reference card under water and in my system now. Temps are nice and cool, but...
> 
> At 2750, 2800, and 2850 on the max clock slider (with 380W power limit via MPT), according to the latest stable HWINFO64, 7.02, regardless of where the power slider goes, HWINFO shows the card maxing out at 1115mV. That's enough voltage for 2750 and 2800 in Heaven, but not enough for 2850. Is HWINFO wrong or is this some artifact of MPT that I'm unaware of. I've only changed the card wattage level and core amperage level in MPT. 21.3.2 drivers, did a fresh install of them w/DDU in safe mode. 2800MHz and everything below is working fine with 1065mV on the voltage slider; a pretty sweet undervolt if you ask me.
> 
> It turns out HWINFO64 7.02-4430 isn't reading core voltage correctly for some reason. The Adrenalin control panel says I'm at 1175mV, HWINFO says 1115mV. I'll believe Adrenalin on this one.


So, the mystery continues! My card still refuses to give the core more than 1115mV core voltage in demanding games and benchmarks. 

MorePowerTool is set to 365W Power Limit GPU, 458A TDC Limit GFX, and 65A TDC Limit SoC.
Adrenalin is set to +15% power.

No combination of clock speed settings, voltage settings, and power settings in the Adrenalin control panel could get GPU Core Voltage (VDDCR_GFX) in HWINFO64 to read higher than 1115mV. I tried limiting voltages in MPT as well, to no avail. GPU-Z and Adrenalin both showed 1175mV during all of this.

I submitted a bug report to HWINFO64, and Martin pointed out that the values displayed in Adrenalin and GPU-Z are akin to a CPU's VID reading, where the reading coming out of the card that is displayed as GPU Core Voltage (VDDCR_GFX) is an actual reading from the card itself. 

TL;DR: It seems that I'm hitting some internal/firmware/driver current limit that we can not change with MPT. Trying maximum clock speeds from 2600MHz up to 2850MHz, the peak GPU Core Voltage (VDDCR_GFX) never went higher than 1120mV under benchmark load.

I can't _really_ complain, the card is stable with the max core clock slider set to 2775MHz, though if the core _does_ really have 50mV more to go, I would LOVE to be able to get at that, but everything I've tried thus far has met with little success. For the record, I spent the 2nd half of 2020 with an RX 5600 XT doing bios modification for gaming, so this isn't my first rodeo. That said, if I'm just being a bonehead and missing a setting, it wouldn't be the first, nor the last time...
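For anyone wanting to quantify the same behaviour on their own card, the gap between the requested (VID-style) voltage and the measured VDDCR_GFX is easy to express. A small sketch using the 1175 mV / 1115 mV values from this post:

```python
# Compare the requested voltage (Adrenalin/GPU-Z, VID-style) against
# the measured VDDCR_GFX under load (HWiNFO). Values from the post.

set_mv = 1175       # slider / reported VID
measured_mv = 1115  # VDDCR_GFX peak under benchmark load

droop_mv = set_mv - measured_mv
droop_pct = 100 * droop_mv / set_mv
print(f"droop: {droop_mv} mV ({droop_pct:.1f}% of requested)")
```

That works out to 60 mV, or about 5% droop under load, which is a useful single number when comparing cards in the thread.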


----------



## weleh

That's normal. There are more reports of that Vcore behaviour on other 6900XTs.

The 6900XT seems to have a much droopier LLC than the 6800XT, which can remain at 1.15V during max loads.


----------



## CS9K

weleh said:


> That's normal. There are more reports of that Vcore behaviour on other 6900XTs.
> 
> The 6900XT seems to have a much droopier LLC than the 6800XT, which can remain at 1.15V during max loads.


I suspected it was something to do with the firmware's LLC setting. I haven't tried to make the card pull more than 400W, but it'll roll right up _to_ 400W in certain benchmarks... but only at 1115mV core voltage. 

It's interesting, but a little frustrating, whatever feature that it is, that's limiting voltage. Ah well. No complaints still :3


----------



## weleh

rotasmmm said:


> Hello friends, I got a Red Devil 6900XT. Judging by my first test result, I think it's a good chip.
> 
> 
> View attachment 2486140
> 
> I'm running the Red Devil 6900XT in a Cooler Master NR200P SFF ITX case.
> 
> View attachment 2486142


No idea where you get this score but I beat your score by almost 1000 points with 2700 core...


----------



## weleh

CS9K said:


> I suspected it was something to do with the firmware's LLC setting. I haven't tried to make the card pull more than 400W, but it'll roll right up _to_ 400W in certain benchmarks... but only at 1115mV core voltage.
> 
> It's interesting, but a little frustrating, whatever feature that it is, that's limiting voltage. Ah well. No complaints still :3


My Toxic behaves exactly like that.

Can't pull more than 400W unless I run OCCT; then it will do whatever it can.
On the other hand, my 6800XT could do it easily due to a less droopy Vcore.


----------



## kratosatlante

weleh said:


> No idea where you get this score but I beat your score by almost 1000 points with 2700 core...


You have the 6900XT under water; post your score and temps. I get this with the ASRock Phantom stock cooler, 3x 8-pin connectors.


----------



## weleh

He posted a 3000MHz core screenshot, hence my comment.
I don't even care about Superposition, but here it goes.

This is 330W or so, with 2600-2700 core limits.


----------



## kratosatlante

CS9K said:


> So, the mystery continues! My card still refuses to give the core more than 1115mV core voltage in demanding games and benchmarks.
> 
> MorePowerTool is set to 365W Power Limit GPU, 458A TDC Limit GFX, and 65A TDC Limit SoC.
> Adrenalin is set to +15% power.
> 
> No combination of clock speed settings, voltage settings, and power settings in the Adrenalin control panel could get GPU Core Voltage (VDDCR_GFX) in HWINFO64 to read higher than 1115mV. I tried limiting voltages in MPT as well, to no avail. GPU-Z and Adrenalin both showed 1175mV during all of this.
> 
> I submitted a bug report to HWINFO64, and Martin pointed out that the values displayed in Adrenalin and GPU-Z are akin to a CPU's VID reading, where the reading coming out of the card that is displayed as GPU Core Voltage (VDDCR_GFX) is an actual reading from the card itself.
> 
> TL;DR: It seems that I'm hitting some internal/firmware/driver current limit that we can not change with MPT. Trying maximum clock speeds from 2600MHz up to 2850MHz, the peak GPU Core Voltage (VDDCR_GFX) never went higher than 1120mV under benchmark load.
> 
> I can't _really_ complain, the card is stable with the max core clock slider set to 2775MHz, though if the core _does_ really have 50mV more to go, I would LOVE to be able to get at that, but everything I've tried thus far has met with little success. For the record, I spent the 2nd half of 2020 with an RX 5600 XT doing bios modification for gaming, so this isn't my first rodeo. That said, if I'm just being a bonehead and missing a setting, it wouldn't be the first, nor the last time...


Mine touches 1.117-1.118V with the core at 2630-2650. It's hard to pass 2660 on the core; that's the frequency limit on my GFX.


----------



## ZealotKi11er

kratosatlante said:


> Mine touches 1.117-1.118V with the core at 2630-2650. It's hard to pass 2660 on the core; that's the frequency limit on my GFX.


You can pass 2660 but not with MPT. You have to use Radeon Settings.


----------



## HeLeX63

What Superposition scores is everyone getting?

Here is mine.


----------



## psychomantium

Was wondering, what thermal paste are you guys using under your blocks?


----------






## HeLeX63

psychomantium said:


> Was wondering, what thermal paste are you guys using under your blocks?


Thermal Grizzly Kryonaut (12.5 W/mK)


----------



## cfranko

My 6900 XT Phantom D memory junction is at 84-86°C while mining on air. Can other people share their memory temps while mining? GDDR6 shouldn't be at 84 degrees, I think. My GPU fan speed is 2000 RPM.


----------



## kratosatlante

cfranko said:


> My 6900 XT Phantom D memory junction is at 84-86°C while mining on air. Can other people share their memory temps while mining? GDDR6 shouldn't be at 84 degrees, I think. My GPU fan speed is 2000 RPM.


Phantom at stock VRAM, stock timings, undervolted to 825mV.

Undervolted to 650mV, VRAM fast timings at 2050MHz; I can only run fast timings up to 2068 without losing performance. What are your temps?


----------



## cfranko

kratosatlante said:


> Phantom at stock VRAM, stock timings, undervolted to 825mV.
> 
> Undervolted to 650mV, VRAM fast timings at 2050MHz; I can only run fast timings up to 2068 without losing performance. What are your temps?












This is currently what is happening on my card.


----------



## kratosatlante

cfranko said:


> View attachment 2486239
> 
> 
> This is currently what is happening on my card.


Run the cooler at 60%+; at 80°C there's no harm. And try editing the SoC TDC with MorePowerTool: reduce it to 50A and test.

For me, increasing the TDC changed my VRAM stability. At stock TDC, 2068MHz was the max with fast timings (2080 with stock timings); at 63A TDC, 2072 with fast timings, though in gaming 2100 with stock timings is stable. 70A behaves the same as 55A; it needs more testing. You could also try reducing the maximum SoC voltage: capped at 1050mV it gave me 5% more performance in RE2 Remake but lost 10% in Shadow of the Tomb Raider; 1075mV runs well, but only gives me 2-3°C lower hotspot temps.


----------



## CS9K

psychomantium said:


> Was wondering, what thermal paste are you guys using under your blocks?


Gelid GC-Extreme

I have had thinner pastes "pump-out" over time on bare-die applications of mine in the past. I use thicker pastes for bare-die applications now. Gelid GC-Extreme is my current jam, but other really-thick pastes like Thermal Grizzly Hydronaut and EK Ectotherm are quite resistant to pump-out as well.


----------



## weleh

Fixed the Toxic's lack of QC.

Not gonna divulge how but it's fine now, no more fans hitting the shroud.


----------



## coelacanth

CS9K said:


> TL;DR: LtMatt was curious about the maximum clock speed that the Adrenalin control panel shows when you first install the card and enable overclocking. The theory is, that the number that shows up is a rough gauge of how good/bad of a bin you have. So, I compiled the numbers.
> 
> The numbers below are _that_ number:
> 
> 2577 - (What ZealotKi11er theorizes is the max possible out-of-the-box)
> 2545 - Friend of mine, ref.
> 2534 - ZealotKi11er
> 2524 - CS9K (me) - 2775MHz @ 1175mV 400W
> 2519 - majestynl- 2840MHz @ 1175mV 400W+15%
> 2509 - SLAY3R8888 - 2600MHz @ 1175mV
> 2504 - Spawnyspawn
> 2504 - Stag
> 2504 - Chispy
> 2499 - helis4life
> 2499 - LtMatt
> 2489 - st4rky
> 2484 - Friend of a Friend's ref.
> 2470 - MyShadow
> 2469 - kratosatlante - 2700MHz @ 1175mV
> 2469 - Daemaz
> 2469 - The EX1
> 2440 - Someone, somewhere


XFX Speedster Merc 319 Ultra RX-69XTACUD9 - 2509
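With coelacanth's number added, the list is big enough for quick summary stats. A sketch in Python, using the values exactly as posted above (note the 2577 entry is ZealotKi11er's theorized ceiling rather than an observed card, so you may want to drop it):

```python
# Out-of-the-box Max Frequency values collected in this thread so far,
# in MHz (copied from the list above; 2577 is a theorized ceiling).
from statistics import mean, median

clocks = [2577, 2545, 2534, 2524, 2519, 2509, 2509, 2504, 2504, 2504,
          2499, 2499, 2489, 2484, 2470, 2469, 2469, 2469, 2440]

print(f"n={len(clocks)}  min={min(clocks)}  max={max(clocks)}  "
      f"median={median(clocks)}  mean={mean(clocks):.0f}")
```

So the median reported card shows about 2504 out of the box, putting the 2440 and 2577 entries well toward the tails.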


----------



## kazukun

*PowerColor Quietly Outs Radeon RX 6900 XT Red Devil Ultimate*


PowerColor quietly launched the Radeon RX 6900 XT Red Devil Ultimate graphics card. Not to be confused with the RX 6900 XT Red Devil Limited Edition, which is essentially an RX 6900 XT Red Devil with a few more accessories in the box; the new Red Devil Ultimate ships with higher...


www.techpowerup.com


----------



## ZealotKi11er

GG. If you have a 6900 XT and want to be at the top of the leaderboard, you have already lost.


----------



## Nighthog

With those unlocked drivers floating around, maybe we can see some benefits for the other cards, unless they are locked to only work on the "new!" SKU.


----------



## ZealotKi11er

Nighthog said:


> With those unlocked drivers floating maybe we can see some benefits for the other cards unless they are locked to only work on the "new!" sku.


What unlocked drivers? If this new SKU has higher limits like 1.2V, it will not affect the 6900XT because it's a different device ID.


----------



## 6u4rdi4n

PowerColor Red Devil Ultimate Radeon RX 6900 XT Review: Speed Demon


The PowerColor Red Devil Radeon RX 6900 XT uses specially binned GPUs and unlocks more power, to boost clocks further and increase performance.




hothardware.com





Review of it. Seems good, but my 6900 XT already holds ~2700MHz in Time Spy.


----------



## cfranko

I feel scammed after seeing this new 6900 XT 😒


----------



## weleh

6u4rdi4n said:


> PowerColor Red Devil Ultimate Radeon RX 6900 XT Review: Speed Demon
> 
> 
> The PowerColor Red Devil Radeon RX 6900 XT used specially binned GPUs and unlocks more power, to boost clocks further and increase performance.
> 
> 
> 
> 
> hothardware.com
> 
> 
> 
> 
> 
> Review of it. Seems good, but my 6900 XT already holds ~2700MHz in Time Spy.


Same; most samples out there already do these kinds of clocks, so no idea what's binned about this card.

AMD needs to give us access to the unlocked drivers.


----------



## chispy

MickeyPadge said:


> Here is a couple of pics of how I modified the corsair reference block to fit the asrock phantom 6900xt:
> 
> 
> 
> Not so complicated, but you need a steady hand and ideally a dremel style tool. Delicately shave off the acrylic, not too heavy handed, you don't want to melt it, as one is near the gasket.
> 
> Removing the black metal shroud is easy too (four screws in the corners of the block). After that, all but two of the block mounting holes line up perfectly with the card. And the two that don't? Not really relevant for block fitment anyway.


I'm sorry I haven't gotten back to you on this project; I was very busy with important life stuff. Now I have the free time and will do this mod today. I've had the Corsair H2O block for a while, but no free time to mod it and install it. 

I will use a Dremel for this mod and take it easy. I will get back to you today after I finish modding and installing the water block, with feedback on the cooling and clocks on my ASRock Phantom Gaming 6900XT. Thank you for the help, pictures and instructions; I appreciate you sharing this mod.

Kind Regards: Angelo


----------



## ZealotKi11er

weleh said:


> Same, most samples out there already do this kind of clocks, no idea what's binned about this card.
> 
> AMD needs to give us access to the unlocked drivers.



Yeah, before it was random: you got a good or a bad 6900 XT. Now these are guaranteed to be in the top 5-10% of all 6900 XTs.


----------



## OrionBG

Hey guys, 
Have you seen this video?
This 6900XT is built for overclocking...

A 4000MHz limit on the GPU? 1.2V?? Is it time for a BIOS cross-flash?


----------



## MickeyPadge

chispy said:


> I'm sorry i have not get back to you with this project as i was very busy with important life stuff. Now i have the free time and will do this mod today. I already have the Corsair h2o block for a while but no free time to mod it and install it.
> 
> I will use a dremel for this mod and take it easy. I will get back to you today after i finish moding and installing the water block with feedback on the cooling and clocks on my Asrock Phantom Gaming 6900xt . Thank you for the help , pictures and instructions , appreciate it you sharing this mod.
> 
> Kind Regards: Angelo


Cool, look forward to seeing your results!


----------



## Zogge

CS9K said:


> Welp, my hand slipped last week and I accidentally an RX 6900 XT reference model. Boy howdy, I did NOT appreciate how bad of a bin (for water) that my RX 6800 XT was.
> 
> New card feels stable at 1065mV on the slider in non-RT benches from 2400-2700MHz on the Max Frequency slider. I'm still working on RT stability in Port Royal, though I'm mostly limited by my keeping the hotspot under 90C while the card is on air still. I'm eager to get my water block onto the card, but waiting on a few more fittings since I had to get a new pump; old one died.
> 
> I read through the entire thread over on the OC UK forums, and will re-read through this thread again over the course of this week to fill out the list further. Here is the data that I've gathered so far. I'm sure @LtMatt will find it interesting:
> 
> What I've found so far for "out of the box" Max Frequency values. Most of these are from over on the Overclockers UK forum:
> 2577 - (What ZealotKi11er theorizes is the max possible out-of-the-box)
> 2545 - Friend of mine, reference model, 2 weeks old
> 2534 -ZealotKi11er
> 2524 - Me, reference model, 2 days old
> 2504 - Stag
> 2504 - Chispy
> 2499 - helis4life
> 2499 - LtMatt
> 2489 - st4rky
> 2469 - Daemaz
> 2440 - LtMatt someone on ocuk


2684 out of the box here with a 6900XT Merc Black


----------



## weleh

2684 max clock out of the box? That's insanely high.


----------



## HeLeX63

OrionBG said:


> Hey guys,
> Have you seen this video?
> This 6900XT is built for overclocking...
> 
> 4000MHz limit on the GPU? 1.2V ?? is it time for a BIOS cross flash?


Will that work though? At least for the Red Devil owners? Technically an extra 25mV would only give you like 20 to 40MHz more.. is it even worth it?


----------



## ZealotKi11er

weleh said:


> 2684 max clock out of the box? That's insanely high.



That is impossible even for the new XTXH. The max for a 6900 XT would be 2660 out of the box.


----------



## OrionBG

HeLeX63 said:


> Will that work though ?? At least for the Red Devil Owners ? Technically an extra 25mV, would only give you like 20 to 40Mhz higher.. even worth it ?


The Red Devil has 2 BIOSes, so I guess we'll find out sooner or later 
I ordered my water block from Alphacool at the beginning of February and I'm still waiting... Last time they said the 19th of April, but I'm suspicious...


----------



## No-one-no1

OrionBG said:


> Hey guys,
> Have you seen this video?
> This 6900XT is built for overclocking...
> 
> 4000MHz limit on the GPU? 1.2V ?? is it time for a BIOS cross flash?


This is extremely interesting! Hope it helps the BIOS guys figure out how to raise the limits on other cards, and how to trim the voltages! Having two different voltages, 1175 and 1200mV, should make it soooo much easier to find the right things to mod.


----------



## skline00

OrionBG, 
I was originally going with Alphacool for the waterblock on my RX6800, but after waiting so long I ordered the EK block and backplate from Performance PC. 
Very satisfied.


----------



## chispy

MickeyPadge said:


> Cool, look forward to seeing your results!


Hey there, friend. How did you remove the metal shroud around the block? I'm looking for the screws but I have no idea which ones need to be removed, or where they're located.


----------



## chispy

@MickeyPadge - No worries, I found them. It's done!

I will post my results later , thank you for the help and guidance , appreciate it.


----------



## CS9K

No-one-no1 said:


> This is extremely interesting! Hope it helps the bios guys figure out how to raise the limits on other cards! And figure out how to trim the voltages! Having two different voltages, 1175 and 1200mV should make it soooo much easier to find the right things to mod.


hellm, et al., think the same. Hopefully they can get hold of a copy of the BIOS and the flash tool so work can begin.


----------



## MickeyPadge

chispy said:


> @MickeyPadge - No worries , i found them. It's done  !
> 
> I will post my results later , thank you for the help and guidance , appreciate it.
> 
> 
> View attachment 2486718


No worries, glad you sorted everything out. Make sure to check that all the screws on the block are tightened; I found a few wiggling loose when I redid my loop. Look forward to seeing your results. 

I've been running 3DMark and testing since I got the card, didn't even realise I had a few scores in the hall of fame! lol


----------



## jimpsar

Hello,
Does anyone know if the BIOS switch on the Merc 6900 XT Black should be on the right side or the left, as you see the card fitted? Does it matter? Thank you.


----------



## Nighthog

I found my Liquid Devil has the BIOSes flipped around.

*OC*: *301W* [346]
*Unleash*: *289W* [332]

It came in the OC BIOS position, and I thought Unleash would be the better one, but it's the OC side on my card. Benchmarks confirmed it too: OC mode is better.
Is it supposed to be this way? I was under the impression Unleash mode would be the better one.

At least I have the Ryzen 5 Pro 4650G top score. HA!








I scored 16 179 in Time Spy
AMD Ryzen 5 PRO 4650G, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


There is some bug with the drivers causing erratic memory clocks though. (21.3.1 & 21.3.2)


----------



## weleh

Usually cards come with the best bios position out of the box.

At least for MSI and Sapphire it's that way.


----------



## coelacanth

jimpsar said:


> Hello,
> Anyone knows if the bios switch for Merc 6900 xt black should be on the right side or left as you see the card fitted? does it matter? Thank you.


Mine came with the switch to the right.

I believe the switch to the right (closer to the PCIE power connectors) is balanced mode (default), switch to the left is Rage Mode.


----------



## jimpsar

coelacanth said:


> Mine came with the switch to the right.
> 
> I believe the switch to the right (closer to the PCIE power connectors) is balanced mode (default), switch to the left is Rage Mode.


Thank you for the reply. Mine is to the right too, so we keep it that way, I believe. I have pretty good results using MPT.









I scored 20 521 in Time Spy
Intel Core i9-10850K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
www.3dmark.com

I scored 26 034 in Fire Strike Extreme
Intel Core i9-10850K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
www.3dmark.com

I scored 38 813 in Fire Strike
Intel Core i9-10850K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
www.3dmark.com

I scored 11 504 in Port Royal
Intel Core i9-10850K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
www.3dmark.com

I scored 9 655 in Time Spy Extreme
Intel Core i9-10850K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
www.3dmark.com


----------



## OrionBG

skline00 said:


> OrionBG,
> I originally was going with Alphacool for the waterblock on my RX6800 but after so long of waiting I ordered the EK block and backplate from Performance PC.
> Very satisfied.


Unfortunately, EK still has not released the standalone water block for the Red Devil...


----------



## OrionBG

I can now confirm that the Red Devil and Liquid Devil PCBs are identical. I compared photos of both and even the PCB version is the same "LF P28G V1.0"
At this point, I'm pretty sure that the Liquid Devil BIOS can be flashed on a Red Devil card.


----------



## jfrob75

I recently updated to the latest drivers, 21.3.2, and found about a 4% improvement in my synthetic benchmark results.

Firestrike results

Timespy results

My reference 6900XT is probably average silicon based on what others are able to OC. It is water cooled with an EK block.

This is the first time I have been able to run Time Spy with the GPU max frequency set to 2755MHz. Usually I've only been successful with the max frequency set to 2725MHz.


----------



## weleh

22000 graphics score on TS certainly isn't average.

My 6900XT Toxic does 21.9K on TS; however, on Fire Strike I'm currently sitting at 66K graphics, which is much higher than yours.
Gotta give the new drivers a try. I'm still on the WHQL ones.


----------



## jfrob75

weleh said:


> 22000 graphics score on TS certainly isn't average.
> 
> My 6900XT Toxic does 21.9K on TS; however, on Fire Strike I'm currently sitting at 66K graphics, which is much higher than yours.
> Gotta give the new drivers a try. I'm still on the WHQL ones.


Well, average for being water cooled, LoL. Your Fire Strike graphics score is about 10% higher than mine, and higher than anyone has posted on 3DMark for a 6900 XT.


----------



## weleh

22000 WC'ed or not is not average by any means. It's top tier.

Also my FS run









I scored 44 359 in Fire Strike (AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT, 64-bit Windows 10; run posted on www.3dmark.com)


----------



## weleh

Have to rebench this weekend because my Unify-X came back from RMA, so time to yeet some RAM and CPU settings to get a higher overall score.


----------



## CS9K

OrionBG said:


> I can now confirm that the Red Devil and Liquid Devil PCBs are identical. I compared photos of both and even the PCB version is the same "LF P28G V1.0"
> At this point, I'm pretty sure that the Liquid Devil BIOS can be flashed on a Red Devil card.


I believe the folks over on Igor's Lab have tried this already, but they did not get anywhere with it. 

One of the members has a Liquid Devil, so they and hellm are working on a way to get us control.


----------



## nyk20z3

I feel like we benchmark more than we actually game here lol


----------



## Nighthog

nyk20z3 said:


> I feel like we benchmark more than we actually game here lol


I haven't gamed more than a couple of hours on my 6900XT Liquid Devil yet...
I've been busy with other stuff since I bought it. (I bought it because it was in stock, not because I needed it.)
No cards have been in stock since I bought it either, and prices have since increased even more. So it was get it now or never see it in stock again.


----------



## Nighthog

Testing that MPT, for more power.

Efficiency seems to go out the window at ~350 watts and beyond. I was hopeful it would go a bit further, but it is what it is.

Any tips and tricks from people who have used this software [MPT] from Igor's Lab before? First time using it, and I'm wondering what settings are worth playing with other than the Board Power Limit.

Set 350W in MPT plus the +15% slider for a 402.5W maximum as a start. That seems to be about what my water loop can handle before the hotspot goes above 100C under a continuous load.
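The arithmetic behind that figure: the driver's percentage slider is applied on top of whatever board power limit MPT writes. A quick sanity check (plain arithmetic only, nothing tied to MPT's actual internals):

```python
# Effective board power ceiling when an MPT power limit is combined
# with the Wattman/Adrenalin percentage slider.

def effective_power_limit(mpt_board_power_w: float, slider_percent: float) -> float:
    """Maximum sustained board power in watts, rounded to 3 decimals."""
    return round(mpt_board_power_w * (1 + slider_percent / 100), 3)

# 350 W in MPT with the +15% slider, as in the post above:
print(effective_power_limit(350, 15))  # 402.5
```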


----------



## HeLeX63

Nighthog said:


> Testing that MPT, for more power.
> 
> Seems efficiency is just out the window ~350watts and more. Was hopeful it would go a bit further but is what it is.
> 
> Any tips and tricks from people used this software before [MPT] from Igorslab? First time using it and wondering what settings are worth playing with other than the Board Power Limit.
> 
> Set 350W and +15% for 402.500W maximum for a start. Seems about what my waterloop can handle before Hotspot goes above 100C with a continuous load.


Your hotspot goes close to 100C? Mine doesn't go above 60C with nearly 400 watts.


----------



## Nighthog

HeLeX63 said:


> Your hotspot goes close to 100C ? Mine doesn't go above 60C with nearly 400 Watts.


Overall the card runs really hot, it has to be said. I might need to dismantle it and fit different thermal pads and fresh TIM to improve the results.
My RX VEGA 64 doesn't get as hot as the 6900XT at a similar, more reasonable wattage, but that one has even worse hotspot issues than the 6900XT if I push it even a little extra.

For example, with a 400W continuous load my RX 6900XT currently sits at 75C core, 78C memory, 100C hotspot.
Water temperature is 41C and ambient is 28-29C, as it heats up the room considerably at this kind of wattage. (Both water and ambient can drop 10C if I stop all workloads.)

Any drop in ambient temperature helps, and there is room to improve radiator capacity, which should help too. The only problem is my case is maxed out, so I would need external radiators on the floor or something.


----------



## ZealotKi11er

Nighthog said:


> Overall the card runs really hot to be said. Might need to dismantle and put other thermalpads and new TIM to improve the results.
> My RX VEGA 64 doesn't get as hot as the 6900XT at similar lower reasonable wattage but that one has even worse Hotspot issues than the 6900XT if I would push it even a little extra.
> 
> For example with 400W continuous load my RX 6900XT has 75C Core, 78C MEM, 100C Hotspot at the moment.
> Water temperature 41C and ambient 28-29C as it heats up the room considerably at this kind of wattage. (both water & ambient can drop 10C if I stop all workloads)
> 
> Any ambient temperature drop helps, and there is room to improve the radiator capacity, that should help. Only problem my case is maximized and would need to have external radiators on the floor or something.


100C hotspot is fine if the core is at 75C. You definitely need more rad space and an improved mount.


----------



## weleh

100C hotspot is super hot for water, though.

My AIO Toxic 6900XT hits around 70C hotspot at 390W (at max fan speed).

Even my 6800XT Nitro+ was better than that on air, around 90C hotspot at 410W (at max fan speed).


----------



## HeLeX63

Nighthog said:


> Overall the card runs really hot to be said. Might need to dismantle and put other thermalpads and new TIM to improve the results.
> My RX VEGA 64 doesn't get as hot as the 6900XT at similar lower reasonable wattage but that one has even worse Hotspot issues than the 6900XT if I would push it even a little extra.
> 
> For example with 400W continuous load my RX 6900XT has 75C Core, 78C MEM, 100C Hotspot at the moment.
> Water temperature 41C and ambient 28-29C as it heats up the room considerably at this kind of wattage. (both water & ambient can drop 10C if I stop all workloads)
> 
> Any ambient temperature drop helps, and there is room to improve the radiator capacity, that should help. Only problem my case is maximized and would need to have external radiators on the floor or something.


I re-mounted mine after a few months of usage. I was seeing hotspot temps of 65-68C with GPU temps up to 45C, so slightly over a 20C difference. With new paste and a remount (quite a lot more paste than before), I see a GPU-to-hotspot difference of 15 to 17C. So at 45C, the hotspot will hover around 62C. A small reduction there. This is with 480x60 and 560x30 radiators.


----------



## weleh

Since the die is huge, it's normal to have around a 20C delta between core and hotspot.
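As a rough rule of thumb, the edge-to-hotspot delta being discussed can be bucketed like this; the thresholds are only the ballpark figures quoted in this thread, not any AMD specification:

```python
# Rough classifier for the core-to-hotspot delta on a water-cooled Navi 21.
# Thresholds are the ballpark figures from this thread, not an AMD spec.

def hotspot_delta_verdict(core_c: float, hotspot_c: float) -> str:
    delta = hotspot_c - core_c
    if delta <= 20:
        return "typical"      # ~15-20C is commonly reported on water
    if delta <= 25:
        return "borderline"   # worth re-checking mount pressure and paste
    return "check mount"      # likely uneven contact or flow trouble

print(hotspot_delta_verdict(45, 62))   # typical
print(hotspot_delta_verdict(75, 100))  # borderline
```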


----------



## Oversemper

Just got the Liquid Devil version of the 6900XT. Nice looking, and overall not a bad build from the factory, though there are no thermal pads under the backplate and my 1.5mm ones did not suit at all. It needs a combination of ~1mm and ~2-3mm. That will be a future job, along with a Conductonaut repaste, which did a great job on my Radeon VII.


First test with the second, more aggressive BIOS, everything at default, no manual overclock/underclock except for +15% PL (default voltage of 1.175V):
3DMark Fire Strike Ultra torture test.
Average frequency 2426MHz, average core and junction temps 54C and 77C, max core temp reached 83C (usually stays around 80C), average power consumption of 346W. Seems alright for a factory-built card in one loop with the CPU (3800X, PBO on) and a single 360mm radiator.

AMD logging from the torture test:


Spoiler: AMD logging (!long!)

[Per-sample log omitted: GPU utilization 97-99%, core clock ~2404-2494MHz, memory clock 1988-1992MHz, core temp 49-57C, hotspot 71-84C, board power ~342-351W, fan 0 RPM, plus VRAM/CPU/RAM utilization figures.]


----------



## Nighthog

HeLeX63 said:


> I re-mounted mine after a few months of usage. Saw hotspot temps 65-68C with GPU temps up to 45C. So slightly over 20C difference. With new paste and remount (quite alot more paste than before), I see GPU and hotspot difference of 15 to 17C. So at 45C, hotspot will hover around 62C. A small little reduction there. This is with a 480x60 and 560x30 radiators.


I have less than half your radiator space in use. Had to remove several smaller 120mm ones because I wanted to be able to use both my cards.

I tested the screw tightness... and damn, were they loose! You could turn them with your fingers if you could get a nail into them. I could turn them much tighter with a simple bit head, as space was too tight for anything longer.
They had that damn warranty sticker on one of them, and that screw was just about the loosest of all.
That said, I didn't see an immediate temperature difference after tightening. Mostly the VRM got a little better, but core and hotspot didn't improve by more than a couple of degrees.

A tip to the other Liquid Devil owners: tighten those screws down. They are horribly loose from the factory.


----------



## Oversemper

Nighthog said:


> I tested to see the screw tightness... and damn were they loose! you could turn them with your fingers if you could get a nail into them. I could turn and turn them much tighter with a simple bit head as space was tight for anything longer.
> They had that damn warranty sticker on one of them and that screw was almost the most not tightened at all.
> Though I didn't see a immediate temperature difference after the tightening. Mostly the VRM got a little better but core & hotspot didn't improve more than a couple degrees.
> 
> A tip to the other Liquid Devil owner, tighten those screws down. They are horribly loose from factory.


Well, I did not write about it; for me it is kind of standard to tighten the screws on a new card. As for the sticker, I used a hairdryer to heat it enough to take it off without tearing, then tightened the screws well. Every screw turned from a quarter to half of a full revolution, so it wasn't that bad from the factory.


----------



## Pedropc

Hello to all, and forgive my terrible English. Do you know if the 6900XT can be flashed? I have a reference 6900XT and I would like to flash it with the PowerColor Ultimate BIOS. All the best.


----------



## CS9K

Pedropc said:


> Hello to all, and forgive my terrible English. Do you know if the 6900XT can be flashed? I have a reference 6900XT and I would like to flash it with the PowerColor Ultimate BIOS. All the best.


Not yet. There is a thread over on Igor's Lab where the creators of Red Bios Editor/MorePowerTool are working to enable modification/flashing of RDNA2 cards.


----------



## Pedropc

Thank you. All the best.


----------



## weleh

AMDVBFlash 3.20 can be used to flash RDNA2; however, it's not public (some reviewers from France were sent it so they could flash a higher PL on their Ultimate cards).


----------



## newls1

any idea when we might have the new drivers out supporting AMD's version of DLSS?


----------



## weleh

No.

It's probably coming with Adrenalin 2021.


----------



## CS9K

newls1 said:


> any idea when we might have the new drivers out supporting AMD's version of DLSS?


I haven't heard any news. They stated that it would release once it was ready for all forms of RDNA2.


----------



## Nighthog

How do I verify increasing voltage doesn't cause issues?

What were the issues people had?
I set 1.200V maximum in MPT and testing it a bit right now on my Liquid Devil.

EDIT: OK, the drivers break... just erratic behaviour. They can't handle it.
The software can't deal with the voltage being higher than what the BIOS says it should be, and it runs into trouble: buffer overruns, values higher than specified, etc...
Though I can't see why better software handling wouldn't solve this, if the exceptions were handled correctly.
Basically, the mismatched values cause bugs in the drivers.


----------



## HyperC

Mine will not flash using v3.15: subsystem ID mismatch all day long. Even tried forcing it from the command prompt, nada. Wish I had the time to mod my card.


----------



## Nighthog

Seems ATIFLASH got updated with RX 6000 series support.



HyperC said:


> Mine will not flash using V3.15 subsystem id mismatch all day long even tried cmd prompt force nadda, wish i had the time to mod my card


ATIFLASH 3.15?








AMDVBFlash / ATI ATIFlash (3.31) Download

AMD AMDVBFlash is used to flash the graphics card BIOS. The version released by ATI was called ATIFlash or just WinFlash. It supports all AMD Radeon…

www.techpowerup.com

Code:


AMD has changed the behavior of AMDVBFlash in newer versions. It now requires a constant running Ring-0 kernel-mode driver. 
This is a security risk, you really only need the driver for a few minutes while saving the BIOS or flashing. 
That's why AMDVBFlash 3.15 and newer include AMDVBFlashDriverInstaller.exe, which installs/uninstalls the AMD driver with one click.


----------



## OrionBG

Guys, I'm furious!
Those people at Alphacool are joking with me! On the 6th of February, I ordered a GPU block for my Red Devil 6900XT. At the time it said availability around the end of February.
Somewhere in the middle of March, I asked them what was going on with my order and they told me the expected date was the 19th of April... Yesterday, more than 2 months after my order, I asked again and they told me the new expected date is the 14th of May...
I can't deal with those *** anymore... I told them I want a refund! I'm never going to purchase anything from those people again!

Sorry for the rant... Had to vent...


----------



## Nighthog

OrionBG said:


> Guys, I'm furious!
> Those people from Alphacool are joking with me! On the 6th of February, I ordered a GPU block for my Der Devil 6900XT. It was then saying availability around the end of February.
> Somewhere in the middle of March, I asked them what is going on with my order and they told me the expected date is the 19th of April... Yesterday, more than 2 months after my order, I asked again and they told me new expected date is 14th of May...
> I can't deal with those *** anymore... I told them I want a refund! I'm never going to purchase anything from those people!
> 
> Sorry for the rant... Had to vent it...


Yeah, don't buy stuff unless it's in stock. Their preliminary dates are junk. Stuff can get delayed endlessly, particularly now with the overall delays across all manufacturing.
That stuff gets manufactured in China, so it's especially prone to running into issues.


----------



## HeLeX63

OrionBG said:


> Guys, I'm furious!
> Those people from Alphacool are joking with me! On the 6th of February, I ordered a GPU block for my Der Devil 6900XT. It was then saying availability around the end of February.
> Somewhere in the middle of March, I asked them what is going on with my order and they told me the expected date is the 19th of April... Yesterday, more than 2 months after my order, I asked again and they told me new expected date is 14th of May...
> I can't deal with those *** anymore... I told them I want a refund! I'm never going to purchase anything from those people!
> 
> Sorry for the rant... Had to vent it...


It's a queue system; you are basically in a queue. You pay for it, and if your payment makes it within the number of batches made in that month or timeframe, you get allocated one. It sucks, I know, but I would just hold on and wait. A very good waterblock indeed.


----------



## Nighthog

Well, I will upgrade my loop with an 'external' XT45 1080 SuperNova radiator.

I wanted to order the 60mm-thick one, but there was some trouble with payment systems, so I had to order from another site which only sold the XT45 variant.
A lot of the bits and pieces I needed could not be ordered either, as the other sites didn't sell the various accessories, but I should manage with the minimum I have lying around to get started and keep temperatures in check.
It was not in stock, but I should not need to wait too long for it to arrive. (They are waiting for a shipment to arrive.)


----------



## Oversemper

OrionBG said:


> Guys, I'm furious!
> Those people from Alphacool are joking with me! On the 6th of February, I ordered a GPU block for my Der Devil 6900XT. It was then saying availability around the end of February.
> Somewhere in the middle of March, I asked them what is going on with my order and they told me the expected date is the 19th of April... Yesterday, more than 2 months after my order, I asked again and they told me new expected date is 14th of May...
> I can't deal with those *** anymore... I told them I want a refund! I'm never going to purchase anything from those people!
> 
> Sorry for the rant... Had to vent it...


That's why I waited for the Liquid Devil to appear in a store. And they don't have serious availability problems, because nobody needs a $2k+ card that is slower than a 5700XT at mining, and even fewer people have a custom water loop or are bold enough to build one for a 6900XT. The 3x8-pin Liquid Devil is one of the best offers for the Radeon 6900XT; in Russia it costs LESS (about $2100) than basic fan versions.


----------



## weleh

A friend bought a Red Devil Ultimate.
The stock frequency in the driver is 2600MHz, so clearly higher than anything we've seen here.

GPU-Z still doesn't support it, so we can't extract the VBIOS for experimenting.


----------



## Oversemper

weleh said:


> Friend bought a Red Devil Ultimate.
> The stock frequency on the driver is 2600 Mhz so clearly higher than anything we've seen here.
> 
> GPU-Z still doesn't support it so we can't extract the Vbios for experimenting.


Maybe it's the same dies, but the manufacturer spent more time determining a higher stable clock/voltage combo. The question is: can they go proportionally higher from the base clock?


----------



## kairi_zeroblade

Oversemper said:


> May be its the same dies but only the manufacture spent more time to determine a higher stable clock/voltage combo. The question is - can they go proportionally higher from the base clock.











Powercolor RX 6900 XT VBIOS

16 GB GDDR6, 500 MHz GPU, 914 MHz Memory

www.techpowerup.com

There seem to be plenty in the TPU database (recent uploads for the Liquid Devil Ultimate).


----------



## weleh

Well, if we consider the 25mV increase, it doesn't look like there's any binning going on at all.
Pretty much any card will benefit from a vcore increase.


----------



## Nighthog

Anyone adventurous enough to try the BIOS flashing tool update, to see if it works on the RX 6000 series?


----------



## weleh

I'm not at home but I'll try via remote desktop since I have a dual bios card.


----------



## weleh

Tried to flash and the -f option doesn't work.

Any tips?


----------



## DarknightOCR

The -f option does not work in 3.15 or in the Windows flasher, only in the Linux flasher.
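For anyone following along, the typical AMDVBFlash sequence on Linux looks roughly like this. Treat it as a sketch, not a recipe: the adapter index and .rom filenames are placeholders, and flashing a BIOS from a different card can brick yours.

```shell
# Hypothetical amdvbflash session on Linux (run as root).
# "0" is the adapter index and the .rom names are placeholders.

./amdvbflash -i                    # list detected adapters and their indices
./amdvbflash -s 0 backup.rom       # ALWAYS save the current VBIOS first
./amdvbflash -f -p 0 target.rom    # program; -f forces past the subsystem ID check
```

If the card has a dual-BIOS switch, flash the secondary position so the stock BIOS remains a fallback.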


----------



## newls1

So I tried the new 21.4.1 driver and lost my best OCs for some reason. Can't OC nearly as well as before. Seems MPT and the PPTs don't play well with each other anymore; went back to 21.3.2.


----------



## CS9K

newls1 said:


> so tried to use the new 21.4.1 driver and lost my best OC's for some reason. Cant OC nearly as good as before. Seems MPT and the PPT's dont play well with eash other anymore, went back to 21-3-2


This happened to me with 21.2.1, coming from both 21.1.1 and 20.12.3. It was fixed in 21.3.1, strangely.

I was waiting to see if I saw a comment like this. Thank you for testing it out. I'll stay put on 21.3.2.


----------



## Blameless

Lost a significant fraction of my VRAM OC with 21.4.1, while 21.3.1 before that introduced some memory clocking oddities as well as a hidden performance limiter.

MPT/SPPT seem to work normally, but the memory overclock range that still yields performance increases went from maxing out the slider to only 2050-2070MHz... anything past that results in performance degradation.

Will probably still keep the 21.4.1 drivers, as there are some fixes that are relevant to me, but the overclocking issue is a bit annoying.


----------



## 6u4rdi4n

21.4.1 gave me a new personal best in Time Spy: 19,243.

That's 137 points higher combined and 182 points higher graphics score than my old PB.


----------



## newls1

I have spent all day on these, since the minute the new drivers posted on techpowerup.com. Here is my update after the last 7 hours with them, doing multiple installs and removals (DDU used every time):

THEY SUCK

Something is not happy with these drivers and overclocking. I've proved this on my 2 machines, one with a 6900XT and the other with a 5700XT. Extremely limited OCing, especially on the 6900XT. On the prior driver (21.3.1) my OC for TS was 2815MHz (MPT tweaked) and I broke many records with that. Using the 21.4.1 driver, 2620MHz (MPT tweaked) is the fastest at which I could pass that benchmark. ***! This is after hours and hours of tweaking with MPT, DDU cleaning, and reinstalling. Even tried disabling resizable BAR, etc. 21.4.1 absolutely breaks OCing.

To sum this up for those interested, I'm back on 21.3.1 and all is perfect again; I actually increased my OC and have a new TS best @ 2820MHz (MPT tweaked). Hope this saves someone else some time.


----------



## weleh

So I managed to flash the Ultimate BIOS on my Toxic in Linux.
It works, but there are some issues...

First of all, the driver still knows you don't have an XTXH stepping, so if you go past 3000 core or 2150 memory you get a driver reset.

The second issue is, Vcore stays at 1018mV...

In essence, there's zero reason for any of you to attempt to flash this BIOS on your card. It doesn't work.

Another dangerous thing: if you don't have dual BIOS you will brick the card, and you'll need to flash it in Linux via another GPU, because you will not get VGA output outside of Windows; not even the BIOS screen will show up.


----------



## ZealotKi11er

weleh said:


> So I managed to flash the Ultimate BIOS on my Toxic in Linux.
> It works, but there are some issues...
> 
> First of all, the driver still knows you don't have an XTXH stepping, so if you go past 3000 core or 2150 memory you get a driver reset.
> 
> The second issue is, Vcore stays at 1018mV...
> 
> In essence, there's zero reason for any of you to attempt to flash this BIOS on your card. It doesn't work.
> 
> Another dangerous thing: if you don't have dual BIOS you will brick the card, and you'll need to flash it in Linux via another GPU, because you will not get VGA output outside of Windows; not even the BIOS screen will show up.


Yes XTXH has different device ID.


----------



## HyperC

So there is no hope unless Linux or wait for another amdflash version?


----------



## kratosatlante

Blameless said:


> Lost a significant fraction of my VRAM OC with 21.4.1 while 21.3.1 before that introduced some memory clocking oddities as well as a hidden performance limiter in the
> 
> MPT/SPPT seem to work normally, but performance increases with memory overclock went from being able to max out the slider to only 2050-2070MHz...anything past that results in a performance degradation.
> 
> Will probably still keep the 2.4.1 drivers as there are some fixes that are relevant to me, but the overclocking issue is a bit annoying.


In my case, on driver 21.3.1: 2600-2700 core in Unigine Superposition, VRAM at 2060.

On 21.4.1 I lose 10 MHz on the core, 2590-2690 (2700 runs but artifacts), but VRAM increases to 2086 MHz and I get 12k more points.


----------



## weleh

HyperC said:


> So there is no hope unless Linux or wait for another amdflash version?


Yes, you have to flash via Linux or wait for AMDVBFlash 3.2 on Windows. However, as I said, there are driver-level checks and the card won't go past 1018 mV, so there's no point flashing.


----------



## Medusa666

Posting my question here too, as I started out in the wrong thread lol.

Is a junction temperature of around 112-115C during heavy gaming such as Red Dead Redemption 2, together with a core of 90-95C, OK for prolonged use, i.e. 2-4 hours a day? Will the card survive for 3-4 years; can the silicon handle it?

Sorry if it is a stupid question, but I don't know.


----------



## L!ME

I have a 6900 XT Liquid Devil Ultimate and the binning is really good. Game-stable clocks are 2780 min and 2880 max in Wattman; effective clocks are around 2830 to 2850. Only in Time Spy can the card not compete with my normal 6900 XT; it can in every other benchmark and game.


----------



## weleh

Worthless without benchmarks for us to compare.
Also you need to look at effective clocks not what you set on the driver.


----------



## L!ME

Is this better? It was one of my first benches with the Ultimate.


----------



## weleh

Yea, my 6900XT Toxic does better than your card with less effective clocks (2700 range).
Are you on stock PL?


----------



## L!ME

Your card did better with only 350W?
My card's limit is 575W at the moment, but it's still under testing.
I don't have a picture from the Ultimate with monitoring yet,
but I have some from my other, non-Ultimate 6900 XT.
From my Time Spy record (23338 graphics score) I have no monitoring, because I lose points when it's activated.


----------



## weleh

23300 is insane, yeah, but I'm a bit skeptical.
Looking at the top results, the first card on there has a lower than 2750 average, which means effectively it's even lower...
Sounds like someone is playing with driver hacks.

Your card boosting to almost 2900 in TS is just crazy good.


----------



## L!ME

I think not; the card is CPU limited at those levels. I need to run my memory maxed out at C12 to reach that score.
A fully tuned Intel is the better partner for Time Spy.


----------



## weleh

L!ME said:


> I think not, the card is cpu limited at that levels. I need to run my memory maxed out at c12 to reach that score.
> A Intel fully tuned is the better partner for timespy.


I see.
Thanks for pointing that out.
I'll give it a go.


Which drivers do you reckon give the best results in TS?


----------



## L!ME

I've had a lot of cards here, and nearly every card has its favorite driver. But 21.4.1 gave me the worst results.
The two best are 21.3.1 and 21.3.2, but only in Time Spy. For Fire Strike it was an older one; I'd have to look through my results.


----------



## kazukun

*GPU World Record Set by Der8auer on PowerColor RX 6900 XT Liquid Devil Ultimate - 3.225 GHz*

Overclocker extraordinaire Roman "der8auer" Hartung has achieved a new world record for GPU clock speed with the help of exotic cooling and PowerColor's RX 6900 XT Liquid Devil Ultimate. The PowerColor Red Devil Ultimate features the latest and greatest bin of any RX 6900 XT chip, featuring...




www.techpowerup.com


----------



## Bart

L1ME owns the charts now, holy crap dude, LOL! My best score for that combo is like 38th. 

For those having issues with the new drivers, ALWAYS do a 'factory reset' within the AMD driver AFTER you update. Then reload your MPT settings, and your OCs will probably be back to normal. Sometimes things get VERY flaky with those driver updates, and factory resets actually help in most cases. Mine were borked too, factory reset cured it.


----------



## weleh

Yea I just got my best TS GS with newest drivers.

22 128


----------



## newls1

As per my post above... the 21.4.1 drivers SUCK... I have 7 hours with them, tweaking and playing with MPT, and my OC just tanks with them.


----------



## CS9K

Bart said:


> L1ME owns the charts now, holy crap dude, LOL! My best score for that combo is like 38th.
> 
> For those having issues with the new drivers, ALWAYS do a 'factory reset' within the AMD driver AFTER you update. Then reload your MPT settings, and your OCs will probably be back to normal. Sometimes things get VERY flaky with those driver updates, and factory resets actually help in most cases. Mine were borked too, factory reset cured it.


Interesting. I've been doing the "delete sppt", restart, download new driver, pull network cable, safe mode, ddu, restart normal, install new, restart, MPT, restart. 

My OC, much like newls1's, is down 40MHz core and 20MHz memory on 21.4.1 from 21.3.2. I'll give the factory reset a try in the middle of my routine and see what happens.


----------



## weleh

Contrary to what's being reported here I gained a couple of Mhz on gaming and on Time Spy hence my new record.


----------



## ZealotKi11er

weleh said:


> Contrary to what's being reported here I gained a couple of Mhz on gaming and on Time Spy hence my new record.


There is no stability when it comes to trying to achieve maximum clk speed. Each card is very different. I had 6900 XT that could do 2750MHz in PR but only 2600 in TS. I then got another one which did 2725MHz in TS but could not run 2750 in PR.


----------



## HeLeX63

I am a little paranoid, but does anyone else have the issue where the Memory and Bandwidth values are 0 and Unknown in GPU-Z with the latest Adrenalin 21.4.1, at default, stock, non-OC values?

Opening the new Radeon software from the desktop also changes my BIOS code from the usual A0 to d3 for some reason, only when opening the Radeon software. Weird...


----------



## weleh

ZealotKi11er said:


> There is no stability when it comes to trying to achieve maximum clk speed. Each card is very different. I had 6900 XT that could do 2750MHz in PR but only 2600 in TS. I then got another one which did 2725MHz in TS but could not run 2750 in PR.


PR will always do higher "driver clocks" than any other benchmark due to the RT bottleneck. So you can bench PR at 2800 MHz on the driver, but if you look at effective clocks, they will be a ****ton lower, like 200 MHz below "driver clocks"...

TS and FS will always do higher effective clocks but lower "driver clocks" than PR.

Fact is, 21.4.1 seems to be the best performance-wise.


----------



## weleh

HeLeX63 said:


> I am a little paranoid, but does anyone else have the same issue where Memory and Bandwidth values are 0 and Unknown in GPU-z using the latest Adrenalin 21.4.1 when using default and stock non OC values ?
> 
> Opening the new Radeon software from desktop also changes my bios code from the usual A0 to d3 for some reason, only when opening the Radeon software. Wierd...


Yes, seems like GPU-Z needs to be updated.


----------



## cfranko

How is everyone in this thread running like 2800 core frequency? I have a 6900 XT Phantom which sometimes crashes at 2650 MHz on Timespy. Is it because I am air cooling?


----------



## Nizzen

Error 404


----------



## weleh

Lottery.

Air has nothing to do with it. My 6800XT on air did 2750 Mhz.


----------



## Nizzen

cfranko said:


> How is everyone in this thread running like 2800 core frequency? I have a 6900 XT Phantom which sometimes crashes at 2650 MHz on Timespy. Is it because I am air cooling?


Try COLD water


----------



## DarknightOCR

Nizzen said:


> 6900xt Unlocked bios:
> 
> Liquid_Devil_bios_1226.rar
> 
> drive.google.com


bios for RX 5700XT .. Navi 10


----------



## weleh

The Ultimate BIOS has been around forever... It's useless, by the way.
Already tested it and explained why a few posts ago.


----------



## Nizzen

DarknightOCR said:


> bios for RX 5700XT .. Navi 10


Woops sorry


----------



## cfranko

Also should I update to 21.4.1 as a regular gamer with slight overclocks?


----------



## CS9K

CS9K said:


> Interesting. I've been doing the "delete sppt", restart, download new driver, pull network cable, safe mode, ddu, restart normal, install new, restart, MPT, restart.
> 
> My OC, much like newls1's, is down 40MHz core and 20MHz memory on 21.4.1 from 21.3.2. I'll give the factory reset a try in the middle of my routine and see what happens.


Nope, no change after doing a factory reset before the new driver installation. Still stuck at 2715/2130, down from 2750/2150.


----------



## HeLeX63

cfranko said:


> How is everyone in this thread running like 2800 core frequency? I have a 6900 XT Phantom which sometimes crashes at 2650 MHz on Timespy. Is it because I am air cooling?


I crash at 2650MHz on water. With 35C GPU and 50C junction...


----------



## HeLeX63

For anyone who owns an X570 Gigabyte AORUS Master: the new 21.4.1 seems to cause a d3 BIOS error code when launching the Radeon Software.

Just in case anyone is having the same issues. Previous drivers did not do this.


----------



## ZealotKi11er

I even called out the guy who broke the record, telling him 2880 MHz in TS is a binned GPU even for XTXH. PowerColor probably hand-picked it just for him.


----------



## Pedros

New 6900XT Red Devil owner. Came from a 3080.










2488/2600 @ 1105mV

Didn't play with MPT yet...

Really liking this card. Plus, is it just me, or does gameplay seem smoother than on nVidia?


----------



## MickeyPadge

HeLeX63 said:


> Anyone who owns an X570 Gigabyte AORUS Master, the new 21.4.1 seems to causes a d3 BIOS error code when launching the Radeon Software.
> 
> Just incase anyone is having the same issues. Previous drivers did not do this.


I have that board, not had any errors, care to elaborate?


----------



## HeLeX63

MickeyPadge said:


> I have that board, not had any errors, care to elaborate?


Look, I'm not too sure. I've had the d3 BIOS code since the new driver installation, triggered only when launching the software in Windows. Went back to the previous driver and the code disappeared. Interesting that it's not occurring for you. A weird anomaly, it seems.


----------



## HyperC

LOL my board says D3 not sure about any issues... Don't worry AMD has us covered  the next 5 drivers or 20 bios


----------



## HeLeX63

HyperC said:


> LOL my board says D3 not sure about any issues


Is this when you launch Radeon software ?


----------



## HyperC

I have no idea, my PC is in a closet and the desk is 10 feet away, sooooo


----------



## knightriot

Pedros said:


> New 6900XT Red Devil owner. Came from a 3080.
> 
> View attachment 2487619
> 
> 
> 2488/2600 @ 1105mV
> 
> Didn't play with MPT yet...
> 
> Really liking this card. Plus, is it just me, or does gameplay seem smoother than on nVidia?


Maybe SAM benefits and lower latency give you that.


----------



## Nighthog

HeLeX63 said:


> Look I'm not too sure. I have had the d3 bios code since the new driver installation and only triggered when launching the software in windows. Went to previous driver and the error disappeared. Interesting how its not occurring to you. A weird anomaly it seems.


It says d3 when something reads certain CPU registers/data. Other applications do it too; it's not an error. It's because part of the CPU is being accessed for the new telemetry data in the driver.
I think the message is there because the part being accessed "isn't supported".


----------



## Nighthog

On another note, my 6900XT Liquid Devil can't clock higher than ~2650-2660MHz on the core (for ~2600MHz sustained in benchmarks) before crashes become a thing.

Voltage starved, or really just that bad.


----------



## Pedros

One question not directly related to the physical card, but any ideas on how to get invitation codes for the Devil Club? I really don't know how to get one, since my card did not come with anything relating to an invitation code :x

Thank you!


----------



## ObviousCough

XTXH is reason enough for me to heavily consider nVidia next round. I shelled out $1400 for a MERC319 and I won't even be able to flash more voltage to max out the limited 3GHz clocks it's locked to? REEEE!

My card can do 2700MHz with the stock air cooler no problem, but with another 25mV I could be doing way better once it gets on water :|


----------



## Nighthog

AMD has been too heavy-handed with software limits on OC for a while now... So much for their "unlocked", OC-friendly hardware.


----------



## CS9K

ObviousCough said:


> XTXH is reason enough for me to heavily consider nVidia next round. I shelled out $1400 for a MERC319 and i won't even be able to flash more voltage to max out the limited 3ghz clocks its locked to? REEEE!
> 
> My card can do 2700MHz with the stock air cooler no problem, but with another 25mv i could be doing way better when it gets on water :|


Both brands are voltage limited, and there was never a guarantee that we'd ever get that extra 25mV on vanilla RX 6900 XTs. Saying this is a bit knee-jerk, no?


----------



## SuprUsrStan

My 6900XT Liquid Devil can only do 2600-2700MHz on the core and 2150 on memory. It's capped at 1.175V, and the feelsbadman.gif part is that the card doesn't even push the power limits. The card will normally sit at around 300W even with the 15% offset on the power limit. If only I could push some more voltage on the core, I could push the card much further than right now. Card temps top out in the mid 40s, with junction temps at most at 60C.


----------



## ZealotKi11er

ObviousCough said:


> XTXH is reason enough for me to heavily consider nVidia next round. I shelled out $1400 for a MERC319 and i won't even be able to flash more voltage to max out the limited 3ghz clocks its locked to? REEEE!
> 
> My card can do 2700MHz with the stock air cooler no problem, but with another 25mv i could be doing way better when it gets on water :|


Lol no. 25mV on top of 1.175V will get you maybe 10-20MHz, which is nothing.


----------



## ObviousCough

it is for me. 😢


----------



## weleh

Yea you go Nvidia so they release a Ti SKU afterwards as well...

Both brands are bad when it comes to this...


----------



## SuprUsrStan

ZealotKi11er said:


> Lol no. 25mV on top to 1.175mV will get you maybe 10-20MHz which is nothing.


I beg to differ. 1.200V vs 1.175V is about a 2% voltage bump. A 2% increase from 2650MHz is ~2700MHz. While voltage-to-performance scaling isn't linear, the performance gains we're talking about when overclocking are all in the low single-digit percents anyway. Especially considering you're literally voltage limited and running 40C on water.

The 6900XTU with a good bin and 1.2V is pushing 2900MHz. That 25mV isn't "nothing".
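The scaling argument above can be sanity-checked with quick arithmetic. A common rule of thumb for CMOS dynamic power is P ∝ V²·f; that is a generic approximation, not an AMD-published model, so treat these numbers as rough:

```python
# Sanity check on the 25 mV argument, using the textbook P ~ V^2 * f
# approximation for dynamic power (generic CMOS rule of thumb, not AMD data).
def power_ratio(v_new, v_old, f_new=1.0, f_old=1.0):
    """Relative power under the V^2 * f approximation."""
    return (v_new / v_old) ** 2 * (f_new / f_old)


v_bump = 1.200 / 1.175 - 1                            # ~2.1% higher voltage
p_bump = power_ratio(1.200, 1.175) - 1                # ~4.3% more power at fixed clock
p_total = power_ratio(1.200, 1.175, 2700, 2650) - 1   # ~6.3% with the clock bump too

print(f"voltage +{v_bump:.1%}, power +{p_bump:.1%}, power incl. clock +{p_total:.1%}")
```

So the 2% figure describes the voltage step; the power cost is roughly double that, which is worth keeping in mind on cards already brushing their power limit.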


----------



## ZealotKi11er

SuprUsrStan said:


> I beg to differ. 1.200V vs 1.175V is about a 2% voltage bump. A 2% increase from 2650MHz is ~2700MHz. While voltage-to-performance scaling isn't linear, the performance gains we're talking about when overclocking are all in the low single-digit percents anyway. Especially considering you're literally voltage limited and running 40C on water.
> 
> The 6900XTU with a good bin and 1.2V is pushing 2900MHz. That 25mV isn't "nothing".


The good-bin part is more important. That 2900MHz 1.2V GPU most likely does 2880 at 1.175V. Hey, you can easily test scaling with voltage: see what 1.15V vs 1.175V gives.


----------



## CS9K

ZealotKi11er said:


> The good bin part is more important. That 2900MHz 1.2v GPU most likely does 2880 with 1.175v. Hey you can easily test scaling with voltage. See what 1.15v vs 1.175v gives.


This x1000. Gotta remember, @SuprUsrStan, the voltage curve goes up exponentially as you go for higher and higher clocks. 25mV makes WAY more of a difference where the curve is flatter, at lower core speeds, whereas adding 25mV at the top end? Not so much.

Your chip's "bin" is simply how far right the *entire* curve is shifted from the baseline. The further right, the better the bin.
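The two points above (diminishing MHz per extra mV near the top, and a bin as a rightward shift of the whole curve) can be illustrated with a toy model. The curve shape and every number below are invented for illustration; this is not measured RDNA2 data:

```python
# Toy V/F model: stable clock rises steeply at low voltage, then flattens,
# so the same +25 mV buys far fewer MHz near the cap. A better "bin" is
# modeled as the identical curve shifted right. All numbers are illustrative.
import math


def stable_clock_mhz(v_mv, bin_shift_mv=0.0):
    """Hypothetical stable clock vs. voltage (tanh gives the flattening top)."""
    v_eff = v_mv + bin_shift_mv - 800.0  # 800 mV: assumed bottom of the curve
    return 2000.0 + 900.0 * math.tanh(v_eff / 250.0)


gain_low = stable_clock_mhz(925.0) - stable_clock_mhz(900.0)     # +25 mV, low on curve
gain_top = stable_clock_mhz(1200.0) - stable_clock_mhz(1175.0)   # +25 mV, at the cap
print(f"+25 mV buys ~{gain_low:.0f} MHz low on the curve, ~{gain_top:.0f} MHz at the top")

# A better bin: same curve, shifted right by 50 mV, evaluated at the 1175 mV cap.
print(f"good bin at 1175 mV: ~{stable_clock_mhz(1175.0, 50.0):.0f} MHz "
      f"vs ~{stable_clock_mhz(1175.0):.0f} MHz average bin")
```

With this shape, +25 mV is worth roughly 70 MHz low on the curve but only ~15 MHz at the cap, which matches the intuition in the posts above.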


----------



## weleh

Looks like some cards were already XTXH bin quality before this whole thing started.

I mean, the L!ME dude here, or whatever his name is, has a non-XTXH card doing 2800+ in TS. That's XTXH bin quality right there.


----------



## ZealotKi11er

weleh said:


> Looks like some cards are already XTXH bin quality before this whole thing started.
> 
> I mean L1ME dude here or whatever his name is has a non XTXH card doing 2800+ on TS. That's XTXH bin quality right there.


Same with some people in the 6800 XT club who had a 2800 set clock (2750 effective) at 1.15V in TS.


----------



## HyperC

LOL all this talk is making me want to MOD my card more, maybe tomorrow is the day wish me luck


----------



## Medusa666

My reference 6900 XT is stable 24/7 in games, Time Spy, etc. with 2545MHz minimum clock and 2750MHz maximum clock, the voltage slider at 1050mV in Wattman, and +15% power limit.

Is it XTXH silicon?


----------



## jonRock1992

So I got a 6900 XT Red Devil Ultimate and it can only do 2550/2700 @ 1175mV at sane noise levels. Anything over 50% fan speed is unbearably loud with this GPU. Should I return this POS? I paid for a high ****ing bin and got reference-level quality.


----------



## weleh

Medusa666 said:


> My reference 6900 XT is stable 24/7 in games, Timespy, etc with 2545MHz as minimum clock and 2750MHz as maximum clock with the voltage slider on 1050mv in Wattman, +15% powerlimit.
> 
> Is it XTXH silicon?


Don't think so no.


----------



## weleh

jonRock1992 said:


> So I got a 6900 XT Red Devil Ultimate and it can only do 2550/2700 @1175mV with sane levels of noise. Anything over 50% fan speed is unbearably loud with this GPU. Should I return this POS? I bought this for a high ****ing bin and got reference level quality.


You need water for a reference 6900XT, let alone a highly binned die...


----------



## jonRock1992

weleh said:


> You need water for 6900XT references let alone highly binned die...


So you think everything is working as it should? I'm just pissed because I spent $2300 on this thing expecting game-stable 2700 to 2800 MHz, and I'm getting significantly less. What's the point of this GPU if it performs identically to reference?


----------



## weleh

It depends what you consider bearable...

Some people have no issues with fans at 100%; others can't deal with it.
6900XTs are very hard cards to cool effectively, and it's under water that they shine.

Powercolor should never have released an air-cooled Ultimate card... The Liquid Devil is enough.

There's a reason I went Toxic: otherwise I would have had to put the card in a loop, and I already have 1 AIO.


----------



## jonRock1992

Well damn. My previous card was a Founders Edition 1080 Ti that I put under water with a G12 and X62. I really didn't want to mess with that again, so I was looking for a highly binned air-cooled chip. Thought this would be the one to get, but I'm highly disappointed. There's no way anybody is gaming with the Red Devil over 60% fan speed; it's so freaking loud. Like, I'm concerned it's defective or something. It's by far the loudest GPU I have ever owned or heard. I wonder if somebody would be nice enough to post a video of their Red Devil at 100% fan speed so I can compare the noise with mine?


----------



## weleh

At completely stock operation with the Ultimate BIOS, what kind of performance are you getting? Clock speed, power consumption, and temps?


----------



## jonRock1992

Ok. I just did a completely stock Time Spy bench run. I got 20348 for the graphics score. Core clock bounced between 2367MHz and 2500MHz. Max temp was 66C; max junction temp was 86C. Power draw was 300W. Also, I just tried a Time Spy stress test with 2550MHz/2700MHz @ 1175mV, 2150MHz fast RAM timings, and +15% power limit, and it failed. Thought it was stable before, but it's not. This GPU doesn't even undervolt well. Thinking about returning it. There's no way I should have to pay a premium for reference performance.


----------



## Blameless

Medusa666 said:


> My reference 6900 XT is stable 24/7 in games, Timespy, etc with 2545MHz as minimum clock and 2750MHz as maximum clock with the voltage slider on 1050mv in Wattman, +15% powerlimit.
> 
> Is it XTXH silicone?


The slider doesn't change maximum voltage at all, so whatever clocks it's actually hitting in Time Spy (probably around 2.6GHz based on your sliders) is using the full 1175mV. This is entirely typical.


----------



## weleh

Those temps aren't looking good tbh...

Also, I recommend everyone test their VRAM overclock in Superposition or something similar.
Maxing out the slider for me at 8K Optimized gives a 10 FPS drop... The sweet spot for me is 16.440 Gbps, which is a +80 offset with fast timings.
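For anyone converting between the memory clock shown in tools (MHz) and the effective data rate quoted above (Gbps): GDDR6 moves 8 bits per pin per memory clock, and the reference RX 6900 XT has a 256-bit bus. The 2055 MHz figure below is inferred from the 16.44 Gbps number, not something stated directly in the post:

```python
# MHz <-> Gbps <-> GB/s conversions for GDDR6 on a 256-bit bus.
# The x8 multiplier (transfers per memory clock) and 256-bit width are the
# standard figures for the reference RX 6900 XT.
def effective_gbps(mem_clock_mhz):
    """Effective per-pin data rate in Gbps from the displayed memory clock."""
    return mem_clock_mhz * 8 / 1000


def bandwidth_gbs(gbps, bus_bits=256):
    """Total memory bandwidth in GB/s."""
    return gbps * bus_bits / 8


print(effective_gbps(2055))   # 16.44 Gbps, the quoted sweet spot
print(bandwidth_gbs(16.44))   # ~526 GB/s at that data rate
```

This makes it easy to see that "maxing the slider" and "best performance" are different targets: a higher displayed clock that triggers error correction can deliver lower real throughput, which is why testing in an actual workload matters.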


----------



## jonRock1992

weleh said:


> Those temps aren't looking good tbh...
> 
> Also, I recommend everyone to test their VRAM overclock on Superposition game or something.
> Maxing out the slider for me at 8K Optimized Game gives a 10 FPS drop... The sweet spot for me is 16.440Gbps which is +80 offset with fast timings.


Was your message directed at me? If so, do you think it's defective? I have great airflow in my case and ambient temp is 66F.


----------



## weleh

Not defective at all.

It's just that the cooler is insufficient for what the card can do.
My Toxic, playing COD Warzone at 1440p max settings with 900 RPM fans for like 2 hours at 150-250W:

70C hotspot at 2700 MHz.




Now imagine if this was air cooled.


----------



## ZealotKi11er

Blameless said:


> The slider doesn't change maximum voltage at all, so whatever clocks it's actually hitting in Time Spy (probably around 2.6GHz based on your sliders) is using the full 1175mV. This is entirely typical.


The voltage slider does work, but it is linked to core clock. I have posted data in the 6800 XT thread. Each part has a V/F curve. The moment you overclock past the curve, what you are really doing is undervolting the part, since you're limited to 1.175V.

Example:
Stock is 2400 @ 1175. If you OC to 2600, it will still take 1175. If you set the slider to 1100 but still get 1175, it just means the 2600 clock required more than 75mV extra on the actual curve; you probably have to go down to 950mV on the slider before you see 1150mV applied, meaning you have already undervolted the part by 200mV.

The guys who want to "undervolt" should not touch the core clock and only move the voltage slider. For my 6800 XT, which runs 2384MHz @ 1150mV out of the box, I set the voltage to 1150 and it can maintain 2330MHz within 255W without power/temp throttling.
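The point above (raising the max-clock slider under a fixed 1175 mV cap is an implicit undervolt relative to the chip's own V/F curve) can be sketched with a hypothetical curve. The linear slope below is chosen only so the numbers land near the ~200 mV figure mentioned; it is not AMD's actual fused curve:

```python
# Illustration: under a hard 1175 mV cap, overclocking past the stock point
# means the chip runs below the voltage its own V/F curve would ask for.
# The curve is hypothetical (linear, ~1 mV per extra MHz), chosen to pass
# through the stock point of 2400 MHz @ 1175 mV.
CAP_MV = 1175.0


def curve_voltage_mv(clock_mhz):
    """Hypothetical required voltage for a given clock."""
    return 1175.0 + (clock_mhz - 2400.0) * 1.0  # slope is an assumption


for clk in (2400, 2600):
    wanted = curve_voltage_mv(clk)
    applied = min(wanted, CAP_MV)  # the driver clamps at the cap
    print(f"{clk} MHz: curve wants {wanted:.0f} mV, cap applies {applied:.0f} mV "
          f"(effective undervolt {wanted - applied:.0f} mV)")
```

Under these assumptions, setting 2600 MHz leaves the part ~200 mV short of what the extrapolated curve would request, which is why an aggressive max-clock setting can behave exactly like an undervolt.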


----------



## Blameless

ZealotKi11er said:


> The guys who want to "undervolt" should not touch the core clock and only move the voltage slider. For my 6800 XT, which runs 2384MHz @ 1150mV out of the box, I set the voltage to 1150 and it can maintain 2330MHz within 255W without power/temp throttling.


I prefer to limit the maximum voltage through MPT, then use the clock sliders to get the maximum clock speed that's stable at that voltage. Leaving the maximum voltage as-is and trying to bring the curve down with the slider almost invariably results in more voltage than is actually needed.

Prior to 21.4.1, a 1050mV peak was enough for 2475-2575 on my air-cooled 6800XT (2500-2540MHz actual, in most tests). 21.4.1 required me to throw another 50mV at it, but that's still higher clocks at lower voltage than I could manage with just the Wattman slider.

In Medusa666's case, with the clock sliders at 2545-2750MHz, it doesn't much matter where the voltage slider is, at least during heavy loads; it's still going to be running at maximum voltage all the time.


----------



## jonRock1992

So I went back to the previous driver, and I also determined that 2150 MHz with fast timings was unstable with my 6900 XT red devil ultimate, and I had to bump it down to 2130 MHz with fast timings. So I went ahead and just used the frequency range that was mentioned by @Medusa666 (2545 MHz to 2750 MHz) and I passed a Timespy stress test with 99.6% stability. Here is my Timespy result with these settings: AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## Medusa666

jonRock1992 said:


> So I went back to the previous driver, and I also determined that 2150 MHz with fast timings was unstable with my 6900 XT red devil ultimate, and I had to bump it down to 2130 MHz with fast timings. So I went ahead and just used the frequency range that was mentioned by @Medusa666 (2545 MHz to 2750 MHz) and I passed a Timespy stress test with 99.6% stability. Here is my Timespy result with these settings: AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


Another thing I have noted is that the memory can be quite restricting. 

Try clocking the core first with memory at stock settings, when you can't get core any higher then try the memory in 10-25MHz increments with standard timings.


----------



## jonRock1992

How hard is it to replace the TIM on the Red Devil? Also, would it be safe to put liquid metal between the heatsink and die? I used liquid metal on my 1080 Ti and it made a world of difference in temps. I'm pretty sure my 6900XTU is being limited by thermal performance. It's also power limited, and I can't use MPT because I have no way to extract the BIOS; no BIOS tools work with the 6900 XTXH GPUs yet, as far as I'm aware.


----------



## newls1

I've really enjoyed this thread and you all have been a huge help along the way, but today I sold my 6900XT. Although I'll miss the card, please wish me luck in finding a 3090, PLEASE!


----------



## 99belle99

jonRock1992 said:


> How hard is it to repaste the TIM on the Red Devil? Also, would it be safe to put liquid metal between the heatsink and die? I used liquid metal on my 1080 Ti and it made a world's difference in temps. I'm pretty sure my 6900XTU is being limited by my thermal performance. It's also power limited and I can't use MPT because I have no way to extract the bios. No bios tools work with the 6900 XTXH gpu's yet as far as I'm aware.


GPU-z will extract the bios....


----------



## jonRock1992

99belle99 said:


> GPU-z will extract the bios....


It won't for 6900 XTXH gpu's. It's not supported yet.


----------



## iTTT

The GPU-Z website already has the Liquid Devil Ultimate XTXH VBIOS.

Someone could flash this VBIOS onto an AMD reference card in Linux.


----------



## jonRock1992

Oh cool! I wonder if I can flash the liquid devil bios onto my red devil ultimate?


----------



## CS9K

iTTT said:


> gpu-z webside already has Liquid Devil Ultimate XTXH vbios.
> 
> someone can flash this vbios into AMD reference card in linux.


cc @jonRock1992 Please do _NOT_ try this, yall.

Go read through this entire thread. The whole thing. TL;DR: Don't flash any RDNA2 card. It takes a physical bios flash tool to get you out of trouble if you do.






AMD - RED BIOS EDITOR und MorePowerTool - BIOS-Einträge anpassen, optimieren und noch stabiler übertakten | Navi unlimited

Yes, it has come up once or twice in that thread, but with the XTXH BIOS at most one person, and via a different route (or I missed something). The information is sparse; across the net you find the odd morsel, alongside blowhards who also got themselves a 3090...

www.igorslab.de


----------



## weleh

CS9K said:


> cc @jonRock1992 Please do _NOT_ try this, yall.
> 
> Go read through this entire thread. The whole thing. TL;DR: Don't flash any RDNA2 card. It takes a physical bios flash tool to get you out of trouble if you do.
> 
> 
> 
> 
> 
> 
> AMD - RED BIOS EDITOR und MorePowerTool - BIOS-Einträge anpassen, optimieren und noch stabiler übertakten | Navi unlimited
> 
> Yes, it has come up once or twice in that thread, but with the XTXH BIOS at most one person, and via a different route (or I missed something). The information is sparse; across the net you find the odd morsel, alongside blowhards who also got themselves a 3090...
> 
> www.igorslab.de


lol...

I already posted about this.

I flashed my Toxic via Linux; the card works, but you still can't go higher than the XTX limits (3000-2150).
You also get voltage-locked to 1018mV, so it's useless.

I also warned that there's no image output unless you're on Windows, and since you can't force-flash on Windows, you're out of luck unless you have a second GPU or you do what I did: with a dual-BIOS GPU, boot into Linux on the good BIOS, flip the switch to the bad BIOS, and flash over it...

A bit more on topic: I determined 16,440 Gbps fast timings is the best performance point for my VRAM... Maxing the slider causes an over-10-FPS drop in Superposition 8K Optimized.
So again, I recommend everyone test it, because even though it's stable, it's dropping performance.
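The advice above boils down to "sweep and measure": the fastest stable VRAM clock is whichever one benchmarks best, not the top of the slider. A minimal Python sketch of that selection step (the clock/FPS pairs are made-up illustration values, not anyone's real results):

```python
# Pick the best VRAM clock by measured FPS, not by the highest "stable" setting.

def best_vram_clock(results: dict) -> int:
    """Return the VRAM clock (MHz) with the highest measured average FPS."""
    return max(results, key=results.get)

# Hypothetical Superposition 8K Optimized sweep: stable does not mean fastest.
sweep = {
    2000: 61.2,
    2055: 63.8,   # best performer in this made-up run
    2100: 62.9,
    2150: 53.1,   # "stable", but error correction is eating performance
}

print(best_vram_clock(sweep))  # -> 2055
```

Run each clock through the same benchmark scene a few times and feed the averages in; the drop-off past the sweet spot shows up immediately.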


----------



## jonRock1992

I got a new Time Spy high of 21511 for the graphics score! Increasing airflow a little made performance slightly better, so I believe this 6900 XT Red Devil Ultimate is in dire need of liquid cooling or a re-paste with liquid metal. AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## CS9K

weleh said:


> lol...
> 
> I already posted about this.
> 
> I flashed my Toxic via Linux; the card works, but you still can't go higher than the XTX limits (3000-2150).
> You also get voltage-locked to 1018mV, so it's useless.
> 
> I also warned that there's no image output unless you're on Windows, and since you can't force-flash on Windows, you're out of luck unless you have a second GPU or you do what I did: with a dual-BIOS GPU, boot into Linux on the good BIOS, flip the switch to the bad BIOS, and flash over it...
> 
> A bit more on topic: I determined 16,440 Gbps fast timings is the best performance point for my VRAM... Maxing the slider causes an over-10-FPS drop in Superposition 8K Optimized.
> So again, I recommend everyone test it, because even though it's stable, it's dropping performance.


I wish we could trust everyone to read entire threads, and test their setups, but alas...

Unrelated: I rolled back to 21.3.2. I could not reproduce the success some others have had with 21.4.1. On top of that, 21.4.1 kept completely crashing the driver when running FS2020, and Chrome playing YouTube with AV1 media decoded in GPU hardware would crash after about 15 seconds, like clockwork. Ah well.


----------



## newls1

CS9K said:


> I wish we could trust everyone to read entire threads, and test their setups, but alas...
> 
> Unrelated: I rolled back to 21.3.2. I could not reproduce the success some others have had with 21.4.1. As well, 21.4.1 kept completely crashing the driver when running FS2020, and Chrome, running youtube, with AV1 media playing via GPU hardware decoding, would crash Chrome after about 15 seconds, like clockwork. Ah well.


Like I posted pages ago, the 21.4 driver is terrible... I spent half my day the other day trying everything to make keeping that driver worthwhile, and everything sucked. 21.3.1 (FOR ME) was the best performer in everything


----------



## CS9K

newls1 said:


> Like I posted pages ago, the 21.4 driver is terrible... I spent half my day the other day trying everything to make keeping that driver worthwhile, and everything sucked. 21.3.1 (FOR ME) was the best performer in everything


I believed you, just wanted to try it for myself. Just adding another data point to the "wait for 21.4.2 or 21.5.1" crowd


----------



## cfranko

Why does the GPU not use the Power Limit I gave it in games? It uses all the power in like 3D Mark but not in games. It also doesn’t really push the frequency I set in games. Is there a way to force the gpu to boost and use power?


----------



## weleh

The GPU only uses what it needs.


----------



## Blameless

cfranko said:


> Why does the GPU not use the Power Limit I gave it in games? It uses all the power in like 3D Mark but not in games.


3DMark is more demanding than most games.



cfranko said:


> It also doesn’t really push the frequency I set in games. Is there a way to force the gpu to boost and use power?


Assuming you aren't limited by something other than the GPU, you can set a higher minimum frequency.


----------



## cfranko

Blameless said:


> 3DMark is more demanding than most games.
> 
> 
> 
> Assuming you aren't limited by something other than the GPU, you can set a higher minimum frequency.


After a point, it ignores the minimum frequency and drops below it regardless.


----------



## Nizzen

cfranko said:


> Why does the GPU not use the Power Limit I gave it in games? It uses all the power in like 3D Mark but not in games. It also doesn’t really push the frequency I set in games. Is there a way to force the gpu to boost and use power?


Most likely you are CPU-bound in those games, typically combined with running slow, high-latency memory too.


----------



## cfranko

Nizzen said:


> Most likely you are CPU-bound in those games, typically combined with running slow, high-latency memory too.


I have a 5600X and a 6900 XT. I tuned my memory as well; it's running at 3800 tCL15 with Gear Down Mode off, 1T, and all subtimings tuned too. I think the RAM isn't 100% stable, but it isn't causing any issues.


----------



## Nizzen

cfranko said:


> I have a 5600X and a 6900 XT. I tuned my memory as well; it's running at 3800 tCL15 with Gear Down Mode off, 1T, and all subtimings tuned too. I think the RAM isn't 100% stable, but it isn't causing any issues.


Maybe AMD does AMD things 😅


----------



## Blameless

cfranko said:


> After a point, it ignores the minimum frequency and drops below it regardless.


The only reasons it should do this is if GPU load falls due to another bottleneck, it's throttling due to a temperature or power limit, or the GDDR6 is throwing errors.
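Those three causes can be turned into a rough triage checklist. A sketch of that logic in Python; the thresholds are illustrative guesses based on common RDNA2 behavior (the ~110C junction throttle point in particular), not AMD-documented values for every card:

```python
# Rough triage for "the GPU won't hold its minimum frequency", following the
# three causes above. Thresholds are illustrative, not official AMD limits.

def why_clocks_drop(gpu_load_pct: float, hotspot_c: float,
                    board_power_w: float, power_limit_w: float) -> str:
    if gpu_load_pct < 95:
        return "another bottleneck (CPU, engine, or fps cap) is starving the GPU"
    if hotspot_c >= 110:                      # typical RDNA2 junction throttle point
        return "thermal throttling"
    if board_power_w >= power_limit_w * 0.98:  # within ~2% of the configured limit
        return "power limit reached"
    return "check for GDDR6 error correction (performance drops while 'stable')"

# Example telemetry snapshot: 82% load, 78C hotspot, 210W of a 300W limit.
print(why_clocks_drop(82, 78, 210, 300))
```

With the Warzone numbers reported later in the thread (load bouncing between 60 and 80 percent), this would point at the first branch: the GPU simply isn't being kept busy.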


----------



## cfranko

Blameless said:


> if GPU load falls due to another bottleneck


This happens. While playing Warzone, GPU load can sometimes drop all the way to 60 percent; on average the load is around 80 percent, but almost never 95-98 percent.


----------



## majestynl

cfranko said:


> This happens. While playing Warzone, GPU load can sometimes drop all the way to 60 percent; on average the load is around 80 percent, but almost never 95-98 percent.


Did you lock your fps ?


----------



## cfranko

majestynl said:


> Did you lock your fps ?


No, vsync is also off


----------



## majestynl

cfranko said:


> No, vsync is also off


Reset everything in game settings, and try in default Radeon Performance settings if it still doesn't use full GPU!

Had same issue with Cold War earlier..


----------



## majestynl

I can confirm the latest drivers (21.4.1) are not the best (bench-performance wise)! Rolled back to 21.3.2! Also achieved my best score in FS Ultra today  Finally top 10!









I scored 16 177 in Fire Strike Ultra


AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## L!ME

Hm, with my XTXH the best driver is 21.4.1 so far, but in Time Spy it has problems.








I scored 31 458 in Fire Strike Extreme


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## ptt1982

A question from a fresh Red Devil 6900XT owner. I bought the GPU used, a bit cheaper, but realized why it was cheap: it can only get up to 20700 in Time Spy with the 21.4.1 drivers. I once got a 62000 score in Fire Strike. Core 1620MHz-2620MHz, VRAM 2080MHz fast timings, voltage 1125mV, MPT 320W/340TDC. I have a Lancool II Performance with fans under the GPU and good airflow, so it stays under 66C/100C; there's some throttling in ray-traced titles, but if I lock fps to 60, the hotspot drops under 90C. Fans max out at 60%; I have the computer in another room for 100% silent operation. Running a B450 motherboard with cheap 2666MHz RAM @ 3200MHz, 18-19-19-39-58 timings. Ryzen 5600X, curve optimized -15 per core, stock frequency. NVMe + SSD.

Games like SOTR work at just about 2700MHz with ray tracing ultra; anything above that crashes. Without ray tracing I can boost up to 2770MHz at 1175mV, but it's pretty unstable. TS crashes if the CUs are above 2625MHz. I tried the two previous drivers but could not get the clock any higher in Time Spy or games (games work between 2700MHz-2770MHz depending on whether RT is on or off); the latest drivers gave me the best scores at the same clocks. I'm disappointed I cannot push the card to higher TS / Fire Strike scores, but then again I paid 1120€ for the card (which in itself is outrageous, I know, but it was essentially 20-50% less than other new 6900XTs where I live.)

Question: *Did I lose the silicon lottery?*


----------



## CS9K

ptt1982 said:


> TS crashes if CUs are above 2625mhz. I tried two previous drivers, but could not get the clock any higher on Time Spy or games.
> 
> Question: *Did I lose the silicon lottery?*


From the numbers I've seen around over the months with RX 6800 XT's and RX 6900 XT's: Yes, you do have an okay-to-poor-bin RX 6900 XT.

But wait, hear me out!

You _have_ an RX 6900 XT, in your PC, right now
Your card works, is performing as well as it can, and is not running into thermal limits
20700 graphics Time Spy score is nothing to sneeze at!
Your scores _may_ improve slightly with 3600C16 memory or faster w/ 1:1:1 ratio, but 3200C18 isn't the end of the world right now.



ptt1982 said:


> but then again I paid for the card 1120€ (which in itself is outrageous, I know, but it essentially cost 20-50% less than other new 6900XTs where I live.)


There's a lot of ways to look at losing the silicon lottery. Some may be upset, some may be fine with it, some are douchenozzles and buy/return gear until they get a good bin (never do this), which keeps prices high for _everyone_... but I digress.

All things considered, it sounds like you got one hell of a deal for a great GPU. Losing the silicon lottery stinks for folks like us that enjoy fiddling around, but! In the grand scheme of things, it sounds like you're set for gaming for the near future


----------



## ZealotKi11er

CS9K said:


> From the numbers I've seen around over the months with RX 6800 XT's and RX 6900 XT's: Yes, you do have a poor-bin RX 6900 XT.
> 
> But wait, hear me out!
> 
> You _have_ an RX 6900 XT, in your PC, right now
> Your card works, is performing as well as it can, and is not running into thermal limits
> 20700 graphics Time Spy score is nothing to sneeze at!
> Your scores _may_ improve slightly with 3600C16 memory or faster w/ 1:1:1 ratio, but 3200C18 isn't the end of the world right now.
> 
> 
> 
> There's a lot of ways to look at losing the silicon lottery. Some may be upset, some may be fine with it, some are douchenozzles and buy/return gear until they get a good bin (never do this), which keeps prices high for _everyone_... but I digress.
> 
> All things considered, it sounds like you got one hell of a deal for a great GPU. Losing the silicon lottery stinks for folks like us that enjoy fiddling around, but! In the grand scheme of things, it sounds like you're set for gaming for the near future


Does not seem like a bad bin to me. My 6900 XT also can't go over 2600MHz in TS. Most people who post are usually the ones with good scores/clock speeds.


----------



## jonRock1992

Does anyone know how I can extract the BIOS from my Red Devil Ultimate? It seems there is currently no way to extract the BIOS from the Navi 21 XTXH GPUs yet.


----------



## Pedros

ptt1982 said:


> A question from a fresh Red Devil 6900XT owner. I bought the GPU used, a bit cheaper, but realized why it was cheap: it can only get up to 20700 in Time Spy with the 21.4.1 drivers. I once got a 62000 score in Fire Strike. Core 1620MHz-2620MHz, VRAM 2080MHz fast timings, voltage 1125mV, MPT 320W/340TDC. I have a Lancool II Performance with fans under the GPU and good airflow, so it stays under 66C/100C; there's some throttling in ray-traced titles, but if I lock fps to 60, the hotspot drops under 90C. Fans max out at 60%; I have the computer in another room for 100% silent operation. Running a B450 motherboard with cheap 2666MHz RAM @ 3200MHz, 18-19-19-39-58 timings. Ryzen 5600X, curve optimized -15 per core, stock frequency. NVMe + SSD.
> 
> Games like SOTR work at just about 2700MHz with ray tracing ultra; anything above that crashes. Without ray tracing I can boost up to 2770MHz at 1175mV, but it's pretty unstable. TS crashes if the CUs are above 2625MHz. I tried the two previous drivers but could not get the clock any higher in Time Spy or games (games work between 2700MHz-2770MHz depending on whether RT is on or off); the latest drivers gave me the best scores at the same clocks. I'm disappointed I cannot push the card to higher TS / Fire Strike scores, but then again I paid 1120€ for the card (which in itself is outrageous, I know, but it was essentially 20-50% less than other new 6900XTs where I live.)
> 
> Question: *Did I lose the silicon lottery?*


I don't see that as a problem ... my 6900XT max was around 20970 on time spy but usually it’s around 20700 - 20850 and that's not bad for a stock card on a stock power envelope. Many people here don’t bench at stock. They increase TDC and so on.


----------



## ptt1982

CS9K said:


> From the numbers I've seen around over the months with RX 6800 XT's and RX 6900 XT's: Yes, you do have an okay-to-poor-bin RX 6900 XT.
> 
> But wait, hear me out!
> 
> You _have_ an RX 6900 XT, in your PC, right now
> Your card works, is performing as well as it can, and is not running into thermal limits
> 20700 graphics Time Spy score is nothing to sneeze at!
> Your scores _may_ improve slightly with 3600C16 memory or faster w/ 1:1:1 ratio, but 3200C18 isn't the end of the world right now.
> 
> 
> 
> There's a lot of ways to look at losing the silicon lottery. Some may be upset, some may be fine with it, some are douchenozzles and buy/return gear until they get a good bin (never do this), which keeps prices high for _everyone_... but I digress.
> 
> All things considered, it sounds like you got one hell of a deal for a great GPU. Losing the silicon lottery stinks for folks like us that enjoy fiddling around, but! In the grand scheme of things, it sounds like you're set for gaming for the near future


Thanks for the answer! We always need perspective here, these are 1st world problems. 

I was thinking that maybe I just have a particular card that doesn’t go well with TS, and works in games @ 2700mhz, but it is quite unstable with low voltages etc. I might actually limit the voltage using MPT because the hotspot tends to jump between 85-100 a bit too much for my taste. For this it would have been fantastic to have good silicon to keep the clocks high, but as you said, it’s a decent working card that I got at a great price (given the circumstances). Maybe the next driver likes my card and makes TS stable at 2700mhz!


----------



## ptt1982

Pedros said:


> I don't see that as a problem ... my 6900XT max was around 20970 on time spy but usually it’s around 20700 - 20850 and that's not bad for a stock card on a stock power envelope. Many people here don’t bench at stock. They increase TDC and so on.


That’s a fantastic score. Is that with a Red Devil? 

I had 19030 in TS stock speed and nothing touched. I got up to 20300 at 2620mhz/2100mhz and 300W/320TDC +15%, which is how I use it daily to keep thermals down. With 340W/360TDC + 15% I got up to 20700, but clocks won’t go higher than 2620mhz. So clock speeds stay the same with 281W and 360W (the highest I tested but it became hot and unstable). Games are stable at 2700mhz even without vsync and at 4K, but temps do get high if fans are under 60% (my Wife’s limit is at around 53%, after that she starts yelling at me about a noisy computer). 

Undervolting doesn’t work at 2600mhz, I’d have to lower to 2550 or 2500mhz max frequency if I want to limit the voltage via MPT. I may test that next, as I like the combination of slight OC and UV. Summer is coming and Tokyo gets to 40C+ at times, so I have to get ready for that...

All in all, we are talking about a couple of frames here, but it still stings. No return policies in Japan if the product works normally, could try RMA, but again, it works ok with promised boost.


----------



## Pedros

ptt1982 said:


> That’s a fantastic score. Is that with a Red Devil?
> 
> I had 19030 in TS stock speed and nothing touched. I got up to 20300 at 2620mhz/2100mhz and 300W/320TDC +15%, which is how I use it daily to keep thermals down. With 340W/360TDC + 15% I got up to 20700, but clocks won’t go higher than 2620mhz. So clock speeds stay the same with 281W and 360W (the highest I tested but it became hot and unstable). Games are stable at 2700mhz even without vsync and at 4K, but temps do get high if fans are under 60% (my Wife’s limit is at around 53%, after that she starts yelling at me about a noisy computer).
> 
> Undervolting doesn’t work at 2600mhz, I’d have to lower to 2550 or 2500mhz max frequency if I want to limit the voltage via MPT. I may test that next, as I like the combination of slight OC and UV. Summer is coming and Tokyo gets to 40C+ at times, so I have to get ready for that...
> 
> All in all, we are talking about a couple of frames here, but it still stings. No return policies in Japan if the product works normally, could try RMA, but again, it works ok with promised boost.


ok, when I said stock it wasn’t. It was at 2488 - 2600 and memory at 2150.

When I say stock I mean I did not use anything other than the AMD drivers to set up the card, so no MorePowerTool or other tweaks.
But I mean, this card is already so good at "spitting FPS"  and OC doesn't really bring a huge benefit this time around. So you're fine man  We may not have the best TS scores or whatever ... but who cares  I only use TS to track progress on my OC, then I dial it down a notch for daily use and... from there on, gaming is where we need to be


----------



## 99belle99

My reference model 6900 XT only does 2640MHz with Timespy. Anything higher and it crashes.


----------



## Pedros

2640 on max freq or min freq ?


----------



## jonRock1992

With the previous optional driver I am able to get a stable 2600min/2800max with 2100 fast timings for the vram during the Timespy stress test. Passed with around 98% stability. It hovers around 2700 to 2750 during gaming.


----------



## Pedros

OK, but you have a good sample, so that's not much in line with the topic, since people are asking about lower-tier bins, right? I can't get 2600 min.


----------



## jonRock1992

Pedros said:


> OK, but you have a good sample, so that's not much in line with the topic, since people are asking about lower-tier bins, right? I can't get 2600 min.


It's relevant because I'm saying that the previous driver yielded me better clocks.


----------



## 99belle99

Pedros said:


> 2640 on max freq or min freq ?


If I set it to 2700MHz in AMD software it will crash in Timespy and that is after using MPT.

So I set it to like 2650 or 2660 max and then run the benchmark and it would be 2640 average in Timespy results.


----------



## ptt1982

jonRock1992 said:


> With the previous optional driver I am able to get a stable 2600min/2800max with 2100 fast timings for the vram during the Timespy stress test. Passed with around 98% stability. It hovers around 2700 to 2750 during gaming.


Is this driver 21.3.2? I think I got around 20MHz more in TS on that driver.

I'm going to repaste and put extra thermal pads on this Red Devil 6900XT, because I found out I'm out of warranty since I bought it used (PowerColor's website, and the local Japanese website, say that if I bought it used, even from a shop, I have no warranty on it). I still have 3 weeks left of my shop warranty, but they won't let me return it without a flaw (they would analyze it carefully and send it back to me; high OC temps do not qualify). No teardown videos, but I saw from der8auer's Red Devil 6800XT video that I have to be careful with some of the connectors so I won't break them. (I've done this multiple times, but never with an 1100€+ card.) No warranty = I'm going to toy around with this thing. I actually don't mind a "low" OC; I do mind high temps. On the Nitro+ 5700XT I got the hotspot down 15C after repasting; hoping that will be the case here as well.

The reason I'm doing this is that there is almost a 30C difference between the GPU core temp and the hotspot. I've seen people running this thing at 55% fan speed @ 2700MHz and MPT 330W/360TDC with a hotspot of max 95C. When I turn on the ultra ray tracing setting in Shadow of the Tomb Raider at 4K, the hotspot temp tries to climb past 110C and starts to throttle. I wonder if there's shoddy paste work here, or pads that aren't well aligned, so I'll see for myself.

Question: has anyone here done a repaste on an AIB model?


----------



## jonRock1992

ptt1982 said:


> Is this driver 21.3.2? I think I got around 20MHz more in TS on that driver.
> 
> I'm going to repaste and put extra thermal pads on this Red Devil 6900XT, because I found out I'm out of warranty since I bought it used (PowerColor's website, and the local Japanese website, say that if I bought it used, even from a shop, I have no warranty on it). I still have 3 weeks left of my shop warranty, but they won't let me return it without a flaw (they would analyze it carefully and send it back to me; high OC temps do not qualify). No teardown videos, but I saw from der8auer's Red Devil 6800XT video that I have to be careful with some of the connectors so I won't break them. (I've done this multiple times, but never with an 1100€+ card.) No warranty = I'm going to toy around with this thing. I actually don't mind a "low" OC; I do mind high temps. On the Nitro+ 5700XT I got the hotspot down 15C after repasting; hoping that will be the case here as well.
> 
> The reason I'm doing this is that there is almost a 30C difference between the GPU core temp and the hotspot. I've seen people running this thing at 55% fan speed @ 2700MHz and MPT 330W/360TDC with a hotspot of max 95C. When I turn on the ultra ray tracing setting in Shadow of the Tomb Raider at 4K, the hotspot temp tries to climb past 110C and starts to throttle. I wonder if there's shoddy paste work here, or pads that aren't well aligned, so I'll see for myself.
> 
> Question: has anyone here done a repaste on an AIB model?


Yeah that's 21.3.2. It gave me an additional 50 MHz. If you tear it down, please let me know how you did it! I've been wanting to repaste my red devil ultimate because temps are pretty crappy. I could easily get a stable 2800 MHz if I had better temps.


----------



## ptt1982

Update: I am seeing up to a 40C difference between GPU and hotspot temps, typically hovering around a 33-36C delta. The previous owner has not opened this up (the seal is still on the X screws). Also, the temp difference rose 5C after I tightened the backplate, which makes me think the pads or paste aren't making full contact with the chip. Will post my repasted figures later tonight after Amazon delivers my Grizzly Kryonaut paste and Minus8 pads. This is getting interesting...
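The core-to-hotspot delta is the number worth watching here. A small sketch that classifies it; the thresholds are taken from anecdotes in this thread (good repastes landing around 20-25C, stock mounts near 30C), not from any official AMD specification:

```python
# Classify the edge-to-hotspot temperature delta as a quick mount-quality check.
# Thresholds are rules of thumb from forum anecdotes, not AMD-documented limits.

def mount_check(edge_temp_c: float, hotspot_c: float) -> str:
    delta = hotspot_c - edge_temp_c
    if delta <= 25:
        return f"delta {delta:.0f}C: good contact"
    if delta <= 35:
        return f"delta {delta:.0f}C: typical for a stock mount"
    return f"delta {delta:.0f}C: suspect mount - consider a repaste"

# 62C edge / 102C hotspot, like this card before the repaste:
print(mount_check(62, 102))
```

Log both sensors under a steady benchmark load before judging; the delta moves around a lot at idle and during load transitions.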


----------



## ZealotKi11er

Anyone tried FireStrike and what their best GPU score is. I managed 65K. My 6800XT got 59K back in November.


----------



## 6u4rdi4n

ZealotKi11er said:


> Anyone tried FireStrike and what their best GPU score is. I managed 65K. My 6800XT got 59K back in November.


My best in Fire Strike so far is 64 075. I might be limited by my 9900K.


----------



## CS9K

ptt1982 said:


> Update: I am seeing up to a 40C difference between GPU and hotspot temps, typically hovering around a 33-36C delta. The previous owner has not opened this up (the seal is still on the X screws). Also, the temp difference rose 5C after I tightened the backplate, which makes me think the pads or paste aren't making full contact with the chip. Will post my repasted figures later tonight after Amazon delivers my Grizzly Kryonaut paste and Minus8 pads. This is getting interesting...


Oof, it does sound like your mount isn't the greatest. Even on my reference RX 6900 XT before I unmounted the cooler for the first time, when overclocked, the temp delta wasn't more than 30C at full load.

I recommend a thick paste like Hydronaut or Gelid GC-Extreme. Thinner pastes like Kryonaut, NT-H1/H2, and others are oilier and will pump out over time on a bare die; they're better suited to heat-spreader applications.


----------



## ptt1982

CS9K said:


> Oof, it does sound like your mount isn't the greatest. Even on my reference RX 6900 XT before I unmounted the cooler for the first time, when overclocked, the temp delta wasn't more than 30C at full load.
> 
> I recommend a thick paste like Hydronaut or Gelid GC-Extreme. Thinner pastes like Kryonaut, NT-H1/H2, and others are oilier and will pump out over time on a bare die; they're better suited to heat-spreader applications.


It does sound like a bad mount, doesn't it. Were you able to reduce the delta by applying a new paste? 

Thanks for the paste tips. I wish I read your post last night before I ordered 5.5g of Kryonaut from Amazon! I'm on a holiday today, so I wanted to get the paste today so that I can finalize this project to concentrate on gaming the rest of the break (although I probably enjoy the tuning part more than gaming). I'm thinking of using the existing pads and putting them under the backplate if there are no pads between the backplate and the PCB of the Red Devil.


----------



## Pedros

question to the group ... is liquid metal worth it? 

Heard that it has some good effect reducing that hotspot. I have some liquid metal I can use, but will only use it if worth it ( have to isolate everything with polyimide tape... pfff )


----------



## dagget3450

just got my waterblock in, i wonder if i will see much difference from stock cooler, i have the reference model. anyone know?


----------



## CS9K

dagget3450 said:


> just got my waterblock in, i wonder if i will see much difference from stock cooler, i have the reference model. anyone know?


It's a _world_ of difference.
Reference RX 6900 XT with the EK block, 420W card limit, hotspot tops out at 73C under benchmark load, core temp ~50-55C, all other temps stay below 60C. It's _awesome_! See my other posts for thermal paste recommendations (I use GC-Extreme)



ptt1982 said:


> It does sound like a bad mount, doesn't it. Were you able to reduce the delta by applying a new paste?
> 
> Thanks for the paste tips. I wish I read your post last night before I ordered 5.5g of Kryonaut from Amazon! I'm on a holiday today, so I wanted to get the paste today so that I can finalize this project to concentrate on gaming the rest of the break (although I probably enjoy the tuning part more than gaming). I'm thinking of using the existing pads and putting them under the backplate if there are no pads between the backplate and the PCB of the Red Devil.


I would say, you'd be _okay_ with the Kryonaut, but over the months, expect to see temps creep up and the need to re-paste to come along in a year or so, shorter if you really push the card.

The Kryonaut is absolutely worth keeping, as it is, by most measures, one of the best pastes there is for putting on top of metal heat spreaders. If you _can_ get hold of Hydronaut or GC-Extreme in the near future, they're _usually_ not horribly expensive, and it would be worth the wait to do the re-paste once and be done with it.

As for thermal pads, be warned, if you pad the back, it'll transmit coil whine into the backplate (which becomes a speaker for the whine). Many folks with EK backplates, and one reference card w/pads I've heard about, had hideous coil whine after padding the back, mine included. I had to tighten the backplate down completely, turn on a benchmark, then tighten down each of the four backplate screws JUST enough to touch the plate, to prevent hearing the coil whine in the next room.

Also, the card NEEDS the pads that are on the front, so if you're going to add pads, get new ones for the back.


----------



## iTTT

99belle99 said:


> My reference model 6900 XT only does 2640MHz with Timespy. Anything higher and it crashes.


yes, same as mine...


----------



## ptt1982

CS9K said:


> It's a _world_ of difference.
> Reference RX 6900 XT with the EK block, 420W card limit, hotspot tops out at 73C under benchmark load, core temp ~50-55C, all other temps stay below 60C. It's _awesome_! See my other posts for thermal paste recommendations (I use GC-Extreme)
> 
> 
> 
> I would say, you'd be _okay_ with the Kryonaut, but over the months, expect to see temps creep up and the need to re-paste to come along in a year or so, shorter if you really push the card.
> 
> The Kryonaut is absolutely worth keeping, as it is, by most measures, one of the best pastes there is for putting on top of metal heat spreaders. If you _can_ get hold of Hydronaut or GC-Extreme in the near future, they're _usually_ not horribly expensive, and it would be worth the wait to do the re-paste once and be done with it.
> 
> As for thermal pads, be warned, if you pad the back, it'll transmit coil whine into the backplate (which becomes a speaker for the whine). Many folks with EK backplates, and one reference card w/pads I've heard about, had hideous coil whine after padding the back, mine included. I had to tighten the backplate down completely, turn on a benchmark, then tighten down each of the four backplate screws JUST enough to touch the plate, to prevent hearing the coil whine in the next room.
> 
> Also, the card NEEDS the pads that are on the front, so if you're going to add pads, get new ones for the back.


Thanks again for a lot of useful info. Just to confirm before I start my operation (which is in an hour or so): I should not change the pads in the front? I got Minus8 pads I was thinking of putting in the front and moving Red Devil's own used pads under the backplate, backside.

Interestingly, I could run Superposition at 2775mhz, fans at 100%, memory at 2115mhz fast timings, and MPT 350W/380TDC. Got 17914 points which is actually not bad. With further testing, it's fully stable at 2770mhz for rasterization without RT, and with RT 2710mhz. I guess it's not a bad card after all, it's just that Timespy doesn't like it, but everything else does. Temps stayed under 100C just about, but again, a delta of 33-38C, so I have to fix that.


----------



## ptt1982

Quick update after repasting: the GPU core dropped 5C and the junction by 20C. Now it never goes past 95C (within 22C of the GPU temp), whereas before it throttled at 110C worst case (40C above the GPU temp). In normal scenarios it stays under 90C, including Time Spy graphics tests 1 and 2, where it peaked at 108C earlier.

Now, I realized I put 1mm thermal pads on the VRAM, and I think they're not touching the heatsink properly, or only very slightly, so I will change them asap. 2mm pads are out of stock in Japan right now, so I have to run it as-is for a while, probably with no OC on the VRAM and the card undervolted. Regardless, junction temps and temps overall dropped significantly, so the success here is that I know I can get this card running perfectly with a little bit of tuning.

Happy days!


----------



## jimpsar

ZealotKi11er said:


> Anyone tried FireStrike and what their best GPU score is. I managed 65K. My 6800XT got 59K back in November.


Merc 319 Black 6900xt here
65321 Gpu FireStrike Score.


----------



## Pedros

Sorry to ask again but has anyone tried liquid metal ? Is it worth it vs gc-extreme for example?


----------



## CS9K

Pedros said:


> Sorry to ask again but has anyone tried liquid metal ? Is it worth it vs gc-extreme for example?


For me, it's not worth the trouble, no. It would work better, sure, but for the trouble I would just use that time and research custom loop water cooling. It's not near as bad nor expensive as it once was. Start saving and go for it when the time is right :3


----------



## CS9K

ptt1982 said:


> Thanks again for a lot of useful info. Just to confirm before I start my operation (which is in an hour or so): I should not change the pads in the front? I got Minus8 pads I was thinking of putting in the front and moving Red Devil's own used pads under the backplate, backside.
> 
> Interestingly, I could run Superposition at 2775mhz, fans at 100%, memory at 2115mhz fast timings, and MPT 350W/380TDC. Got 17914 points which is actually not bad. With further testing, it's fully stable at 2770mhz for rasterization without RT, and with RT 2710mhz. I guess it's not a bad card after all, it's just that Timespy doesn't like it, but everything else does. Temps stayed under 100C just about, but again, a delta of 33-38C, so I have to fix that.


I should have clarified when I mentioned not changing pads out: For the reference model, one should _not_ change out the pads on the front of the card, nor should anyone sub-in aftermarket pads for ones that come with EK's water blocks. Use the included pads, especially for EK water blocks, else there will be mounting issues. As for aftermarket cards, I don't have any experience there, so I can't comment on that with any amount of certainty.

Regarding benchmark scores: Superposition is VERY forgiving with stability, and IMO Superposition should only be used as a performance metric, not a stability test.

*I see so many posts of "I can only run xxxxMHz in Time Spy/Port Royal, but i'm stable at '+100MHz' in everything else"*.

In my experience, some/most of yall are _not actually_ stable at +100MHz over what TS/PR are stable at, you just haven't crashed _*yet*_. I _highly_ recommend tuning your global profile based on Time Spy and *Heaven benchmarks, and only increase clocks after a few tens of hours of confirmed stability in each game that you play. 

This has been my *experience, anyway, with my reference RX 6900 XT under water, and my home-office-pc's reference RX 6800 on air in my Ncase M1.

* The only caveat to my opinion above is that non-RT games can run a bit faster core clock speed than RT games. But! I would still base my non-RT tune off of Time Spy/Firestrike stability, and likewise my RT tune off of Port Royal and/or Control stability (w/RT Reflections, Transparent Reflections, and Indirect Diffuse Lighting enabled).

* My best Heaven stability results are using the exact settings pictured (yes, it _needs_ to be fullscreen too, else it doesn't push the GPU as hard as it can otherwise).


----------



## jonRock1992

jimpsar said:


> Merc 319 Black 6900xt here
> 65321 Gpu FireStrike Score.


What clocks? And did you use MPT? STILL can't use MPT with my 6900XTXH card. Waiting for GPU-Z support so I can extract the BIOS.


----------



## CS9K

jonRock1992 said:


> What clocks? And did you use MPT? STILL can't use MPT with my 6900XTXH card. Waiting for GPU-Z support so I can extract the BIOS.


Check out the Igor's Lab RBE/MPT thread. I think they found a flash tool that can talk to the XTXH cards via linux. I believe if it can see the card, it'll be able to extract the bios.

I don't know if MPT is updated quite yet, you'll have to read through the most recent few pages of the thread for that one.


----------



## Pedros

CS9K said:


> For me, it's not worth the trouble, no. It would work better, sure, but for the trouble I would just use that time and research custom loop water cooling. It's not near as bad nor expensive as it once was. Start saving and go for it when the time is right :3


My card will be under water as soon as Alphacool releases the block ( middle of May ).

That's why I'm trying to understand if I should use liquid metal or "normal" paste. I've heard the liquid metal will be better but at the same time... wanted to have some more feedback


----------



## jonRock1992

Pedros said:


> My card will be under water as soon as Alphacool releases the block ( middle of May ).
> 
> That's why I'm trying to understand if I should use liquid metal or "normal" paste. I've heard the liquid metal will be better but at the same time... wanted to have some more feedback


I would personally use liquid metal. I have experience with it, and it's always better. Just have to take the necessary precautions and apply it correctly.


----------



## Pedros

I've applied it to CPUs... never did it on GPUs though  but my guess is it's gonna be similar


----------



## 99belle99

Problem with liquid metal is that it can start to run once hot and short out nearby components, as it's conductive.


----------



## CS9K

Pedros said:


> My card will be under water as soon as Alphacool releases the block ( middle of May ).
> 
> That's why I'm trying to understand if I should use liquid metal or "normal" paste. I've heard the liquid metal will be better but at the same time... wanted to have some more feedback


Ah, I get you. Right now, I use Gelid GC-Extreme on my reference RX 6900 XT. Under full benchmark load (2750MHz on the Adrenalin Core-speed slider, core ~330-340W, Card ~390-410W), hotspot tops out at 73C, Core 55C-ish, VRM, Memory Junction, and all other temps all well under 60C. Knock 5-10C off of each number for normal gaming loads at native resolution on my 4k120 LG CX 48" OLED panel. Likewise, core/hotspot temps on the b/f's 3080FE under water are nice and cool with GC-Extreme on _his_ LG CX 48" OLED panel.

In my opinion, a quality, thick paste like Hydronaut, Gelid GC-Extreme, etc., will get you there, won't pump out*, and relieves you of the trouble and worry that come with Liquid Metal.

*will pump out very, very slowly compared to a thinner paste like Kryonaut, NT-H1/H2, etc.


----------



## jimpsar

jonRock1992 said:


> What clocks? And did you use MPT? STILL can't use MPT with my 6900XTXH card. Waiting for GPU-Z support so I can extract the BIOS.


Yep, with MPT. Used clocks from LtMatt. 









I scored 38 813 in Fire Strike

Intel Core i9-10850K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10

www.3dmark.com


----------



## jonRock1992

Sweet. I'm not sure what LtMatt is lol.


----------



## welnaseth

Couple of noob questions here:

1. Does Port Royal ever just stop the benchmark mid-run with no errors showing up in either 3DMark or the Radeon Software? I would take this as a sign of instability, but I've literally only increased the power limit by 15% in the Radeon software and changed the fan curve, nothing else. It's an EVGA Supernova 850 P2 power supply, so I don't think it's a power delivery issue? I'm just wondering if I should be taking this as a sign of instability or not...

2. I've only overclocked an R9 290 before and have heard the process has changed quite a bit, on top of these cards having some nuances that need to be taken into account. Is there a good guide for these cards somewhere in this thread that I could follow? I have a reference card with the stock cooler, so I doubt MPT is in the realm of feasibility at the moment, but I want to get as much performance as I can without it and see where my junction temps are.

3. When I've tried in the past to overclock this card, sometimes it seems to ignore the MHz range I set. Is that a sign of instability, or is it the Radeon software/drivers being weird?

Thanks!


----------



## jonRock1992

ptt1982 said:


> Quick update after repasting: GPU core dropped 5C and Junction by 20C. Now it never goes past 95 (within 22C from GPU temp) whereas it throttled at 110C (within 40C from GPU temp) before worst case scenario. In normal scenarios it stays under 90C, including Time Spy Graphics test 1 and 2 where it peaked to 108C earlier.
> 
> Now, I realized I put 1mm thermal pads on the VRAMs and I think it's not touching the heatsink properly or very very slightly, so I will be changing them asap. 2mm pads are out of stock in Japan right now, so I have to run it as is for a while, probably no OC on VRAMs and undervolt the card. Regardless, the junction temps and temps overall significantly dropped significantly so the success here is that I know I can get this card running perfectly with a little bit of tuning.
> 
> Happy days!


Care to share how to tear down the gpu?


----------



## ptt1982

CS9K said:


> I should have clarified when I mentioned not changing pads out: For the reference model, one should _not_ change out the pads on the front of the card, nor should anyone sub-in aftermarket pads for ones that come with EK's water blocks. Use the included pads, especially for EK water blocks, else there will be mounting issues. As for aftermarket cards, I don't have any experience there, so I can't comment on that with any amount of certainty.
> 
> Regarding benchmark scores: Superposition is VERY forgiving with stability, and IMO Superposition should only be used as a performance metric, not a stability test.
> 
> *I see so many posts of "I can only run xxxxMHz in Time Spy/Port Royal, but i'm stable at '+100MHz' in everything else"*.
> 
> In my experience, some/most of yall are _not actually_ stable at +100MHz over what TS/PR are stable at, you just haven't crashed _*yet*_. I _highly_ recommend tuning your global profile based on Time Spy and *Heaven benchmarks, and only increase clocks after a few tens of hours of confirmed stability in each game that you play.
> 
> This has been my *experience, anyway, with my reference RX 6900 XT under water, and my home-office-pc's reference RX 6800 on air in my Ncase M1.
> 
> * The only caveat to my opinion above is that non-RT games can run a bit faster core clock speed than RT games. But! I would still base my non-RT tune off of Time Spy/Firestrike stability, and likewise my RT tune off of Port Royal and/or Control stability (w/RT Reflections, Transparent Reflections, and Indirect Diffuse Lighting enabled).
> 
> * My best Heaven stability results are using the exact settings pictured (yes, it _needs_ to be fullscreen too, else it doesn't push the GPU as hard as it can otherwise).
> View attachment 2488753


Thanks for the reply. I changed the pads to 1mm pads and they didn't touch heatsink and VRAMs went up to 90C as they were. Now, I bought Thermalright's 2mm pads and changed them, and now the hotspot on GPU goes way too high, so I think the original pads were simply 1.5mm, and there is no good connection to the heatsink. I do have the original pads still, but they are all squished. They still work, though, and I can put them on. 

On the Red Devil, I advise you guys not to change the thermal pads or tune it too much, but repasting it works wonders for the GPU temps. Just leave the pads as they were. Now, for the fourth time, I need to open it up and tune it back to normal to get it working. Jeez, it took 4 hours of my time today already to get back to normal temps, but I'll do it.


----------



## ptt1982

jonRock1992 said:


> Care to share how to tear down the gpu?


I did a teardown on Red Devil 6900xt (not ultimate).

1) Unscrew all the screws you can see on the backplate
2) Remove two cables, fan (white with colored cables) and another white one on the side of the PCB
3) Carefully lift the heatsink, there’s another black cable under the top of the card (the red devil rgbs), disconnect the cable (it’s linked to another black cable) before fully lifting the heatsink (or lift the heatsink to a standing position next to the pcb and then remove it)
4) (optional) if you want to access what’s under the backplate remove five or six screws (forgot how many) that hold the backplate on the PCB, they are near the gpu chip and memory modules (make sure you don’t remove the hdmi/dp holder screws)
5) (optional) remove one (white) cable from backplate, it gives led light to the red devil logo before fully removing the backplate from the pcb

If you add pads: ONLY use 1.5mm pads, 2mm are too high and 1mm too low for the heatsink to be mounted well (believe me, tried all of them).

Changing to high-quality pads dropped my VRAM temps around 5-7C, and changing the thermal paste dropped the GPU hotspot by 15-20C. In my case the original paste job was sloppy, which is why I did this operation. My card is also used, so I had nothing to lose (a too-hot card or a broken card, it's the same to me). Now the temps are great. I used Thermal Grizzly Kryonaut, and I'm aware I have to repaste once a year or so.

Update on clocks and further testing:
My final stable clocks 2620mhz core, 2100mhz memory. Daily use 2600mhz / 2080mhz. Boost between 2480mhz-2580mhz (higher lows and higher highs after repasting). It does work in games at 2750mhz core (no crashes without vsync, 4K couple of hours of testing, crashes may come later) and ray tracing with 2670mhz (4K no vsync).

Previous driver gave me 2640mhz at Timespy, now it’s lower at 2620mhz, but scores are the same. Instant crashes start at 2780mhz, with ray tracing at 2750mhz. Best TS score 20738 using MPT. My card likes power up to 350W/380TDC, after that scaling ends due to the silicon not being able to push clocks higher.

It’s sad that I can’t push the card past 2750mhz in TS and ray tracing, because the temps even on air would allow it with a good 50% fan speed. I’m hoping a new vBIOS from Powercolor and better drivers from AMD will fix this (for example, 1.2 vcore could help). My clocks seem quite typical for the basic Red Devil though, and most of the extra performance came from MPT. Daily use TS is around 20350, which is 11% higher than my previous fine-tuned TUF RTX 3080, which I sold after realizing I don’t need DLSS or ray tracing and prefer higher raster performance and VRAM capacity.

Done with tuning and testing (used 4 days of my 11 day annual leave on it) until new drivers hit. I think I’ll play some games now!


----------



## jimpsar

jonRock1992 said:


> Sweet. I'm not sure what LtMatt is lol.


A quite experienced user.  
Here is his YouTube channel.


----------



## jonRock1992

Oh sweet! His card seems to run right at around the same settings as mine. Those merc cards might as well be branded as XTXH lol.


----------



## newls1

need y'alls opinion here.... as some of you may know, I've had a waterblocked Asus TUF 6900XT that was a decent clocker, but sold it off to a local friend of mine with the hopes of scooping up a 3090, and obviously that hasn't worked out as planned.. I'm getting impatient and wanting a fast card again, but really wanted DLSS support. My local MC has had the Powercolor Liquid Devil 6900XT (non-ultimate version) in stock for a few days now. What are your thoughts on that card _maybe_ having a better-binned GPU for slightly better OCs, and hoping maybe AMD's FSR will drop soon and be supported by game devs?? I can grab this card up pretty quick on my next day off, just want your opinions?? Thanks

This is the card in question:

www.newegg.com


----------



## CS9K

newls1 said:


> need y'alls opinion here.... as some of you may know, ive had a waterblocked Asus Tuf 6900XT that was a decent clocker but sold it off to a local friend of mine with the hopes of scooping up a 3090, but obviously that hasent worked out as planned.. Im getting complacent and wanting a fast card again but really wanted DLSS support. My local MC has the Powercolor Liquid devil 6900XT (non ultimate version) In stock for a few days now. What are your thoughts on that card _Maybe_ having a better binned GPU for slightly better OC's and hoping maybe AMD's FSR will drop soon and be supported by game devs?? I can grab this card up pretty quick on my next day off, just want your opinions?? Thanks
> 
> This is the card in question:
> www.newegg.com


I've been telling the folks that hang around the Dallas Micro Center discord not to even consider AIB AMD cards, as they are _so_ ludicrously overpriced that it went past anger-inducing to comical, then came back around to anger-inducing.

If you can be there for AMD's storefront drops every week, usually on Thursdays between 10am and noon CST, then that's your best bet to get a reference AMD GPU. You HAVE to be there, ready, logged in to Paypal, and be paying attention when the drop starts, since there's a single green-light of inventory, and that's it... no trickle-in stock like Best Buy.


----------



## newls1

I’m not sure how that helps my question? I have no issues getting a 6900xt card


----------



## CS9K

newls1 said:


> I’m not sure how that helps my question? I have no issues getting a 6900xt card


If you're comfortable paying the premium that AIB's currently command, then it sounds like you want it. My whole ramble was about how you could likely get a reference card from AMD for the $999 reference card price in 1-3 tries, vs paying the obscene AIB prices.

But, that's up to you.


----------



## majestynl

jonRock1992 said:


> What clocks? And did you use MPT? STILL can't use MPT with my 6900XTXH card. Waiting for GPU-Z support so I can extract the BIOS.


Here is the bios for the liquid devil ultimate:

Powercolor RX 6900 XT VBIOS - 16 GB GDDR6, 500 MHz GPU, 914 MHz Memory

www.techpowerup.com


----------



## cfranko

majestynl said:


> Here is the bios for the liquid devil ultimate:
> 
> Powercolor RX 6900 XT VBIOS - 16 GB GDDR6, 500 MHz GPU, 914 MHz Memory
> www.techpowerup.com


Is this bios flashable to a non powercolor 6900 xt?


----------



## majestynl

cfranko said:


> Is this bios flashable to a non powercolor 6900 xt?


Bios flashing on the 6x series doesn't work yet ! Don't even try it! 

I attached the URL so he can use it for MPT !


----------



## majestynl

Delete post!


----------



## The Stilt

majestynl said:


> Bios flashing on the 6x series doesn't work yet ! Don't even try it!
> 
> I attached the URL so he can use it for MPT !


Are you referring to the missing override command(s) or to something else?


----------



## majestynl

The Stilt said:


> Are you referring to the missing override command(s) or to something else?


The Guy needs the bios file so he can load it into MPT to override the standard values!


----------



## The Stilt

majestynl said:


> The Guy needs the bios file so he can load it into MPT to override the standard values!


I meant the part where you said "bios flashing on the 6x series doesn't work yet".


----------



## ptt1982

Final testing with Red Devil 6900XT after repasting, and changing to driver 21.3.2 using MPT 340W/360TCD + 15% PL:

Timespy

*Stock: 18430* (my previous "stock" was actually Automatic, not actually the real stock. Realized this recently.)
*Daily OC* 2600Core/2080VRAM: *20692 *(12.27% improvement)
*Max OC* 2656Core/2086VRAM: *21022* (14.06% improvement)

Can't complain! This thing overclocks nicely when the drivers support it. Anything above 2656 core or 2086 vram makes it unstable. Let's see if 2650mhz / 2085mhz would be the sweet spot for gaming.
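For anyone double-checking those percentages, they're just score ratios over the stock run; a quick sketch with the numbers from the results above:

```python
# Quick check of the Time Spy improvement percentages quoted above.
def improvement(baseline: int, score: int) -> float:
    """Percent gain of `score` over `baseline`."""
    return (score / baseline - 1) * 100

stock = 18430
daily_oc = 20692   # 2600 core / 2080 VRAM
max_oc = 21022     # 2656 core / 2086 VRAM

print(f"Daily OC: +{improvement(stock, daily_oc):.2f}%")  # +12.27%
print(f"Max OC:   +{improvement(stock, max_oc):.2f}%")    # +14.06%
```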


----------



## ZealotKi11er

ptt1982 said:


> Final testing with Red Devil 6900XT after repasting, and changing to driver 21.3.2 using MPT 340W/360TCD + 15% PL:
> 
> Timespy
> 
> *Stock: 18430* (my previous "stock" was actually Automatic, not actually the real stock. Realized this recently.)
> *Daily OC* 2600Core/2080VRAM: *20692 *(12.27% improvement)
> *Max OC* 2656Core/2086VRAM: *21022* (14.06% improvement)
> 
> Can't complain! This thing overclocks nicely when the drivers support it. Anything above 2656 core or 2086 vram makes it unstable. Let's see if 2650mhz / 2085mhz would be the sweet spot for gaming.
> 
> View attachment 2489135


If TS can run 2650, games can probably do 2700 or even more. I played hours of CP2077 with 2750MHz while TS could only pass 2600MHz.


----------



## LtMatt

ZealotKi11er said:


> If TS can run 2650, games can probably do 2700 or even more. I played hours of CP2077 with 2750MHz while TS could only pass 2600MHz.


Yep i see similar tbf. 

Games i can run 2755Mhz in Radeon Software, all day long no crashes, even games with RT that push power draw up 40Watts like Dirt 5. 

Firestrike Extreme i can run 2830. Timespy can only run around 2700, Timespy Extreme around 2690.


----------



## majestynl

The Stilt said:


> I meant the part where you said "bios flashing on the 6x series doesn't work yet".


Aha... No, not the override commands. It just doesn't work (properly) after flashing. And some will get a headache if the card won't boot normally anymore...


----------



## majestynl

Got myself a new toy. Let's see what this thing can do...


----------



## jimpsar

LtMatt said:


> Yep i see similar tbf.
> 
> Games i can run 2755Mhz in Radeon Software, all day long no crashes, even games with RT that push power draw up 40Watts like Dirt 5.
> 
> Firestrike Extreme i can run 2830. Timespy can only run around 2700, Timespy Extreme around 2690.


Nice mate!! 
With the settings you gave???


----------



## ZealotKi11er

majestynl said:


> Got myself a new toy. Let's see what this thing can do...
> 
> View attachment 2489188


Want to see what the out of box clock is.


----------



## ptt1982

ZealotKi11er said:


> If TS can run 2650, games can probably do 2700 or even more. I played hours of CP2077 with 2750MHz while TS could only pass 2600MHz.


Editing my post here after testing further:

I have found my max OC settings finally: Rasterization @2735mhz will start exhibiting artifacts in toughest spots, 2730mhz is fully stable. RT hangs at 2700mhz, but is stable at 2680mhz, set my clocks to 2670mhz for the extra stability's sake, 2720mhz for Raster. This is around 80mhz boost from stable TS scores for raster and 30mhz for RT. I will still be running the card in UV mode at 1050mv vcore, 1025mv soc due to it being so fast. Happy to know I can crank the speed up if required later, though.

Overall I believe the 6900xt can be pushed further with better drivers. It hangs at 2780mhz-2790mhz, which means that's where silicon quality comes into play most. It's great to see that with unstable drivers and relatively normal silicon lottery results you can push it past what an OC'd RTX 3090 can do in rasterization. When you read reviews saying the 6900xt can only gain around +3% performance through OC, it makes you wonder whether the reviewers really did their job. I mean, +14% in TS and +17% in games is an incredible result for a high-end card.

Finally satisfied with the card as is.


----------



## nyk20z3

newls1 said:


> need y'alls opinion here.... as some of you may know, ive had a waterblocked Asus Tuf 6900XT that was a decent clocker but sold it off to a local friend of mine with the hopes of scooping up a 3090, but obviously that hasent worked out as planned.. Im getting complacent and wanting a fast card again but really wanted DLSS support. My local MC has the Powercolor Liquid devil 6900XT (non ultimate version) In stock for a few days now. What are your thoughts on that card _Maybe_ having a better binned GPU for slightly better OC's and hoping maybe AMD's FSR will drop soon and be supported by game devs?? I can grab this card up pretty quick on my next day off, just want your opinions?? Thanks
> 
> This is the card in question:
> www.newegg.com


I would be patient and get the 3090


----------



## Maracus

Picked up a 6900XT MSI Gaming Trio last week. Not looking to push it too far atm because I lost the bag of cables that came with my Seasonic Prime Platinum 1000W PSU and only have two cables split into three running it.

Default clock is 2539mhz. Ran a couple of benchmarks (TS, FS, Superposition) undervolted to 1075mv, but HWiNFO64 only showed a max of 993mv.

Going to try and order another cable from somewhere before I push more power through it. I suspect it would be okay, but I don't feel like melting PCIe cables.


----------



## The Stilt

Maracus said:


> Picked up a 6900XT MSI Gaming Trio last week. Not looking to push it to far atm because I lost the bag of cables that came with my Seasonic Prim Platinum 1000w PSU and Only have 2 cables split into three running it.
> 
> Default clock is 2539mhz, ran a couple benchmarks (TS, FS Superpostion) undervolted to 1075mv but HWINFO64 only showed max mv at 993mv.
> 
> Going to try and order another cable from somewhere before I go pushing the power through it, although I suspect it would be ok but don't feel like melting PCIE cables


The voltage value in Radeon Settings Performance-tab has nothing to do with the absolute voltage.
It is an offset and the default value only indicates the maximum theoretical value, allowed by the SMU. Why AMD decided to display it as such is a very good question...
Hence, there are no guarantees that the maximum voltage will ever be met, as the actual voltage is decided by the AVFS (based on power and current limits, and several co-efficients).

So in case of the MBA cards and their Vmaxes: 1025mV for 6800, 1150mV for 6800 XT and 1175mV for 6900 XT.
Setting "1000mV" on these cards means: -25mV for 6800, -150mV for 6800 XT and -175mV for 6900 XT.

So in practical terms, at default settings, there is no real way to control the absolute voltage on these cards; only relative changes can be made.
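To make those numbers concrete: the slider value is effectively an offset below each SKU's Vmax. A minimal sketch (Vmax table taken from the post above; the function name is my own):

```python
# The Adrenalin voltage slider is relative to the SKU's SMU-allowed
# maximum (Vmax), not an absolute voltage. MBA Vmax values per above.
VMAX_MV = {
    "RX 6800": 1025,
    "RX 6800 XT": 1150,
    "RX 6900 XT": 1175,
}

def slider_offset_mv(card: str, slider_mv: int) -> int:
    """Implied offset vs Vmax (negative = undervolt)."""
    return slider_mv - VMAX_MV[card]

# Setting "1000mV" on each MBA card:
for card in VMAX_MV:
    print(f"{card}: {slider_offset_mv(card, 1000):+d}mV")
# RX 6800: -25mV, RX 6800 XT: -150mV, RX 6900 XT: -175mV
```

This also matches the 6900 XT undervolt discussed below: a 1075mV slider setting works out to a -100mV offset.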


----------



## ZealotKi11er

The Stilt said:


> The voltage value in Radeon Settings Performance-tab has nothing to do with the absolute voltage.
> It is an offset and the default value only indicates the maximum theoretical value, allowed by the SMU. Why AMD decided to display it as such is a very good question...
> Hence, there are no guarantees that the maximum voltage will ever be met, as the actual voltage is decided by the AVFS (based on power and current limits, and several co-efficients).
> 
> So in case of the MBA cards and their Vmaxes: 1025mV for 6800, 1150mV for 6800 XT and 1175mV for 6900 XT.
> Setting "1000mV" on these cards means: -25mV for 6800, -150mV for 6800 XT and -175mV for 6900 XT.
> 
> So in pratical terms, at default settings, there is no way to really to control the absolute voltage on these cards, and only relative changes can be made.


Setting a 6900 XT to 1075mV without touching the core clock means he is undervolting by 100mV. If he increases the core clock, the entire curve shifts. You can still use MPT to limit the max vcore.


----------



## jonRock1992

majestynl said:


> Here is the bios for the liquid devil ultimate:
> 
> Powercolor RX 6900 XT VBIOS - 16 GB GDDR6, 500 MHz GPU, 914 MHz Memory
> www.techpowerup.com


Thank you for this! It worked like a charm with my Red devil ultimate. I put the power limit to 370W. Now I just need to do a repaste so I can actually get decent temps with this thing lol. Is there a way to do a repaste without voiding the warranty? My junction temp was hitting 105C with this power limit. Junction temp is in the 80's and 90's in games.


----------



## LtMatt

jimpsar said:


> Nice mate!!
> With the settings you gave???


Yep.


----------



## ptt1982

jonRock1992 said:


> Thank you for this! It worked like a charm with my Red devil ultimate. I put the power limit to 370W. Now I just need to do a repaste so I can actually get decent temps with this thing lol. Is there a way to do a repaste without voiding the warranty? My junction temp was hitting 105C with this power limit. Junction temp is in the 80's and 90's in games.


From my experience, not really. It’s difficult to find the exact same factory paste (they’d know you changed it if you RMA the card), and not to mess up the sticker on the X screw on the back of the PCB. The whole idea is not to reuse the same paste/pads due to their bad quality (and often sloppy application).

You can see a quick guide how to tear it down on pages 76-78 (I forgot which one.) My junction temps dropped 15-20C, VRAM 5-7C. Highly recommended for Red Devil, but be careful not to break anything (cables, connectors etc.) There were good recommendations for thermal paste there as well.


----------



## jonRock1992

I wonder what thermal paste PowerColor uses? It's really not that good if temps are climbing into the high 90's. If I knew, I could replace the paste with Kryonaut and then, if I ever had to RMA, put the factory paste back on.


----------



## smokedawg

Got a 6900XT Liquid Devil Ultimate to go with my new build. Out of the box I get 2685-2705 MHz SCLK and 1000 MHz MCLK in Shadow of the Tomb Raider. VDDGFX shows 1192mV and ~336W max. Currently waiting for an EK Shipment - until then I have this card and a 5950x running on one 360 radiator. Temperature Delta was about 16-17°C during SotTR benchmark (water got up to 33, GPU to 49). I cannot read junction temp under Linux I think. I will try to install Windows on a second disk when I get time.
Really happy coming from a 2500k and a 290x.



> cat /sys/class/drm/card0/device/pp_od_clk_voltage
> OD_SCLK:
> 0: 500Mhz
> 1: 2614Mhz
> 
> OD_MCLK:
> 0: 97Mhz
> 1: 1000MHz
> 
> OD_VDDGFX_OFFSET:
> 0mV
> 
> OD_RANGE:
> SCLK: 500Mhz 4000Mhz
> MCLK: 674Mhz 1312Mhz


The output stays the same regardless whether I switch to "OC" or "Unleashed" BIOS.
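If anyone wants to script against that interface, here's a rough sketch that parses the pp_od_clk_voltage text into sections. The "OD_XXX:" layout matches the output pasted above, but field names can vary by kernel version, so treat the format as an assumption:

```python
import re

# Parse amdgpu's pp_od_clk_voltage sysfs file into {section: [lines]}.
# Section headers look like "OD_SCLK:"; everything until the next
# header belongs to the current section.
def parse_pp_od(text: str) -> dict:
    sections = {}
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        m = re.match(r"^(OD_\w+):$", line)
        if m:
            current = m.group(1)
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return sections

sample = """OD_SCLK:
0: 500Mhz
1: 2614Mhz
OD_MCLK:
0: 97Mhz
1: 1000MHz
OD_VDDGFX_OFFSET:
0mV
OD_RANGE:
SCLK: 500Mhz 4000Mhz
MCLK: 674Mhz 1312Mhz"""

print(parse_pp_od(sample)["OD_SCLK"])  # ['0: 500Mhz', '1: 2614Mhz']
```

Reading the real file is just `open("/sys/class/drm/card0/device/pp_od_clk_voltage").read()`; writing clocks back through that file needs root and the overdrive bit set in `amdgpu.ppfeaturemask`, so be careful.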


----------



## Oversemper

smokedawg said:


> Got a 6900XT Liquid Devil Ultimate to go with my new build. Out of the box I get 2685-2705 MHz SCLK and 1000 MHz MCLK in Shadow of the Tomb Raider. VDDGFX shows 1192mV and ~336W max. Currently waiting for an EK Shipment - until then I have this card and a 5950x running on one 360 radiator. Temperature Delta was about 16-17°C during SotTR benchmark (water got up to 33, GPU to 49). I cannot read junction temp under Linux I think. I will try to install Windows on a second disk when I get time.


Can you benchmark SotTR at 4K highest quality + ray tracing with the latest drivers and Resizable BAR on?


----------



## weleh

Don't get a XTXH card if you plan to keep it on air.
You're gonna be in for disappointment.


----------



## LtMatt

weleh said:


> Don't get a XTXH card if you plan to keep it on air.
> You're gonna be in for disapointment.


Care to expand?


----------



## weleh

None of the XTXH cards on air will perform better than a reference card, due to thermal issues.

Get the water version of the XTXH bins or watercool the card. You'll be hitting hotspot throttling before you can even go past 2600 Mhz.


----------



## jonRock1992

weleh said:


> Don't get a XTXH card if you plan to keep it on air.
> You're gonna be in for disapointment.


This is true. I got the red devil ultimate. I'm greatly limited by thermal performance. Card can get stable at 2700 MHz to 2750 MHz in games that aren't incredibly demanding. But with high gpu utilization my junction temps are in the 90's and clocks go down to 2600 MHz or so. If I had it under water I'm sure it would be 2800 MHz stable.


----------



## ptt1982

New Adrenaline drivers released: 21.5.1. Will be able to test them tomorrow. Would be great to hear about everyone’s experience with them.


----------



## newls1

trying to figure out if there is any difference between my old 6900xt (Asus TUF) and the Powercolor Liquid Devil (non-ultimate) cards... Is the Powercolor as good as or better than the Asus version? I can say that the Liquid Devil has 3 8-pin power connections vs 2 on the Asus, if that makes any difference. Since I absolutely can't land a 3090, I'm just gonna get another 6900XT card, and my local MC has the Liquid Devil; I was just hoping its quality is as good as or better than my past Asus TUF.... The VRM section looks amazing on the Liquid Devil, 3 8-pins gives me hope, and the stock bios allows 302W vs 272W for the Asus... Obviously I'll be using MPT anyways, but with 3 8-pins I'll feel a little better giving this card 400 watts vs my old Asus card... just looking for your thoughts here... thanks


----------



## weleh

Good stuff for people on Reference cards.
240€ for an AIO+block+backplate looks good.









Alphacool Eiswolf 2 AIO - 360mm Radeon RX 6800/6800XT/6900 Reference Design with Backplate

The Alphacool Eiswolf 2 is Alphacool's first full-cover GPU AIO water cooler. It is based on the Alphacool Eisblock Aurora GPX water cooler, a pump unit, and a 360mm NexXxoS ST30 full-copper radiator, which...

www.alphacool.com


----------



## weleh

Finally got around to changing the fans on the Toxic and sorting stuff in the PC...
NF-A12x25's are so good compared to the stock fans on this card...


----------



## jonRock1992

weleh said:


> Finally got around changing fans on the Toxic and sorting stuff on the PC...
> NF A12x25's are so good compared to the stock fans on this card...
> 
> View attachment 2489496


Nice! I just got 5 of those for my system. They're great.


----------



## newls1

OK, update (for anyone who might care, LOL); just a quick follow-up: I sold my water-blocked Asus TUF 6900 XT two weeks back with hopes of getting a 3090. THAT IS IMPOSSIBLE!! So when Newegg had the PowerColor Liquid Devil Ultimate in stock for five seconds, I grabbed it just now. I think I checked out fast enough to actually get it; Newegg has already charged me and the order says "processing", so maybe I'll actually get it. This card does have the new XTXH-binned GPU and a 1.2 V allowance, soooooooooooo I'm hoping for slightly better Time Spy results! I'll update this if it actually ships. Anyone have any first-hand experience with the XTXH bin, or with this card in particular?

Liquid Devil AMD Radeon™ RX 6900 XT Ultimate 16GB GDDR6 - PowerColor
www.powercolor.com


----------



## ZealotKi11er

For sure you will get a better-binned GPU. Also, 1.2 V will help a bit.


----------



## newls1

Hoping so. I really loved my Asus TUF card, so this will hopefully take it up a notch or two...


----------



## majestynl

ZealotKi11er said:


> Want to see what the out of box clock is.


2564 MHz! Nothing special. It runs smooth; I'm testing it out. I'll probably re-paste, because the temps can definitely be better. PowerColor needs to learn how to apply paste, and to use a better paste, if you ask me!



jonRock1992 said:


> Thank you for this! It worked like a charm with my Red devil ultimate. I put the power limit to 370W. Now I just need to do a repaste so I can actually get decent temps with this thing lol. Is there a way to do a repaste without voiding the warranty? My junction temp was hitting 105C with this power limit. Junction temp is in the 80's and 90's in games.


You're welcome! By the way, I'm now running a beta version of GPU-Z (2.38.3) and this card is now supported; I can save the BIOS. You can ask *W1zzard* at techpowerup.com/forums for a download link if you need one!




smokedawg said:


> Got a 6900XT Liquid Devil Ultimate to go with my new build. Out of the box I get 2685-2705 MHz SCLK and 1000 MHz MCLK in Shadow of the Tomb Raider. VDDGFX shows 1192mV and ~336W max. Currently waiting for an EK Shipment - until then I have this card and a 5950x running on one 360 radiator. Temperature Delta was about 16-17°C during SotTR benchmark (water got up to 33, GPU to 49). I cannot read junction temp under Linux I think. I will try to install Windows on a second disk when I get time.
> Really happy coming from a 2500k and a 290x.
> 
> 
> The output stays the same regardless whether I switch to "OC" or "Unleashed" BIOS.


That's probably not out of the box. That sounds more like an OC. You probably mean you haven't used MPT?!


----------



## weleh

Should perform decently on water.


----------



## jonRock1992

Does anyone know if that 360mm Alphacool AIO will work with the Red Devil? I really wanna watercool this GPU without setting up a custom loop. It's too bad the Kraken G12 hasn't been updated.


----------



## ZealotKi11er

jonRock1992 said:


> Does anyone know if that 360mm alpha cool AIO will work with the red devil? I really wanna water cool this gpu without setting up a custom loop. It's too bad the Kraken G12 hasn't been updated.


Most likely not.


----------



## weleh

Anyone tested 21.5.1? Or whatever the newest drivers are?


----------



## CS9K

weleh said:


> Anyone tested 21.5.1? Or whatever the newest drivers are?


I have. On my office PC's RX 6800, the existing overclock is fine, drivers seem stable, AND I don't appear to need to disable Deep Sleep with MPT anymore. So far, at least.


----------



## newls1

CS9K said:


> I have. On my office PC's RX 6800, the existing overclock is fine, drivers seem stable, AND I don't appear to need to disable Deep Sleep with MPT anymore. So far, at least.


So OCing isn't broken with this driver like it was with 21.4.1? Would you say it's safe to use this driver, or should I stick with 21.3.1?


----------



## smokedawg

majestynl said:


> That's probably not out of the box. That sounds more like an OC. You probably mean you haven't used MPT?!


Just installed the card and started a benchmark. Have not tried it on Windows yet. Only using the amdgpu driver on Linux.
I am currently waiting for some more fittings before I can use my PC again but will try to get a WindowsToGo install going to see if I get different readings.


----------



## ptt1982

weleh said:


> Anyone tested 21.5.1? Or whatever the newest drivers are?


I've done some testing. For my Red Devil (regular) it improved the stable OC by 30 MHz over the previous driver in TS (2650 MHz), and I can now do a 2720 MHz pass in Port Royal. Games previously started to exhibit glitches at 2735 MHz; now they run stable at 2740 MHz, with glitches starting at 2745 MHz. The card insta-crashes at 2790 MHz, whereas it was 2780 MHz before.

There are still bugs (Planetfall, Metro 2033 Redux, and Disco Elysium have graphical glitches), but it seems to be almost the same as the March 29th driver speed-wise, with features added and some improvements to games like Metro Exodus Enhanced and RE Village.

Just my two cents!


----------



## CS9K

newls1 said:


> So OCing isn't broken with this driver like it was with 21.4.1? Would you say it's safe to use this driver, or should I stick with 21.3.1?


My testing hasn't been as thorough as @ptt1982 's, but so far it's been smooth sailing. I'd say you're probably good to upgrade.


----------



## majestynl

smokedawg said:


> Just installed the card and started a benchmark. Have not tried it on Windows yet. Only using the amdgpu driver on Linux.
> I am currently waiting for some more fittings before I can use my PC again but will try to get a WindowsToGo install going to see if I get different readings.


Hmm, that's really high for stock. Dunno if that's because of Linux.


----------



## weleh

ptt1982 said:


> I've done some testing. For my Red Devil (regular) it improved the stable OC by 30 MHz over the previous driver in TS (2650 MHz), and I can now do a 2720 MHz pass in Port Royal. Games previously started to exhibit glitches at 2735 MHz; now they run stable at 2740 MHz, with glitches starting at 2745 MHz. The card insta-crashes at 2790 MHz, whereas it was 2780 MHz before.
> 
> There are still bugs (Planetfall, Metro 2033 Redux, and Disco Elysium have graphical glitches), but it seems to be almost the same as the March 29th driver speed-wise, with features added and some improvements to games like Metro Exodus Enhanced and RE Village.
> 
> Just my two cents!


You have to look at the actual effective clocks. I've noticed that drivers change how the card behaves, and even though a driver sometimes allows higher set clocks, the effective clocks hardly change.


----------



## CS9K

newls1 said:


> so OCing isnt broken with this driver like it was with 21.4.1? would you say its safe to use this driver or stick with 21.3.1


Update: Got 21.5.1 onto my gaming PC with the RX 6900 XT in it. I had to drop core clock from 2750 to 2735, but it's stable and running great otherwise. Scores were either the same or increased from before, so no complaints here. 

Also confirmed, I no longer have stuttering issues, nor issues with the drivers not obeying my minimum-set-core-clock-speed anymore with 21.5.1. Oh happy day \o/


----------



## jimpsar

CS9K said:


> Update: Got 21.5.1 onto my gaming PC with the RX 6900 XT in it. I had to drop core clock from 2750 to 2735, but it's stable and running great otherwise. Scores were either the same or increased from before, so no complaints here.
> 
> Also confirmed, I no longer have stuttering issues, nor issues with the drivers not obeying my minimum-set-core-clock-speed anymore with 21.5.1. Oh happy day \o/


Nice!! 
Care to share radeon settings and mpt if any? Thank you!


----------



## CS9K

jimpsar said:


> Nice!!
> Care to share radeon settings and mpt if any? Thank you!


In MPT, the only change I have now is "Power Limit GPU (W)" set to "365", which gives me a card power limit of 365 W + 15% ≈ 420 W.

2735 MHz is Time Spy and Port Royal stable. Likewise, I tested memory from 2100 MHz through 2150 MHz in Unigine Superposition 4K Optimized, every 10 MHz with 3 runs per step, to make sure that 2150 MHz + Fast Timings was indeed the best performance for my card. YMMV.
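For anyone double-checking the slider math, here is a minimal sketch of the power-limit arithmetic described above (an illustration only, not AMD tooling; the function name is made up):

```python
def card_power_limit(mpt_limit_w: float, slider_pct: float) -> float:
    """Effective card power limit: the MPT "Power Limit GPU (W)" base value
    scaled by the Adrenalin/Wattman power-limit slider percentage."""
    return mpt_limit_w * (1 + slider_pct / 100)

# 365 W set in MPT plus the +15% slider, as described above:
print(round(card_power_limit(365, 15), 2))  # 419.75, i.e. the ~420 W quoted
```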

Here is a screen capture of my global profile in Adrenalin:


----------



## jonRock1992

Just did a Time Spy run with my Red Devil Ultimate. This is at my game-stable clocks, default fan curve, 400 W power target.
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)

Update:
So 21.3.2 is still more stable for me. Here is a run with 21.3.2 with the same settings as the bench above. AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## smokedawg

Finally got my build ready and installed Windows on a second disk. Stock clock for my 6900xt liquid devil ultimate during SotTR benchmark is around 2550-2600 MHz compared to 2685-2700 running under Linux. Not sure if those are different readings or real differences (in Windows I used hwmon64, for Linux I used MangoHud and watched pp_od_clk_voltage)
I did some comparisons using the latest windows driver (21.5.1) at 3440x1440, no AA, highest:
dx11
dx12
linux/vulkan

Also I did some Time Spy runs. At stock:

I scored 19 887 in Time Spy
AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com




And with some OC (+15% PT, Max Clock 2850, Fast Timings, Memory 2030 MHz):

I scored 20 656 in Time Spy
AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





Luckily 3DMark saved my history; my last Time Spy run using my previous build (2500K + dual 290X):

I scored 6 172 in Time Spy
Intel Core i5-2500K Processor, AMD Radeon R9 290X x 2, 16384 MB, 64-bit Windows 10
www.3dmark.com


----------



## newls1

My Liquid Devil Ultimate comes in on Friday and I can't wait; hoping for better clocks than my Asus TUF 6900 XT. Thanks for your update. By any chance, did you ever run the 21.3.1 driver on your Ultimate to compare OC results with 21.5.1? Before I sold my Asus TUF 6900 XT, I had tried the 21.4.1 driver, and it completely gimped my OCs, so I switched back to 21.3.1 and got all my OCs back to normal. Something is very wrong with that driver, and I was hoping 21.5.1 doesn't have the same issue.


----------



## chispy

And... we got a couple of new world records today with the PowerColor Red Devil Ultimate 6900 XT XTXH: a clock speed of 3306 MHz ~ 3274 MHz (the highest benchmark-stable core clock in the world for a GPU) and the highest-ever 3DMark Fire Strike Extreme score. It was done by none other than the good guys at OGS from Greece; congratulations on such a huge achievement. Absolutely massive score 👍
OGS`s 3DMark - Fire Strike Extreme score: 35869 marks with a Radeon RX 6900 XT


----------



## CS9K

newls1 said:


> my liquid devil ultimate comes in on friday and cant wait. hoping for better clocks then my asus tuf 6900xt. thanks for your update. By any chance, did you ever run 21.3.1 driver on your ultimate to compare the OC results with 21.5.1? Before I sold my asus tuf 6900xt, I had tried the 21.4.1 driver, and that completely gimped my OCs severely, so i switched back to 21.3.1 and got all my OCs back to normal. Something very wrong with that driver and was hoping 21.5.1 doesnt have the same issue


I would say that while I had to back off core clock a _little_ for 21.5.1, the improvements made to other things in the driver netted me the same or better scores in all benchmarks that I tried, not to mention not having the deep-sleep stutters anymore! For my use case, 21.5.1 might not have the best core clock numbers to show off, but bench scores are as good and I'm happy to have one less thing to fiddle with/worry about now.


----------



## newls1

CS9K said:


> I would say that while I had to back off core clock a _little_ for 21.5.1, the improvements made to other things in the driver netted me the same or better scores in all benchmarks that I tried, not to mention not having the deep-sleep stutters anymore! For my use case, 21.5.1 might not have the best core clock numbers to show off, but bench scores are as good and I'm happy to have one less thing to fiddle with/worry about now.


fair enough, thanks for your input


----------



## chispy

My new addition to the VGA family arrived today. I was finally able to snag a new Red Devil Ultimate XTXH GPU; it arrived a minute ago. I'm happy!


----------



## jonRock1992

chispy said:


> Today arrived my new addition to the vga family , finally i was able to snag one Red Devil Ultimate xtx-h new gpu , arrived a minute ago  , i'm happy !
> 
> View attachment 2490211


Sweet! Let us know how it performs. I'd like to compare it to my red devil ultimate. I feel like I got a lower binned one.


----------



## chispy

Anyone know how to flash these new cards? I would like to flash the water-cooled Red Devil Ultimate BIOS onto this air-cooled Red Devil Ultimate. Any guidance or help would be greatly appreciated.


----------



## chispy

jonRock1992 said:


> Sweet! Let us know how it performs. I'd like to compare it to my red devil ultimate. I feel like I got a lower binned one.


Will do ! thanks


----------



## smokedawg

Raising the power limit with MPT netted a nice boost:

I scored 30 712 in Fire Strike Extreme
AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## newls1

smokedawg said:


> Raising the power limit with MPT netted a nice boost:
> 
> I scored 30 712 in Fire Strike Extreme
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


What settings did you change in MPT for your Ultimate card? Could you please share a pic of your MPT settings? My Liquid Devil Ultimate arrives Friday, and I was thinking about letting her rip at around 450 W (since it has 3 PCIe connectors) and comparing it to my TUF 6900 XT, which I limited to 385 W.


----------



## L!ME

chispy said:


> Anyone know how to flash these new cards? I would like to flash the water-cooled Red Devil Ultimate BIOS onto this air-cooled Red Devil Ultimate. Any guidance or help would be greatly appreciated.


You can use amdvbflash for Linux, or a CH341A programmer, but you can achieve the same things with a modified PowerTable. My advice: use MorePowerTool; it's really easy to modify your PowerTable with it.


----------



## smokedawg

newls1 said:


> what settings did you change in MPT for your ultimate card? Could you please maybe share a pic of your MPT settings..


I just changed the Power Limit GPU to 375 and the TDC Limit GFX to 395. Together with the +15% slider, this resulted in a maximum power draw of ~430 W in GPU-Z.







Hotspot Temp went up to ~100ºC with these settings.


----------



## newls1

smokedawg said:


> I just changed the Power Limit GPU to 375 and the TDC Limit GFX to 395. Together with the +15% this resulted in a maximum power draw of ~430W in GPU-Z.
> View attachment 2490310
> 
> Hotspot Temp went up to ~100ºC with these settings.


100C WITH A WATERBLOCK?! This is a waterblocked gpu right?


----------



## smokedawg

Yes, with a waterblock. But seems like I misremembered. Just did another run to double check:


----------



## ZealotKi11er

That looks about right. 425 W is a lot of power. That is about 500-525 W actual for the entire card.
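The 500-525 W figure is the GPU-only reading plus board overhead (VRM losses, memory, fans). A rough sketch; the ~18-24% overhead range is an assumption chosen to reproduce the forum estimate, not a measured value:

```python
def board_power_range(gpu_power_w: float, overhead=(0.18, 0.24)):
    """Estimate total board power from the GPU (ASIC) power reading.
    The overhead fractions are assumptions, not measurements."""
    lo, hi = overhead
    return gpu_power_w * (1 + lo), gpu_power_w * (1 + hi)

lo, hi = board_power_range(425)
print(round(lo, 1), round(hi, 1))  # 501.5 527.0 — roughly the 500-525 W ballpark
```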


----------



## smokedawg

Thank you for confirming. Still reading up on this thread and did not know what temperatures to expect. I was a little worried as I just got done with a new loop with too many 90° fittings.


----------



## newls1

ZealotKi11er said:


> That looks about right. 425w is a lot of power. That is about 500-525W actual for entire GPU.


I don't know about that, TBH... I had 400 W going through my Asus TUF 6900 with an Alphacool block, and during Time Spy benchies my core never got hotter than 43/44°C and hotspot never more than 54/55°C with a 22/23°C ambient temp. Your temps seem rather high and would make me consider remounting the block. I obviously have no idea what you are using for a watercooling loop, but mine was a 420mm rad in push/pull, D5 @ 100%, GPU-only loop.


----------



## CS9K

Y'all don't have to guess which value is the GPU core power draw and which is the total card power draw; both values appear in HWiNFO64.


----------



## smokedawg

I left "The Witcher 3" running for 20 minutes to have numbers to compare to this igorsLAB review of the regular Liquid Devil. I ran the "Unleashed" BIOS, though without an MPT edit, and ended up with:
60 edge
86 junction
54 mem
Ambient was ~26°C
Water 35°C at the end.

Compared to Igor's numbers of
54 edge
74 junction
52 mem
Ambient 22°

my loop (1x 360 PE 40mm pull, 1x 360 XE 60mm push/pull) performs worse, but does not appear to be that far off. My case (Lian Li O11 XL, filters removed) was closed during this, with fans (Corsair ML120 Pro) at around 800 rpm.
Maybe I'll take the block off as you suggested, @newls1, during the next loop maintenance and/or when the warranty expires, and check whether contact & paste are OK.
Edit: Also I have the 5950x in the loop as well.
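Since the two setups ran at different ambients, one rough way to compare them is delta over ambient rather than absolute readings (a simple sketch; it ignores water temperature and load differences):

```python
# Temperatures from the post above (°C); ambients differ, so compare
# delta over ambient rather than absolute readings.
mine  = {"edge": 60, "junction": 86, "mem": 54}
igors = {"edge": 54, "junction": 74, "mem": 52}

mine_delta  = {k: v - 26 for k, v in mine.items()}   # my ambient ~26 °C
igors_delta = {k: v - 22 for k, v in igors.items()}  # Igor's ambient 22 °C

print(mine_delta["junction"], igors_delta["junction"])  # 60 52
```

On that basis the junction delta is 60 °C vs. Igor's 52 °C: worse, but in the same ballpark, as the post concludes.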


----------



## HeLeX63

smokedawg said:


> Thank you for confirming. Still reading up on this thread and did not know what temperatures to expect. I was a little worried as I just got done with a new loop with too many 90° fittings.


How do you get nearly 2200 MHz on the memory? My normal Red Devil maxes out at 2150 MHz???


----------



## smokedawg

I am not sure where that came from. I set it to 2150.


----------



## Blameless

HeLeX63 said:


> How do you get nearly 2200MHz on the memory. My normal red devil maxes out at 2150MHz ???


Some of the XTX cards have much higher memory limits.


----------



## newls1

only XTX-H cards maybe??


----------



## Nighthog

There is a driver bug that causes erratic memory speed reports. I've seen all kind of values for memory clock on my benchmarks.


----------



## Henry Owens

alexp247365 said:


> Can anyone recommend a good paste/pads for this card. I've got my Red Devil LE under water and for the most part it is fine temp wise. However, for The Division two game, it can hit a constant 345w, which has brought the temps up to 70c/100c - which seems way too high for a watercooled card.
> 
> 2 360 ek radiators for the 6900xt and a non overclocked 5900x.


I went all out with mine and put Thermal Grizzly thermal pads and Kryonaut Extreme paste (which is pink) on.


----------



## Henry Owens

Interesting: my card allows a higher stable OC setting (2700) at stock with the max power limit than when using MPT. With MPT at 350 W it will only allow around 2650. It does score higher in Time Spy with MPT, though.


----------



## ZealotKi11er

Henry Owens said:


> Interesting my card allows a higher stable oc setting (2700) while at stock and max power limit compared to using mpt. With mpt using 350w it will only allow around 2650. It does score higher timespy with mpt though.


That is because it never hits 2700MHz at stock power limit.


----------



## ptt1982

Henry Owens said:


> Interesting my card allows a higher stable oc setting (2700) while at stock and max power limit compared to using mpt. With mpt using 350w it will only allow around 2650. It does score higher timespy with mpt though.


It might be OK with a higher setting at stock, but the setting itself doesn't guarantee the card is actually running higher clocks. GPUs crash when the clocks get too high for the silicon quality, not based on what settings you have in the OC software menu.

In other words, perhaps the reason you are only stable at 2650 MHz with MPT is that your actual clocks are higher than when you are not using MPT. MPT essentially lifts the clocks higher because the card has more power to do so. If you use MPT to raise the power limits and set 2700 MHz, the clocks go too high for the silicon, whereas with stock power limits applied, there may not be enough power drawn into the card to push the clocks high enough to crash it.

This might not be the only reason though, as some 6900 XT cards seem to be sensitive to PSU and motherboard power delivery (not much scientific proof of that yet, though).


----------



## Henry Owens

ptt1982 said:


> It might be ok with a higher setting on stock, but the setting itself doesn’t guarantee the card is running the actual clocks higher. GPUs crash when the clocks get too high for the silicon quality, not based on what the settings you have on the OC software menu.
> 
> In other words, perhaps the reason you are stable with MPT only at 2650mhz is because your clocks are higher than when you are using MPT. So the MPT essentially lifts the clocks higher due to the card having more power to do so. If you use MPT and up the power limits and set 2700mhz, the clocks go too high for the silicon. It could be so that with stock power limits applied, there’s not enough power drawn into the card to get the clocks high enough to crash the card.
> 
> This might not be the only reason though, as some 6900xt cards seem to be sensitive to PSUs and motherboards power delivery systems (not much scientific proof of that yet though.)


I feel it might be wanting more power; I have a Corsair RMx 850 W. Also, I'm using 18-gauge PCIe extension cables; not sure if those are too weak.
Edit: also, are 350 W and 370 A the highest I should go with MPT?


----------



## chispy

MickeyPadge said:


> Did you get the block? Be interested to see some results temp wise. I've upgraded to a Ryzen 5800X now too, the xeon 1680v2 is my backup


Sorry for the late reply, amigo. Here are the temperatures with everything at stock in Radeon settings (no overclocking, nada, bone-stock settings) during a stress test. The Corsair water block is pretty good on this non-reference ASRock Phantom Gaming OC RX 6900 XT; I'm impressed. During 3-4 hours of gaming in Forza Horizon 4 and CP2077, temps average 42°C and the max I have seen is 49°C, with a big custom water loop (1x slim 360 rad + 1x fat-boy 280 rad + 1x 140 rad, dual D5 pumps) at 22-25°C room temps. My card loves it.


----------



## kak4rot

Hi all,

This is the ****ty max clock I get on my watercooled reference AMD RX 6900 XT when I press manual settings in the performance tab in Wattman.

It looks like the max stable OC I can reach is 2539 MHz with the +15% power limit and 1175 mV. Increasing the power limit with MPT doesn't change my max clock at all; if I increase the max clock, it becomes unstable in Time Spy.

Can you say I got the most ****ty reference card ever? What a pity that the waterblock ended up on this card 

The other thing I have noticed is random micro stutter, accompanied by a small drop in fps, in some situations in Battlefield V and Quake Champions...


----------



## newls1

kak4rot said:


> Hi all,
> 
> This is the ****ty max clock I get on my watercooled reference AMD RX 6900 XT when I press manual settings in the performance tab in Wattman.
> 
> It looks like the max stable OC I can reach is 2539 MHz with the +15% power limit and 1175 mV. Increasing the power limit with MPT doesn't change my max clock at all; if I increase the max clock, it becomes unstable in Time Spy.
> 
> Can you say I got the most ****ty reference card ever? What a pity that the waterblock ended up on this card
> 
> The other thing I have noticed is random micro stutter, accompanied by a small drop in fps, in some situations in Battlefield V and Quake Champions...


Just out of curiosity, are you feeding that card two dedicated 8-pins, or a single 8-pin leg with two connections? Also, post up a pic of your MPT settings; maybe we can help.


----------



## kak4rot

newls1 said:


> just out of curiosity, are you feeding that card 2 dedicated 8pins or a single 8pin leg with 2 connections? also, post up a pic of your mpt settings, maybe we can help


Hi again,

I'm feeding the card with two dedicated 8-pin PCIe cables. This is one of the MPT settings I have tried, but I couldn't achieve a more stable OC:


----------



## marcoschaap

Hi guys, 

I recently completed my first custom loop with a Bykski GPU block for my Sapphire Nitro+ 6900 XT, and I wanted to share some pics and experiences about temps, OC, and benchmark scores (and the difference I saw between the latest 21.5.1 driver and its predecessor). First, some info about clocks and temps.

Settings in Radeon software:

Min target clock: 2500Mhz
Max target clock: 2645Mhz
Target voltage: 1135mV

Mem timings: Fast
Mem speed: 2150Mhz

MPT Power Limit @375w

HWI statistics after 6 Time Spy runs:
Link to one of these runs: TimeSpy 21281 Graphics Score with driver 21.5.1, and one with an older driver: TimeSpy 21564 Graphics Score. The curious thing is that the score with the older driver is higher, but the average clock is lower. There is a difference of about 60 MHz max GPU clock between the old and new drivers, and the old OC settings only yield instability now.

Some pictures of the GPU and the block:
In the third picture (backside), no thermal pads were originally needed per Bykski's instructions, but after reading some information I wanted to experiment with using the backplate for some heat dissipation. The only thing I doubted was: would it be helpful to also cover the back of the actual GPU core with a thermal pad, or would it soak heat away from the top block and have the opposite effect?


----------



## newls1

I just installed my Liquid Devil Ultimate card and driver 21.5.1, and I still have the same results as with any driver after 21.3.1 (so 21.4.1 and 21.5.1): my OCs are crap. This XTXH core should be capable of Time Spy runs at 2750+ MHz, but I'm not getting them, and this is at 1.2 V... Going to install 21.3.1 and see if I can get my scores better.


----------



## Spawnyspawn

So, I've been trying to get a bit more out my reference card with mixed results. At first I used Wattman only and got some nice results. Wattman recommended OC for core clock is 2574Mhz, mem clock 2150. Worked fine with fast timings and +15% power on the slider. Didn't touch the voltage slider. Had to bump up the fan profile a bit, but it worked.

Then I tried MPT. Set the Wattman settings back to stock and did the following:

Increase the power limit in MPT incrementally until I hit instability during the TS benchmark
Increase the core clock from there, again until instability or other issues
Increase the mem clock in small increments until instability

What I found:

AMD drivers crash as soon as I allow my card to pull more than 350W, whether by setting a power limit below 350W in MPT plus a power limit % increase in Wattman so that the combined limit exceeds 350W, or by setting a power limit above 350W in MPT with the Wattman slider at 0%
Stable mem clock capped at 2100 -> 50 less than when I was using only Wattman for OC
Core clock in Wattman could be increased to a stable 2700MHz
Decreasing the max core clock to 2650MHz allows a stable 2130MHz memory clock, but makes 0% difference in TS results
GPU-Z temperature logs show a max of about 70-74 degrees on the die and junction temps of max 92 degrees during all runs I've done
Trying any kind of undervolt with MPT doesn't work; it causes crashes as soon as anything below the 1175 default is set
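The crash behavior described above comes down to the combined limit (the MPT base value scaled by the Wattman slider) crossing ~350 W. A minimal sketch, where the 350 W threshold is this particular card's observed behavior rather than a general rule, and the function name is made up:

```python
def effective_power_limit(mpt_watts: float, wattman_pct: float) -> float:
    """Board power the driver will allow: the MPT 'Power Limit GPU (W)'
    base value scaled by the Wattman power-limit slider percentage."""
    return mpt_watts * (1 + wattman_pct / 100)

# Observed instability threshold on this specific reference sample
# (an assumption taken from the post above, not a spec value):
CRASH_THRESHOLD_W = 350

# The "After" settings from the post: 320 W in MPT with +9% in Wattman.
limit = effective_power_limit(320, 9)
print(round(limit, 1), limit < CRASH_THRESHOLD_W)  # 348.8 True
```

Which is why the 320 W / +9% combination stays just under the point where the drivers were seen to crash.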

Before (stock everything):

I scored 17 201 in Time Spy
AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





After (PL 320W set with MPT, TDC limit 380, 2700MHz core, 2100MHz memory, normal timings, +9% power limit in Wattman):

I scored 18 674 in Time Spy
AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





All in all, an increase of approximately 9% (17 201 -> 18 674).
Some questions I still have:

Is there anything I can still do to get just a bit more performance from the card?
Did the latest driver release change anything about the max power of the reference design? I am fairly sure I was able to let the card use more than 350W before having to reinstall Windows a few days ago
Would the card's performance benefit in any way from a waterblock? Since temperatures are well within limits, there doesn't seem to be a bottleneck there, but it might be throttling. No clue.

Any help or information would be much appreciated!


----------



## Spawnyspawn

newls1 said:


> I just installed my Liquid Devil Ultimate card and driver 21.5.1 and still have same results with any driver after 21.3.1 (so 21.4.1 and 21.5.1) that my OC's are crap with anything past 21.3.1. This XTX-H core should be capable of timespy runs @ 2750+ but im not getting them and this is at 1.2v.... Going to install 21.3.1 and see if I can get my scores better


What I found over the past few days from logs I collected during my OC attempts is that my card's voltage was wildly inconsistent during Time Spy runs. Only when I maxed out the card's power limit did I get a stable 1175 mV during the runs. So... maybe it needs more power?


----------



## ZealotKi11er

newls1 said:


> I just installed my Liquid Devil Ultimate card and driver 21.5.1 and still have same results with any driver after 21.3.1 (so 21.4.1 and 21.5.1) that my OC's are crap with anything past 21.3.1. This XTX-H core should be capable of timespy runs @ 2750+ but im not getting them and this is at 1.2v.... Going to install 21.3.1 and see if I can get my scores better


I don't think 2750 MHz+ in Time Spy is possible just because it's an XTXH.


----------



## newls1

Spawnyspawn said:


> What I found the past few days from logs I collected during my OC attempts, is that my card's voltage was wildly inconsistent during TimeSpy runs. Only when I maxed out the power limit of the card I got stable voltages of 1175 during the runs. So... maybe it needs more power?


Yeah man, exact same thing here... I actually nearly completed a TS run @ 2815 MHz-ish using 1.175 V, down from the 1.2 V set in Wattman. If I kept 1.2 set, it insta-crashed in the very first second; using 1.175 (just like a normal 6900 XT) I can ALMOST complete a run... it's not until I drop core speed down to 2725 (using the 21.5.1 driver) that I can complete multi-pass runs. Anything over that and NOPE! Not seeing the advantage of this XTXH core... really no different than my other 6900 XT. Maybe with LN2 there'd be a difference, but for us normal joes... NOPE.


----------



## newls1

ZealotKi11er said:


> I dont think 2750MHz+ is possible just because its XTX-H in TimeSpy.


Im seeing that now.


----------



## ZealotKi11er

newls1 said:


> Im seeing that now.


Bro, you are not alone. Der8auer was saying in his YouTube video how 2800 MHz (his card did 2880 MHz) was nothing special for a 6900 XT. The man was given a cherry-picked XTXH from PowerColor.


----------



## newls1

A lot of these issues, though, come down to weird shenanigans going on in the driver behind the scenes. I can set 2750 with driver 21.3.1, then with 21.4.1 I can't set anything past 2640, and with 21.5.1 I can't go past 2670 (so in reality both 21.4.1 and 21.5.1 net the same speeds in TS). I wish I understood AMD drivers... I VERY MUCH HOPE the new driver that introduces FSR will bring my OCs back.


----------



## MickeyPadge

chispy said:


> Sorry for the late reply, amigo. Here are the temperatures with everything at stock in Radeon settings (no overclocking, nada, bone-stock settings) during a stress test. The Corsair water block is pretty good on this non-reference ASRock Phantom Gaming OC RX 6900 XT; I'm impressed. During gaming for 3~4 hours in Forza Horizon 4 and CP2077, temps average 42C / the max I have seen is 49C, with a big custom water loop (1x slim 360 rad + 1x fat 280 rad + 1x 140 rad, dual D5 pumps) at 22~25C room temps. My card loves it.
> 
> View attachment 2490567


Thanks for the info, great job! When pushing the max on my card, I can hit 400W on Time Spy Extreme. Actual core speeds at around 2600-2630, mem at 2130. The temps during those runs (peak) seem to be 60/80c core/junction, give or take a couple of degrees, mostly 55-75 ish. That's with very low 700 rpm fans on a 280 and external 560 rad.

Gaming wise temps are generally between 50-55c, and junction sometimes hits 60c....

Seems the corsair reference block (with a tweak or two) really can fit the non reference Asrock cards without a problem at all!

Cheers!

I scored 9 275 in Time Spy Extreme
AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## MickeyPadge

Top 50 HAF score

I scored 44 604 in Fire Strike
AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## ZealotKi11er

MickeyPadge said:


> Top 50 HAF score
> 
> I scored 44 604 in Fire Strike
> AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Anything special to keep the card busy during the first test? My GPU usage keeps dropping.


----------



## rodac

Hello, community of AMD RX 6900 XT owners. Like @chispy, I purchased the PowerColor Ultimate RX 6900 XT, a beautiful card. I got it on Sunday, May 16th.
Although I used DDU to remove the NVIDIA drivers (1080 Ti), the Radeon app reports that the card does not meet minimum requirements in the games performance tab.
Likewise, the Oculus app I use for VR with the Link cable reports that my system does not meet the minimum requirements.
Benchmarking with 3DMark, I get roughly the numbers published in the online reviews.
Can anyone help? Is this a known issue, or am I the only one having this problem? The performance in VR is clearly a little worse. Given that I paid a lot for this card, is this something that warrants a return? Thanks in advance for any comments.
It would be useful to know if @chispy, who has the same card, potentially came across a similar issue.


----------



## LtMatt

ZealotKi11er said:


> Anything special to keep the card busy during the first test? My GPU usage keeps dropping.


If you are using a 12 core or 16 core Ryzen 5000 series, disable half the cores via BIOS and you will get 99% locked GPU utilisation on the first test. Only happens in standard FS. I scored 71300 graphics score doing this.


----------



## chispy

rodac said:


> Hello, community of AMD RX 6900 XT owners. Like @chispy, I purchased the PowerColor Ultimate RX 6900 XT, a beautiful card. I got it on Sunday, May 16th.
> Although I used DDU to remove the NVIDIA drivers (1080 Ti), the Radeon app reports that the card does not meet minimum requirements in the games performance tab.
> Likewise, the Oculus app I use for VR with the Link cable reports that my system does not meet the minimum requirements.
> Benchmarking with 3DMark, I get roughly the numbers published in reviews of this card.
> Can anyone help? Is this a known issue, or am I the only one having this problem? The performance in VR is clearly worse. Given that I paid a lot for this card, is this something that warrants a return? Thanks in advance for any comments. It would be useful to know if @chispy, who has the same card, noted the same issue. I have not had a chance to look into overclocking it yet.



Hello there. I have not experienced those problems in the Radeon app, and I do not use or own a VR headset. My advice would be to do a clean install of a fresh Windows 10, update it completely, and install the latest AMD drivers. Hopefully that will solve your issues. I hope this helps.

Kind regards: chispy


----------



## rodac

chispy said:


> Hello there. I have not experienced those problems in the Radeon app, and I do not use or own a VR headset. My advice would be to do a clean install of a fresh Windows 10, update it completely, and install the latest AMD drivers. Hopefully that will solve your issues. I hope this helps.
> 
> Kind regards: chispy


Thanks @chispy for your speedy response. I was of course very curious about your experience, since you have the exact same card.
Yes, it looks like a driver issue. I did try the latest drivers, but the issue is persisting.
I have read that AMD is not as good with VR as Nvidia, but that applied more to previous generations of cards. I should at least get the same level of performance as with my 1080 Ti, and I am getting roughly 10 frames per second less.


----------



## marcoschaap

rodac said:


> Thanks @chispy for your speedy response. I was of course very curious about your experience, since you have the exact same card.
> Yes, it looks like a driver issue. I did try the latest drivers, but the issue is persisting.
> I have read that AMD is not as good with VR as Nvidia, but that applied more to previous generations of cards. I should at least get the same level of performance as with my 1080 Ti, and I am getting roughly 10 frames per second less.


That is very weird; I have exactly the opposite experience. My 6900 XT is performing even better and more stably than the 3080 I owned for a month. Compared to my older card (2080 Ti) it performs miles better. Does your VR application (like SteamVR or Oculus) automatically scale the HMD resolution or PPD settings?


----------



## MickeyPadge

ZealotKi11er said:


> Anything special to keep the card busy during the first test? My GPU usage keeps dropping.


Not that I remember. Was an older driver though in that run. New drivers seem to drop performance and oc limits.


----------



## rodac

@marcoschaap 
That is good to hear you did not experience the same issue, and it gives me hope that this card can perform the way it should. I use both SteamVR and Oculus VR. There is clearly something wrong, since Oculus reports that my environment does not meet the minimum requirements. The resolution appears OK but the frame rates can lag a bit. I do not know what to check to see if this scales up, but there is definitely an issue specific to VR, since the 3DMark benchmark runs as it should. Probably a driver issue.


----------



## newls1

MickeyPadge said:


> Not that I remember. Was an older driver though in that run. New drivers seem to drop performance and oc limits.


This! And it's pissing me off. I'm giving up on benchmark runs... with every new driver my OC gets lower and lower and my score drops with it. However, I can game much faster with the Ultimate card (playing Doom Eternal @ 2.8GHz with ease) and all my other games play just perfectly, so it is what it is, I guess. TS was fun, but I'm completely aggravated that the OCs keep getting worse, and I'm sure my PSU is thanking me for finally stopping! I know I'll never see FC6 come to light, but now all I can hope for is a game I'll enjoy that is FC5-"like". Been waiting for years, and still nothing... so glad I spent $3000 on this GPU for a game LOL


----------



## Spawnyspawn

newls1 said:


> this! and its pissing me off


Exactly!
My card now seems to be locked at 1175mV (lower = crash) and 350W. The settings I've been using daily for the past week or so suddenly make the drivers crash. Even at 350W I can't get the card stable at 2600 core and 2100 mem. Any more than 350W and it crashes again.


----------



## jonRock1992

I feel your pain. The max clock I can set in RE8 with ray tracing is 2740MHz with my Red Devil Ultimate. The clock hovers somewhere between 2650 and 2700 with the max set at 2740MHz. I do believe that I'm being held back by thermals, because my junction temp is at around 100C with a 400W power limit. I really want to repaste, but I don't wanna void the warranty yet.


----------



## marcoschaap

jonRock1992 said:


> I feel your pain. The max clock I can set in RE8 with ray tracing is 2740MHz with my Red Devil Ultimate. The clock hovers somewhere between 2650 and 2700 with the max set at 2740MHz. I do believe that I'm being held back by thermals, because my junction temp is at around 100C with a 400W power limit. I really want to repaste, but I don't wanna void the warranty yet.


Not that I want to push you in any direction, but using a hairdryer and tweezers I managed to salvage my warranty stickers; see the picture below. Of course there are no guarantees yours will not break, but if I can do it, anyone can.


----------



## newls1

jonRock1992 said:


> I feel your pain. The max clock I can set in RE8 with ray tracing is 2740MHz with my Red Devil Ultimate. The clock hovers somewhere between 2650 and 2700 with the max set at 2740MHz. I do believe that I'm being held back by thermals, because my junction temp is at around 100C with a 400W power limit. I really want to repaste, but I don't wanna void the warranty yet.


Those "warranty void" stickers are illegal here in the USA... you can rip that bastard off and repaste to your heart's desire.









We’re Afraid of Warranty Stickers, but Really, Manufacturers Should Be | iFixit News

According to a survey published today by consumer group U.S. PIRG, 45 out of 50 appliance manufacturers automatically void the warranty of a device if it has…

www.ifixit.com


----------



## jonRock1992

Is this legit, though? They literally provided me with a GPU that has an insufficient heat sink for the chip it's trying to cool. This GPU is supposedly highly binned, but the heat sink is such crap that the high bin doesn't even matter. The only things that would save it are a repaste with actual high-quality TIM, or putting a water block on it. And apparently both of those options would void their warranty.


----------



## newls1

Can't see how that would void your warranty... Sure, if something were to happen they more than likely will throw their company tactics at you, but by law you are covered.


----------



## newls1

Another thing I noticed with this Liquid Devil "Ultimate" card... there have to be some driver-implemented memory straps in place. The normal OC of 2150 with "Fast" timings is perfectly fine, and performance is still climbing across various benchmarks, so that tells me memory error correction is NOT kicking in. So I took memory to 2200 (via the slider in Wattman), put it on "Default" timings, and lost 1100 points in TS! Thinking error correction had kicked in, I tried "Fast" timings just for giggles and lost ~900 points... so fast timings make a small difference. Tried 2300 on "Default" timings and lost ~500 points... so clearly there is a strap/latency change somewhere out of our control.
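One way to pin down where that hidden strap change kicks in is to sweep the memory clock and flag any step where a higher clock produced a lower score. A toy sketch (the numbers are made up, only loosely mirroring the losses described above):

```python
def find_regressions(results):
    """Given (mem_clock_MHz, benchmark_score) pairs, return the clock
    steps where the score went DOWN despite a higher clock -- the
    classic sign of a timing-strap change or error correction eating
    the gain."""
    ordered = sorted(results)
    return [
        (lo_clk, hi_clk)
        for (lo_clk, lo_s), (hi_clk, hi_s) in zip(ordered, ordered[1:])
        if hi_s < lo_s
    ]

# Made-up scores loosely mirroring the sweep described above.
sweep = [(2100, 18100), (2150, 18250), (2200, 17150), (2300, 17750)]
print(find_regressions(sweep))  # → [(2150, 2200)]: the cliff is there
```

Re-running each point a couple of times before trusting a "cliff" is worthwhile, since run-to-run variance on these cards is big enough to fake one.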


----------



## jfrob75

LtMatt said:


> If you are using a 12 core or 16 core Ryzen 5000 series, disable half the cores via BIOS and you will get 99% locked GPU utilisation on the first test. Only happens in standard FS. I scored 71300 graphics score doing this.


So, I did this and got the expected results, i.e. GPU utilization went from 70-80% to 96-97%. Any idea why Fire Strike behaves like this?


----------



## LtMatt

jfrob75 said:


> So, I did this and got the expected results, i.e. gpu utilization went from 70's-80's% to 96-97%. Any idea why FireStrike behaves like this?


I suspect it's something in the benchmark, or perhaps the driver, that needs an update to address; it does not seem to affect Nvidia, though.

That said, we can spank them handily in Extreme and Ultra, so it's not all bad.


----------



## CS9K

newls1 said:


> Another thing I noticed with this Liquid Devil "Ultimate" card... there have to be some driver-implemented memory straps in place. The normal OC of 2150 with "Fast" timings is perfectly fine, and performance is still climbing across various benchmarks, so that tells me memory error correction is NOT kicking in. So I took memory to 2200 (via the slider in Wattman), put it on "Default" timings, and lost 1100 points in TS! Thinking error correction had kicked in, I tried "Fast" timings just for giggles and lost ~900 points... so fast timings make a small difference. Tried 2300 on "Default" timings and lost ~500 points... so clearly there is a strap/latency change somewhere out of our control.


If the memory in RDNA2 works like it does in RDNA1, it sounds like the drivers may just be downclocking you back to 2050 or 2100MHz, since the "Default" and "Fast" timings still make a difference. The memory timing straps aren't that complicated in RDNA1, and I had a chance to fiddle with them on my RX 5600 XT. Unfortunately there wasn't a lot of memory overclocking headroom on that card, and there were a few timings I couldn't do much with due to heat. But given how much headroom some RDNA2 cards seem to have (my RX 6900 XT sees performance climb from 2100MHz right up to 2150MHz w/fast timings), I'm eager to get BIOS control and be able to fiddle around with memory speeds and timings again.

If you want to melt your brain a little, go check out the JEDEC GDDR6 specification publication. Like the DDR4 specification publication, there's a lot of neat things in there to be aware of, and a striking number of similarities with tuning and overclocking both memory types. Likewise, there's a few differences, and a few relationships in GDDR6 that don't follow DDR4, but I digress. GRAPHICS DOUBLE DATA RATE 6 (GDDR6) SGRAM STANDARD | JEDEC


----------



## ZealotKi11er

Was able to get a 2800MHz TimeSpy run with the 21.3.1 driver.


----------



## newls1

ZealotKi11er said:


> Was able to get a 2800MHz TimeSpy run with the 21.3.1 driver.


Yeah, I believe I said that a page or two back. What card do you have? I don't think I've ever asked. Also, what voltage were you using, and what MPT settings? Sorry to ask so many questions LOL. Next time I get bored enough to mess with the drivers again, I'd like to try 21.5.1 using the "driver only" option, install the latest MSI Afterburner, and see if that makes anything better.


----------



## ZealotKi11er

newls1 said:


> yeah, I believe i said that back a page or 2 ago. What card do you have, dont think ive ever asked that. Also what voltage were you using and MPT settings? Sorry to ask so many questions LOL. Next time i get bored enough to mess with the drivers again, i'd like to try 21.5.1 doing the "driver only" option, and install the latest MSI Afterburner app and see if that makes anything better


The first time I tried 21.4.1 I was not impressed with the clock speed in TS. Was getting like 2725. It's an XTXH at 1.2V. I set 400W/420A.


----------



## newls1

ZealotKi11er said:


> The first time I tried 21.4.1 I was not impressed with the clock speed in TS. Was getting like 2725. It's an XTXH at 1.2V. I set 400W/420A.


21.4 WAS THE WORST!! Thanks for the reply sir, much appreciated. It looks like we have the same exact card (I have the Liquid Devil Ultimate XTX-H), so I'll try your MPT settings and see if I net any stability. Just to confirm, though: are you at 400W + 15% for a total of 460W, or 400W total AFTER the 15% is added in?
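For what it's worth, the way numbers elsewhere in the thread line up (350W in MPT + 15% in Wattman giving ~400W) suggests the Wattman slider is a percentage applied on top of the MPT limit. That interpretation is an assumption worth checking against a wattage log, but the arithmetic of it is just:

```python
def effective_power_limit(mpt_limit_w, wattman_offset_pct):
    """Effective board power target, ASSUMING the Wattman power slider
    is a percentage offset multiplied onto the MPT power limit."""
    return round(mpt_limit_w * (1 + wattman_offset_pct / 100), 1)

print(effective_power_limit(400, 15))  # 400 W + 15% → 460.0
print(effective_power_limit(350, 15))  # ~400 W, as reported in-thread
print(effective_power_limit(400, 0))   # slider untouched → 400.0
```

So "400W + 15%" would land at 460W total, while "400W after the 15%" would mean an MPT value of roughly 348W.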


----------



## ZealotKi11er

newls1 said:


> 21.4 WAS THE WORST!! Thanks for the reply sir, much appreciated. It looks like we have the same exact card (I have the Liquid Devil Ultimate XTX-H), so I'll try your MPT settings and see if I net any stability. Just to confirm, though: are you at 400W + 15% for a total of 460W, or 400W total AFTER the 15% is added in?



I did not add extra power on top. Mostly I was trying to see what the max clock is for a TimeSpy run. I failed 2850 in GT2. It could probably pass with colder temps.


----------



## newls1

Driver 21.5.2 is out... can someone please check whether the wacky OC issue still exists with this driver, PLEASE. I would do it right now, but I just started a shift (24hrs) so I won't be near my PC until tomorrow.


----------



## Spawnyspawn

newls1 said:


> driver 21.5.2 is out... can someone please check to see if the wacky OC issue still exists with this driver PLEASE... I would do it right now but just started shift (24hrs) so wont be near PC till tomorrow.


The only "OC" I've gotten stable so far is Rage mode. Everything else I've tried, even limiting it to just Wattman's recommended OC without MPT changes, would crash during TimeSpy or gaming.


----------



## newls1

Spawnyspawn said:


> Only "OC" I got stable so far was Rage mode. Everything else I've tried and even limiting it to only Wattman recommended OC without MPT changes would crash during TimeSpy or gaming.


Isn’t that just amazing…. *** AMD


----------



## Enzarch

rodac said:


> Hello, community of AMD RX 6900 XT owners. Like @chispy, I purchased the PowerColor Ultimate RX 6900 XT, a beautiful card. I got it on Sunday, May 16th.
> Although I used DDU to remove the NVIDIA drivers (1080 Ti), the Radeon app reports that the card does not meet minimum requirements in the games performance tab.
> Likewise, the Oculus app I use for VR with the Link cable reports that my system does not meet the minimum requirements.
> Benchmarking with 3DMark, I get roughly the numbers published in reviews of this card.
> Can anyone help? Is this a known issue, or am I the only one having this problem? The performance in VR is clearly worse. Given that I paid a lot for this card, is this something that warrants a return? Thanks in advance for any comments. It would be useful to know if @chispy, who has the same card, noted the same issue. I have not had a chance to look into overclocking it yet.


I also did a very similar upgrade (from a 1080 Ti to a 6900 XT), and even after multiple DDU runs, manual cleanup, and tons of tweaking/testing, I had nothing but issues (including erratic performance and random driver crashes).
However, after a fresh Windows install everything is good now; stable and running smooth. This is probably your best option.


----------



## jonRock1992

I'm also coming from a 1080 Ti. I had forgotten how crappy AMD drivers can be. The last time I had an AMD GPU was with the HD 7970M!


----------



## CS9K

jonRock1992 said:


> I'm also coming from a 1080 Ti. I had forgotten how crappy AMD drivers can be. The last time I had an AMD GPU was with the HD 7970M!


AMD drivers since the launch of RDNA2 have been better and more consistent than AMD has had in many, many years.

The new Adrenalin drivers don't, however, play nicely with Nvidia drivers, nor with many/any of the tools that Nvidia users have been using for years.

I hate that you had to resort to scorched-earth to get windows running smoothly, but I'm glad things are going well now!


----------



## Enzarch

jonRock1992 said:


> I'm also coming from a 1080 Ti. I had forgotten how crappy AMD drivers can be. The last time I had an AMD GPU was with the HD 7970M!





CS9K said:


> AMD drivers since the launch of RDNA2 have been better and more consistent than AMD has had in many, many years.
> 
> The new Adrenalin drivers don't, however, play nicely with Nvidia drivers, nor with many/any of the tools that Nvidia users have been using for years.
> 
> I hate that you had to resort to scorched-earth to get windows running smoothly, but I'm glad things are going well now!


I certainly wasn't putting any blame on the Radeon drivers; there's no definitive way of determining whether it was Radeon being sensitive, Nvidia being invasive, or MS being janky.
I've been around a long time (still have my 3dfx Voodoo 3) and have had very few real, attributable driver issues with either vendor.

Frankly, with how complicated driver stacks are nowadays, a fresh install should be mandatory for any major hardware change.


----------



## newls1

CS9K, can you by chance give the 21.5.2 driver a shot and see if you can OC with it? Like I said before, I would, but I'm on duty today.


----------



## CS9K

newls1 said:


> CS9K can you by chance give the 21.5.2 driver a shot and see if you can OC with them? Like i said before i would but im on duty today


Seems stable so far at my old 2750MHz overclock. Scores are as good as they've ever been.

I scored 18 215 in Time Spy
Intel Core i7-9700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

I scored 11 400 in Port Royal
Intel Core i7-9700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## newls1

CS9K said:


> Seems stable so far at my old 2750MHz overclock. Scores are as good as they've ever been.
> 
> I scored 18 215 in Time Spy
> Intel Core i7-9700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> I scored 11 400 in Port Royal
> Intel Core i7-9700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


OK, thank you very much for taking the time to do that... I'll play with this driver when I get off duty tomorrow.


----------



## D1g1talEntr0py

CS9K said:


> Seems stable so far at my old 2750MHz overclock. Scores are as good as they've ever been.
> 
> I scored 18 215 in Time Spy
> Intel Core i7-9700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> I scored 11 400 in Port Royal
> Intel Core i7-9700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Same here. Running the same overclock that I was using on the previous driver. My scores aren't as good as yours, as my max is 2675 @ 1.075V. But I'm on air.

Time Spy - 21,222 (Graphics)
I scored 20 407 in Time Spy

Port Royal
I scored 11 049 in Port Royal


----------



## newls1

Well, I spent the entire day trying to OC on this 21.5.2 driver and it's "blah" at best... I'm over this game; guess I'll just wait and see how the new 2021 driver with FSR (the DLSS competitor) turns out and what changes they make. I got bored after "playing" TimeSpy for a few hours and decided to play with the memory, now that this XTX-H allows 1.2V and the memory seems to follow this voltage according to HWiNFO... I got to 2210 (fast timings) and 2330 (default timings), but kept it at 2210 Fast as benchies were slightly faster. Here is a quick pic of the mem OC in GPU-Z.


----------



## ZealotKi11er

Was able to get 2900MHz FS Extreme runs. The score was not the best because the system was not tuned at all.


----------



## newls1

ZealotKi11er said:


> Was able to get 2900MHz FS Extreme Runs. Score was not the best because the system was not tuned at all.


What settings? *** am I doing wrong here!! I can't even pass TS (non-Extreme) with the max slider above 2750, which actually works out to only an average speed of 26xx during the actual run. Benching this card is difficult, but games play perfectly.


----------



## LtMatt

newls1 said:


> what settings? *** am I doing wrong here!! I cant even pass T/S (non extreme) with max slider bar above 2750 which actually only is an average speed of 26xx during the actual run. Benching this card is difficult, but games play perfect.


Firestrike Standard and Extreme draw less wattage, so it's easier to crank the clock speed up without hitting instability. Timespy draws more, and Timespy Extreme draws even more than that, which means lower possible clock speeds. This plays out in the average clock speed reported by 3DMark.
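As a toy illustration of why a heavier test forces a lower stable clock under the same power limit, here is a crude activity × f × V² dynamic-power model with an assumed linear V/f curve. Every number in it is made up for illustration, not measured from a real card:

```python
def dyn_power_w(f_mhz, activity, base_w=300.0, base_mhz=2500.0,
                base_v=1.025, v_per_mhz=0.0005):
    """Crude model: draw scales with workload activity, frequency, and
    voltage squared; voltage is assumed to rise linearly with clock.
    All constants are illustrative placeholders."""
    v = base_v + v_per_mhz * (f_mhz - base_mhz)
    return activity * base_w * (f_mhz / base_mhz) * (v / base_v) ** 2

def max_stable_clock(budget_w, activity, lo=2000.0, hi=3200.0):
    """Bisect for the highest clock whose modelled draw fits the
    board power budget (dyn_power_w is monotonic in f here)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if dyn_power_w(mid, activity) <= budget_w:
            lo = mid
        else:
            hi = mid
    return lo

# Same 360W budget: the lighter (Firestrike-like) load sustains a
# noticeably higher clock than the heavier (Timespy-like) one.
print(round(max_stable_clock(360, activity=1.00)))
print(round(max_stable_clock(360, activity=1.25)))
```

The point isn't the exact MHz it prints; it's that with power draw per clock higher in Timespy, the same power budget intersects the curve at a lower frequency, matching the average clocks 3DMark reports.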


----------



## newls1

LtMatt said:


> Firestrike Standard and Extreme draw less wattage so it's easier to crank the clock speed up without hitting instability. Timespy draws more, and Timespy Extreme draws even more than that so that = lower possible clock speeds. This plays out in the average clock speed reported by 3DMark.


Oh, I misread the benchmark he ran; my brain just processed it as TS... I'm having withdrawals from that benchmark.


----------



## LtMatt

Days Gone has just been released. It's running well for me. 
Days Gone - Part 2 | 6900 XT Merc | 2160P - YouTube


----------



## lestatdk

Hi everyone. New in here, and I just got myself a 6900 XT Gaming X Trio.

It seems like most of you guys are on water, but I was wondering if anyone could give me some input on a safe power limit for this card on the stock air cooler?

These are my current values:










It's not near maxing out on TDC GFX or TDC SoC, so I only raised TDC GFX from 320 to 330. The power limit is at its default of 281.

It keeps increasing frequency as I raise the power limit, since it's constantly bumping against it at 100%. Temperatures appear to be lower than others I compare against on 3DMark.

Just because I can, I've given the card 3 full PCIe power cables so as to not use the breakout cable at all. Power should not be a problem.


----------



## Henry Owens

LtMatt said:


> Firestrike Standard and Extreme draw less wattage so it's easier to crank the clock speed up without hitting instability. Timespy draws more, and Timespy Extreme draws even more than that so that = lower possible clock speeds. This plays out in the average clock speed reported by 3DMark.


Strangely, Timespy Extreme will pass while regular Timespy crashes for me when overclocking.


----------



## jonRock1992

lestatdk said:


> Hi, everyone. New in here and just got myself a 6900xt gaming x trio.
> 
> Seems like most of you guys are on water, but was wondering of anyone could give me some input as to a safe power limit for this card on the stock air cooler ?
> 
> These are my current values:
> 
> View attachment 2511622
> 
> 
> It's not near maxing out on TDC GFX or TDC SoC, so I only raised TDC GFX from 320 to 330. Power limit is default at 281.
> 
> It's keeps increasing frequency with raising the power limit as it's constantly bumping it at 100% . Temperatures appear to be lower than others I compare to in 3dmark .
> 
> Just because I can I've given the card 3 full pcie power cables so as to not use the breakout cable at all. Power should not be a problem


I'm using the Red Devil Ultimate and I have TDC GFX and PL set to 350. In Wattman I set +15%, which gives around 400W. Apparently these GPUs are good up to 110C on the junction temp. My junction temp hovers around 100C with my OC applied at 400W and full GPU utilization. I'm considering a repaste to hopefully bring the junction temp down into the 90s at full utilization.


----------



## lestatdk

jonRock1992 said:


> I'm using the red devil ultimate and I have TDC GFX and PL set to 350. In Wattman I set to +15% which gives around 400W. Apparently these gpu's are good up to 110C on the junction temp. My junction temp hovers around 100C with my OC applied at 400W and full GPU utilization. I'm considering doing a repaste to hopefully bring the junction temp down into the 90's at full utilization.


OK, I'll try and bump it up a little more then


----------



## LtMatt

Henry Owens said:


> Strangely timespy extreme will pass while regular timespy crashes for me when overclocking.


Might be because Timespy puts out more FPS, and more FPS can mean more heat in some cases. I think others here have said similar about the regular Timespy bench.


----------



## chispy

If anyone is interested in a fairly priced RX 6900 XT, I have one for sale - [FS-USA] - RX6900xt - ASRock RX6900xt Phantom Gaming D OC


----------



## chispy

chispy said:


> If anyone is interested on a fair price rx6900xt i have one for sale - [FS-USA] - RX6900xt - ASRock RX6900xt Phantom Gaming D OC


Well it sold within 5 minutes  . It's gone to a new home.


----------



## D1g1talEntr0py

chispy said:


> Well it sold within 5 minutes  . It's gone to a new home.


Very cool to sell it at that price!


----------



## jonRock1992

I'm not at my PC to test this atm, but does enabling SAM affect benchmark scores?


----------



## weleh

Does anyone here know anything about certain drivers not having a voltage lock?

I read a comment on Reddit today about 2 or 3 older driver versions that leave the 6800's vcore unlocked, so you can increase it without bugs via MPT.


----------



## CS9K

weleh said:


> Anyone here has knowledge about certain drivers not having voltage lock?
> 
> Read a comment today on Reddit about 2 or 3 older driver versions that leave 6800 unlocked vcore which you can increase without bugs via MPT.


I've had a Radeon card since December and stay up to speed on the AMD subreddit, and I haven't heard of this myself.


----------



## ZealotKi11er

weleh said:


> Anyone here has knowledge about certain drivers not having voltage lock?
> 
> Read a comment today on Reddit about 2 or 3 older driver versions that leave 6800 unlocked vcore which you can increase without bugs via MPT.


On one of the early drivers you can go from 1.025V to 1.05V.


----------



## ptt1982

Strangely, the latest driver used to work at 2650MHz in TS (multiple runs), but now I have to lower it to 2620MHz to make a 100% pass. Extreme can hold higher clocks (maybe because of the lower fps), and games as well (stress-tested without Vsync and with RT on). I have installed new chipset drivers, keyboard and mouse software, and AI Suite 3. When I did a fresh Windows install, TS worked up to 2660MHz at the very best, but wasn't fully stable. Hmm... Maybe the runs I had were not stable to begin with, or the drivers truly behave strangely over time. That being said, 2620MHz has been the highest "always stable" clock for me, so it could simply be the limit of the silicon. Using MPT 350/375, fans at a steady 47% until 95 degrees, when they kick up to 75%. I score around 20600; the highest I've had so far was 21022 with the fans turned up and MPT even higher.

I’d like Powercolor to give me voltage up to 1.2v, because that would probably give me 2700mhz+ clocks. On air it might be a bit much, though.


----------



## weleh

ZealotKi11er said:


> One of the early driver you can go from 1.025 to 1.05.


Doesn't that work on the 6900 XT?


----------



## ptt1982

Update to my neverending grief with the Red Devil 6900xt: 

It seems my Red Devil's temps crawled back up recently, and the junction was going past 100C again. I opened it up and did a couple of repastes, but it got worse. I went out and bought 1.5mm pads and new 1mm pads for the capacitors etc. plus 4g of Gelid GC Extreme, and did a very careful job repasting and placing the VRAM pads, but to no avail. Maybe it's a broken heatsink, or I keep remounting it really badly. Without Vsync, and with RT on, the junction temp jumps past 100C quite fast, creating a delta of 35-40C against the core temp. This happens in TS as well. It also very suddenly drops to under a 15C delta and 20C, then shoots back up again.

In normal gameplay, locked at 60fps and Vsynced at 4K, the junction temp stays below 95C, mostly hovering around 70-85C, a 10-20C delta against the core temp. Only when I stop using MPT do the temps drop to a sustainable level. I wonder if it's the summer here in Tokyo (it's pretty hot inside), or if something is simply broken, or I keep doing bad mounts and have the wrong thermal pads (although VRAM temps stay under 75C), or I'm doing a bad job with the thermal paste (I try to spread it out relatively evenly). I've repasted multiple cards but never had this type of problem; they simply worked on the first go. The thermal pads are just about touching the heatsink, the screws are tight on the backplate, nothing is stuck in between, all cables are correctly attached... I wonder what's going on. Sadly, I don't have a warranty, so I have to live with a hot card or keep repasting it (I've already done probably 8-9 repastes in total). Undervolting works great, and stock temps are OK (I can live with them), but an OC at 2620MHz core and 2080MHz VRAM gets a bit too hot even at just 310PL/335TDC MPT settings.

Oh god, when is it going to end. This is my 9th GPU within 12 months, and the only one I actually like... All of the others worked well, but this one runs hot and has problems.
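The 35-40C junction-to-core delta described above is a common rule-of-thumb indicator of poor die contact rather than an undersized cooler. A rough sketch of that heuristic, assuming you read both sensors from a tool like HWiNFO (the thresholds are my own guesses, not an AMD spec):

```python
def mount_quality(core_c: float, junction_c: float) -> str:
    """Rough heuristic: a large junction-to-core delta under load
    usually points at poor die contact (paste/mount), since the
    hotspot sensor runs far ahead of the average core sensor."""
    delta = junction_c - core_c
    if delta <= 15:
        return "good contact"
    if delta <= 25:
        return "acceptable"
    return "suspect mount"  # 30C+ deltas match the symptoms above

# Example readings like the post: ~100C junction with ~62C core
print(mount_quality(62, 100))  # suspect mount
```

If the delta only blows out under load, that points the same way: the paste film is too thin or uneven right where the hotspot sits.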


----------



## Henry Owens

Are 350/370 the recommended MPT values for a ref 6900xt?


----------



## LtMatt

ptt1982 said:


> Update to my neverending grief with the Red Devil 6900xt:
> 
> It seems my Red Devil temps crawled back up recently, and the junction was going past 100C again. Opened it up, and did a couple of repastes, but it got worse. Went out and bought 1.5mm pads and new 1mm pads for capacitators etc. and Gelid GC Extreme 4g, and did a very careful job at repasting and putting vram pads etc, but no avail. Maybe it's a broken heatsink, or I'm constantly remounting it really badly. Without Vsync and RT on the the Junction Temp jumps past 100C quite fast, creating a delta of 35-40C between core temp. This happens in TS as well. It also very suddenly drops to under 15C delta and 20C, and again shoots back up.
> 
> In normal gameplay at 60fps locked and Vsynced at 4K the Junction Temp stays below 95C, mostly hovering around 70-85C, around 10-20C delta with core temp. Only when I stop using MPT the temps drop to sustainable level. I wonder if it is the summer here in Tokyo (it's pretty hot inside) or if something is just simply broken, or I'm constantly doing bad mounts and have wrong thermal pads (although Vram temps stay under 75C), or I'm doing a bad job with thermal paste (I try and spread it out relatively evenly). I've repasted multiple cards, but never had this type of problems, they simply worked on the first go. Thermal pads are touching heatsink just about, screws are tight on the backplate etc, nothing in and between, all cables correctly attached... I wonder what's going on. Sadly, I don't have a warranty so I have to live with a hot card or keep repasting it (did already probably 8-9 repastes in total.) Undervolting works great, and stock temps are OK (I can live with them), but OC at 2620mhz core and vram 2080mhz gets a bit too hot at even just mpt 310PL/335TDC settings.
> 
> Oh god, when is it going to end. This is my 9th GPU within 12 months, and the only one I actually like... All of the others worked well, but this one runs hot and has problems.


Sometimes I think repasting often leads to more problems than solutions unless you are putting a block on. These days the AIBs do a good job more often than not with the paste and mount job.


----------



## ptt1982

LtMatt said:


> Sometimes I think repasting often leads to more problems than solutions unless you are putting a block on. These days the AIBs do a good job more often than not with the paste and mount job.


Aye. This is that time. I have spent $75 trying to fix the temps. They were high from the beginning, to be honest, but now they are higher. I wonder what I am doing wrong. I think I will take it to pieces and do everything from scratch and see where that leads. Another day of taking it apart and putting it back together ahead of me. Pissed off; not the way I wanted to spend my weekend.

Maybe I should just put 2x the paste I typically do; the only time it stayed cool was when I put too much of it rather than too little. Somehow the heatsink doesn't have good contact with the GPU. I know that with good luck I can get the temps close to normal (review values), and if I do, I will leave it as is, undervolt it, and just play games and forget this tuning crap. Well, it's time and small money, not the end of the world...


----------



## jonRock1992

Is it necessary to put TDC higher than the power limit in MPT? I've just put both to 350 and all seems good, but I've noticed everyone is putting TDC higher.


----------



## jonRock1992

Your experience is really turning me off from repasting my red devil ultimate lol. Leads me to believe that the only thing that will make a difference is liquid metal or a water block.


----------



## Henry Owens

ptt1982 said:


> Update to my neverending grief with the Red Devil 6900xt:
> 
> It seems my Red Devil temps crawled back up recently, and the junction was going past 100C again. Opened it up, and did a couple of repastes, but it got worse. Went out and bought 1.5mm pads and new 1mm pads for capacitators etc. and Gelid GC Extreme 4g, and did a very careful job at repasting and putting vram pads etc, but no avail. Maybe it's a broken heatsink, or I'm constantly remounting it really badly. Without Vsync and RT on the the Junction Temp jumps past 100C quite fast, creating a delta of 35-40C between core temp. This happens in TS as well. It also very suddenly drops to under 15C delta and 20C, and again shoots back up.
> 
> In normal gameplay at 60fps locked and Vsynced at 4K the Junction Temp stays below 95C, mostly hovering around 70-85C, around 10-20C delta with core temp. Only when I stop using MPT the temps drop to sustainable level. I wonder if it is the summer here in Tokyo (it's pretty hot inside) or if something is just simply broken, or I'm constantly doing bad mounts and have wrong thermal pads (although Vram temps stay under 75C), or I'm doing a bad job with thermal paste (I try and spread it out relatively evenly). I've repasted multiple cards, but never had this type of problems, they simply worked on the first go. Thermal pads are touching heatsink just about, screws are tight on the backplate etc, nothing in and between, all cables correctly attached... I wonder what's going on. Sadly, I don't have a warranty so I have to live with a hot card or keep repasting it (did already probably 8-9 repastes in total.) Undervolting works great, and stock temps are OK (I can live with them), but OC at 2620mhz core and vram 2080mhz gets a bit too hot at even just mpt 310PL/335TDC settings.
> 
> Oh god, when is it going to end. This is my 9th GPU within 12 months, and the only one I actually like... All of the others worked well, but this one runs hot and has problems.


Looks like RMA time


----------



## newls1

jonRock1992 said:


> Is it necessary to put TDC higher than the power limit in MPT? I've just put both to 350 and all seems good, but I've noticed everyone is putting TDC higher.


I've often wondered if changing the TDC actually does anything at all. Anyone know the answer?


----------



## LtMatt

ptt1982 said:


> Aye. This is that time. I have spent $75 trying to fix the temps. They were high from the beginning to be honest, but now they are higher. I wonder what I am doing wrong. I think I will take it into pieces and do everything from scratch and see where that leads. Another day of taking it apart and putting back together ahead of me. Pissed off, not the way I wanted to spend my weekend.
> 
> Maybe I should just put 2x the paste I typically do, the only time it stayed cool was when I put too much of it rather than too little. Somehow the heatsink doesn't have a good connection to the GPU. I know I can get the temps with good luck to close to normal (review values), and if I do, I will leave it as is, undervolt it and just play games and forget this tuning crap. Well, it's time and small money, not the end of the world...


Or you could just sell it on Ebay or elsewhere, get back what you paid for it, or very close to it and buy another 6900 XT and not re-paste this one. That's what I did with my 2x Merc's. XFX did a solid paste job and my attempts using Liquid Metal only made things worse.


----------



## dagget3450

LtMatt said:


> Or you could just sell it on Ebay or elsewhere, get back what you paid for it, or very close to it and buy another 6900 XT and not re-paste this one. That's what I did with my 2x Merc's. XFX did a solid paste job and my attempts using Liquid Metal only made things worse.



I am about to install a water block on a ref 6900xt. I am hoping to use LM as well.

Oddball question: do you think AMD will ever bring back Crossfire profiles for RDNA2? I had a chance to try 2x 6900xt and was sadly let down that traditional CF with driver profiles stopped working. I had thought that if it worked I would be playing some of my favourite games at really high resolutions with a playable frame rate. Sadly that wasn't the case, so I am really bummed about the future of mGPU and the projects I wanted to do.


----------



## newls1

dagget3450 said:


> I am about to install a water block on a ref 6900xt. I am hoping to use LM as well.
> 
> Oddball question do you think AMD will ever bring back crossfire profiles for rdna2? I had a chance to try 6900xt x2 and was sadly let down that the traditional CF stopped working with profiles. I had thought if it worked I would be playing some of my fav games in so really high resolution and get a playable frame rate. Sadly that wasn't the case. So I am really bummed about future of mgpu and projects I wanted to do.


To my knowledge DX12 titles use "explicit" multi-GPU support, and multi-GPU (be it SLI or CF) will work as long as the game or benchmark supports it. Driver-level support stopped ages ago, unfortunately.


----------



## ptt1982

Henry Owens said:


> Looks like RMA time


Yeah... Unfortunately I have no warranty left. Bought it used, and Powercolor states that if I opened it myself or it was bought used, they won’t give me an RMA. I can get it working to a degree and use it, but it runs hot. Just need to put a crapload of paste on it. I’m going to do a full disassembly and see where that gets me.


----------



## dagget3450

newls1 said:


> to my knowledge dx12 titles use "explict" gpu support and "multi gpu" either it be Sli/CF will work as long as the game or benchmark support it. driver level support stopped ages ago unfortunately


Yes, it does work in about 5 or so things, but even benchmarks are screwed because of this. Some of the 3DMark benchmarks won't run it due to the profiles being gone. It really sucks as a benchmarker and gamer to lose so much support. I would be happy even with an unofficial way to force CF profiles. I had no issue doing it with my Vega cards, but apparently with RDNA they gave it the middle finger.


----------



## ptt1982

Apologies for multiple posts, I don’t mean to hijack the thread.

Update with a Solution:

There’s a huge gap between the heatsink of the Red Devil 6900xt and the GPU by design. The solution is simply to put a crapload of thermal paste on; otherwise the temps will go high. I believe this is the cause of many of the high temps on 6900xts out there: the factory paste job has to be very precise and there needs to be a lot of it, like a lot. Maybe a pad could help narrow the gap. I believe spreading the paste and then putting more on top of that works best. I even had to check whether the PCB was straight, and it was. Initially I thought it was the thermal pads that were too thick, but it’s the amount of paste that makes the difference.

Use 1.5mm pads if you do a repaste; the squishy ones that come with the card can break or might not be reusable (might not touch the heatsink the second time around). Repasting can lower temps drastically, but you have to be careful and do it right. My advice: don’t do it if your temps are within spec. If your temps are high, RMA first; if Powercolor won’t help you, repaste with a ton of paste and you should be fine.


----------



## blackzaru

Here are my results for my AMD Reference 6900XT. It seems like an alright, although not great, overclocker, averaging around the 2650 MHz mark in most tests (2700MHz for Superposition), but, it somehow goes to hell with TimeSpy (only stable at 2575MHz or less), despite running TimeSpy Extreme fine. Go figure.

BTW, the CPU is not in any kind of "final OC settings", so, please disregard those stats.


----------



## dagget3450

ptt1982 said:


> Apologies for multiple posts, I don’t mean to hijack the thread.
> 
> Update with a Solution:
> 
> There’s a huge gap between the heatsink of Red Devil 6900xt and the gpu by design. The solution is simply put crapload of thermal paste, otherwise the temps will go high. I believe this to be the cause of many of the high temps on 6900xts out there, the factory paste job has to be very precise and there needs to be a lot of it, like a lot. Maybe a pad could help to narrow the gap. I believe spreading the paste and putting more on top of it after that works the best. I had to even check if the PCB is straight, and it was. Initially I thought it was the thermal pads that were too thick, but it’s the amount of the paste that makes the difference. Use 1.5mm pads if you do a repaste, the squishy ones that come with the card can break or might not be reusable (might not touch heatsink the second time around). Repasting can lower temps drastically, but you have to be careful and do it right. My advice: Don’t do it if your temps are within spec. If your temps are high, first RMA, if Powercolor won’t help you, repaste with a ton of paste and you should be fine.


Sorry if I missed anything earlier. The reference cooler has a thermal pad on the GPU die according to the Gamers Nexus teardown. Does the Red Devil not have the same thing? Perhaps it was missing from a previous owner?


----------



## ptt1982

dagget3450 said:


> Sorry if I missed anything earlier. The reference cooler has a thermal pad on the gpu die according to gamer nexus teardown. Does the red devil not have the same thing? Perhaps it was missing from a previous owner?


Correct, there’s no pad on my Red Devil 6900xt. I believe that is only on the reference design. This one had a lot of paste, with one spot that wasn’t covered well. After the first repaste things got a lot better, but after that it was a struggle, as I didn’t know how much paste to put on (I'm a bit scared of putting on too much, but clearly too little creates a problem with the heatsink).

I am genuinely thinking of putting this under water if I can’t get the temps under control. I’ll try repasting three more times today. Might just put it on a waterblock + AIO if it doesn’t work!


----------



## Henry Owens

ptt1982 said:


> Correct, there’s no pad on my Red Devil 6900xt. I believe that is only on the reference design. This one had a lot of paste, with one spot that wasn’t covered well. After first repaste things got a lot better, but after that it was a struggle, as I didn’t know how much paste to put there (am a bit scared to put too much, but clearly that creates a problem with the heatsink.)
> 
> I am genuinely thinking of putting this on water if I can’t get the temps under control. I’ll try repasting three more time today. Might just put it on a waterblock + AIO if it doesn’t work!


That is strange. Were you the first one to open it? The thermal pad on my ref 6900 was a huge pain to remove, taking probably 30 minutes, removing it bit by bit.


----------



## LtMatt

blackzaru said:


> Here are my results for my AMD Reference 6900XT. It seems like an alright, although not great, overclocker, averaging around the 2650 MHz mark in most tests (2700MHz for Superposition), but, it somehow goes to hell with TimeSpy (only stable at 2575MHz or less), despite running TimeSpy Extreme fine. Go figure.
> 
> BTW, the CPU is not in any kind of "final OC settings", so, please disregard those stats.
> 
> [benchmark screenshots attached]


Those results are very good indeed considering it is the reference model. I would say you have a good sample, or perhaps you live in the Arctic circle.


----------



## Henry Owens

LtMatt said:


> Those results are very good indeed considering it is the reference model. I would say you have a good sample, or perhaps you live in the Arctic circle.


Should we be aiming for regular Time Spy stability for our overclocks? Mine is picky, sometimes passing with a score and other times finishing with no score. Don't want to gimp myself.


----------



## ZealotKi11er

Henry Owens said:


> Should we be aiming for timespy regular stability for our overclocks? Mine is picky while sometimes passing with score and othertimes finishing with no score. Don't want to gimp myself.


If you want 24/7 OC I would use TimeSpy. You could have more aggressive OC but it will depend on the game and how much RT it has.


----------



## newls1

RDNA2 is so weird to bench with, and it's certainly different on a per-driver basis... I gave up on TS and the other benchies... I set 2740/2860 on my sliders @ 1.2v and 2210 fast timings on mem and just game now... way more enjoyable than bashing my keyboard every time TS errors out to the desktop!


----------



## Emmett

Hello. Long time Nvidia user, just picked up a waterblocked Red Devil Ultimate 6900XT to mess with and to support AMD. Don't laugh, but I like to play Rocket League a bit, and noticed with Nvidia that locking the core clock makes the game much smoother.

I have been trying the Min/Max sliders in AMD software, but the clocks fluctuate wildly in a cycle, like 33/125/2100 (max slider set to 2100, min 2000), then it repeats.

Is Wattman removed from the software? I do not see any P0, P1, etc. states that my googling around suggests using.


----------



## newls1

Just played an hour of Crysis Remastered and the game ran in the range of 2780-2820MHz @ around a 250-300w GPU load with R/T off. She played fine with no crashes. If I enable R/T at those clocks, power goes to 280-330w, clocks go to 2790-2850, and the game CTDs after about 20 mins! So I'm sure if I lowered the OC to no more than 2800-2810MHz it would be stable with R/T enabled.


----------



## 99belle99

Emmett said:


> hello. Long time nvidia user, just picked up a Red devil ultimate 6900XT waterblocked to mess with and support AMD.
> Don't laugh, but I like to play rocket league a bit, and noticed with Nvidia that locking core clock makes the
> game much smoother.
> 
> I have been trying Min/Max slider in AMD software, but clocks fluctuate wildly in a cycle, like 33/125/2100 (max slider set to 2100 MIN 2000) then repeats.
> 
> Is wattman removed from software? I do not see any P0,P1 ETC from googling around where is it suggested to use.


You cannot set a clock and stay at that frequency with AMD; it is always fluctuating. You can even set a high frequency in Wattman and it will average below what you set.


----------



## Emmett

Thank you, I'll just stick to Nvidia for that game, and I have noticed it averaging a little below the set clock, like you said.


----------



## blackzaru

LtMatt said:


> Those results are very good indeed considering it is the reference model. I would say you have a good sample, or perhaps you live in the Arctic circle.


Waterblocked, but, nope, it was a good 28 degrees Celsius, outside, yesterday. (Although, I maintain temp at a constant 21 degrees inside)


----------



## dagget3450

99belle99 said:


> You cannot set a clock and stay at that frequency with AMD it is always fluctuating. You can even set a high frequency in Wattman and it will average below what you set.


I was able to do that on a dx9 title and bring up fps. Maybe it doesn't work in all games.


----------



## ptt1982

Finally got my Red Devil's temps closer to normal, although even undervolted to 1.1v, the junction goes up to 100C on air at 52% fans with Vsync off; with Vsync on it stays under 90C. There's still an extra 20C on it, and after inspecting closely, it is purely a mounting/mounting-pressure issue. The temperature inside the apartment is probably 30C, so that adds a couple of degrees, and it's 30C+ outside. It's not only the pads, but using 1.5mm pads from Gelid solved most of the problems; any other pads were too firm. VRAM temps are perfect. To note, I could continue disassembling the card and changing pastes, but it costs a lot of money and time to keep doing it over and over again. To get the most performance out of it, I would need to put it under water. I'm leaning towards a 1081mv undervolt and 2450mhz clocks and leaving it as is; however, the tinkerer in me wants to ask...

A question: what's the simplest, most hassle-free way to put the Red Devil 6900xt under water? I've seen Alphacool blocks at 130€, but I think I would need a custom loop. Is there an AIO for Alphacool blocks that is easy to install and good? I could work with a 360mm radiator in my Lancool II Performance case. (It would be a bit of a shame to put it under water in an airflow case!)


----------



## MickeyPadge

newls1 said:


> just played an hour of crysis remastered and game played in the range of 2780-2820MHz @ around a 250-300w GPU load and R/T OFF. She played fine with no crashes. If I enable R/T and use those clocks, power goes to 280-330w and clocks go to 2790-2850 and game CTD after about 20mins! So im sure if i lowered OC to no more then 2800-2810MHz it would be stable with R/T enabled.


Run Port Royal for RT stability check


----------



## EastCoast

I wouldn't. They have, imo, a bad reputation for implementing Nvidia's way of coding and trying to push it as the standard.

I would wait until AMD starts pumping out more games with RT optimized for their hardware. Besides, anyone with a 6000 series card already knows they are an early RT adopter anyway.

It doesn't help that AMD didn't take the time to create their own RT benchmark demo. But here we are.


----------



## Henry Owens

Any advice on MPT settings for a reference card?


----------



## marcoschaap

Henry Owens said:


> Advise on mpt settings for reference card?


With stock cooling?


----------



## Henry Owens

marcoschaap said:


> With stock cooling?


Waterblock, EKWB quantum.


----------



## marcoschaap

Henry Owens said:


> Waterblock, EKWB quantum.


I would try 375 GFX Power and TDC limit, and try your stability with altering clocks in Radeon Software.


----------



## blackzaru

marcoschaap said:


> I would try 375 GFX Power and TDC limit, and try your stability with altering clocks in Radeon Software.


375 GFX power might be a bit on the high side. With 15% additional power in the settings, that means a 431.25W limit. The card most likely won't reach that, but it's still a lot for the reference board.

I'm running 350W/400A on mine, with the exact same setup. (Which means 402.5W peak power.)
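For anyone following the MPT-plus-slider math in these posts: the effective limit is just the MPT GFX power scaled by the Wattman percentage. A trivial sketch of that arithmetic:

```python
def effective_gfx_limit(mpt_power_w: float, slider_pct: float) -> float:
    """The Wattman power slider scales the GFX power limit set in MPT,
    so the real ceiling is MPT watts * (100 + slider %) / 100."""
    return mpt_power_w * (100 + slider_pct) / 100

print(effective_gfx_limit(350, 15))  # 402.5
print(effective_gfx_limit(375, 15))  # 431.25
```

So 350W in MPT with +15% in Wattman lands at 402.5W, while 375W would allow 431.25W — the difference being debated above.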


----------



## marcoschaap

blackzaru said:


> 375 GFX power might be a bit on the high side. With 15% additional power in the settings, this means 431.25W limit. Although the card won't reach that (most likely), it's a bit on the high side for the reference card.
> 
> I'm running 350W/400Amps on mine, with the exact same setup. (Which means 402.5W peak power.)


That's probably true, but I treat the limits I set in MPT as a hard limit; I disable the power control in Radeon Software and do not use it.


----------



## jonRock1992

ptt1982 said:


> Finally got my Red Devil's temps closer to normal, although even undervolted to 1.1mv, junction goes up to 100C on air at 52% fans when VsyncOff, and VsyncOn it stays under 90C. There's an extra 20C on it still, and after inspecting closely, it is purely a mounting/mounting pressure issue. The temperature inside the apartment is probably 30C, so that probably adds a couple of degrees, and it's 30C+ outside. It's not only the pads, but using 1.5mm pads from Gelid solved most of the problems, any other pad were too firm. Vram temps are perfect. To note, I could continue disassembling the card and changing pastes, but it costs a lot of money and time to keep doing it over and over again. To get the most performance out of it, I would need to put it under water. I'm leaning towards 1081mv undervolt and 2450mhz clocks and leaving it as is, however, the tinkerer in me wants to ask...
> 
> A Question: what's the most simple, hassle-free way to put the Red Devil 6900xt under water. I've seen Alphacool blocks at 130€, but I think I would need a custom loop. Is there an AIO for Alphacool blocks that is easy to install and good? I could work with a 360mm radiator in my Lancool II Performance case. (would be a bit shame to put it under water in an airflow case!)


I'd also like to know the easiest way to put the Red Devil Ultimate under water. The stock cooler is trash imo. I've been hitting 110C occasionally and thermal throttling with mine. Wish there was an AIO solution for it.


----------



## Henry Owens

blackzaru said:


> 375 GFX power might be a bit on the high side. With 15% additional power in the settings, this means 431.25W limit. Although the card won't reach that (most likely), it's a bit on the high side for the reference card.
> 
> I'm running 350W/400Amps on mine, with the exact same setup. (Which means 402.5W peak power.)


Ok, so you do 350/400 and leave Wattman at 0 percent?


----------



## blackzaru

Henry Owens said:


> Ok so you do 350/400 and leave the wattman at 0 percent


No, the results I posted a few days ago (previous page, I think) were with 350/400 and Wattman at 15%, which brings the max power draw to 402.5W (350W * 1.15).


----------



## blackzaru

On a sidenote:

Anyone here tried liquid metal on their 6900XT (waterblocked, logically) and have feedback versus a high-end paste (12-14 W/mK or so)?


----------



## ptt1982

jonRock1992 said:


> I also would like to know the easiest way to put the red devil ultimate under water. The stock cooler is trash imo. I've been hitting 110C occasionally and thermal throttling with my red devil ultimate. Wish there was an AIO solution for it.


Finally someone with the same problem as I have! What settings are you using on Wattman / MPT?

I think the only way to get the max out of these cards is to put them under water. It might cost 30% more and you only get 10% more performance for it, but the temps will be lower and the card itself will stay more efficient. I think the Red Devil 6900xt cooler has a mounting problem; it is incredibly hard to remount it correctly. I’ve worked on at least 10 other GPUs, and every single time I did a repaste and thermal pad change I got better results than before. The Red Devil 6900xt is the only card that gave me the same (initially better, then temps go high within two weeks due to a loose cooler) or worse results.

Alphacool has a waterblock for it. I think I’ll put this thing under water with a 360 radiator and crank the clocks up. It’s going to be a fun and new project as well.

The other option is to undervolt the card to 1050mv or 1082mv and set aggressive fans, but who buys an enthusiast-level card for that kind of stuff? I bought this card so that I can overclock it, not underclock it or run it at stock. For example, Days Gone at 4K max settings occasionally drops under 60 fps (and stutters horribly while at it; this is fps related, not the game-logic slowdown thing Digital Foundry found out about) when I keep the card undervolted and at 2400mhz. All of that goes away at 2700mhz, and it is a much more enjoyable gaming experience. Just need a waterblock to keep the card cool and quiet now...


----------



## jonRock1992

The Red Devil Ultimate is just a terrible air-cooled GPU. I bought into the hype around it and went for it. I'm upset that I did. This card was $2300 and it thermal throttles. Come on. That's just terrible engineering. RMA'ing this thing probably wouldn't be worth it since I'd probably just either get it back with no change or get a new one with the same problem. It's just the heatsink design that's crap. It's extremely loud if you crank the fans up, and higher fan speeds barely even affect thermal performance! There definitely seems to be a mounting problem. Maybe it could be "fixed" with a copper shim and some liquid metal, but that's just janky.


----------



## dagget3450

blackzaru said:


> No,the results I posted a few days ago (previous page I think) were with 350/400 and wattman at 15%, which brings max power draw at 402.5W (350W*1.15).


I plan to do this very soon; my only holdup is settling on a CPU. I have an X299 7940X, or I may just get a 5800X for lower total heat/thermals for water-cooling.


----------



## blackzaru

dagget3450 said:


> I plan to do this very soon my only hold up is settling on a cpu. I have x299 7940x or I may just get a 5800x for lower heat/thermals of total for water-cooling.


Update me on the result.


----------



## LtMatt

jonRock1992 said:


> The Red Devil Ultimate is just a terrible air-cooled GPU. I bought into the hype around it and went for it. I'm upset that I did. This card was $2300 and it thermal throttles. Come on. That's just terrible engineering. RMA'ing this thing probably wouldn't be worth it since I'd probably just either get it back with no change or get a new one with the same problem. It's just the heatsink design that's crap. It's extremely loud if you crank the fans up, and higher fan speeds barely even affect thermal performance! There definitely seems to be a mounting problem. Maybe it could be "fixed" with a copper shim and some liquid metal, but that's just janky.


Are you outside the return window? I agree it does not seem the best, as a few have had similar problems. Based on size alone it should be one of the best in theory; it does sound like a mounting issue.

In my opinion the XFX Merc is by far the best air cooling solution for the 6900 XT, but I believe even that would struggle with 1.2v at full load; with an unlocked power limit (400W) and 1.175v in Ray Tracing titles it requires 80% fan speed to stay below a 110c junction. You can dial that down to 70-75% if Ray Tracing is disabled.

For 24/7 I just run the card at 2310-2410Mhz in Radeon Software (2350Mhz+ actual in-game core clock) and 1.050v set in Radeon Software, which is 0.950v-0.975v actual in-game voltage as displayed by HWINFO64. With these settings I can run 25% fan speed at just 1200RPM and keep temps at or around 100c junction.

I keep being tempted by the Sapphire 6900 XTXU, but I know it's almost certainly a bad idea. Can anyone talk me out of it? ▷ Sapphire Radeon RX 6900 XTU ULTIMATE Toxic Wa… | OcUK (overclockers.co.uk)
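On the set-versus-actual voltage gap mentioned above: the Radeon Software slider behaves more like a ceiling on the voltage/frequency curve than a fixed value, so the card reports less than the set voltage under load. A toy illustration (the linear curve is invented for the example, not real RDNA2 telemetry):

```python
def actual_voltage(set_cap_v: float, load_freq_mhz: float) -> float:
    """The Wattman voltage setting caps the top of the V/F curve;
    under load the card runs whatever the curve demands at the
    current clock, which is usually below the cap."""
    # Invented linear V/F curve, for illustration only.
    curve_v = 0.70 + (load_freq_mhz - 1500) * 0.00025
    return round(min(set_cap_v, curve_v), 3)

# 1.050v set, ~2400MHz in game -> ~0.925v actual here, in the same
# ballpark as the 0.95-0.975v HWiNFO readings reported above.
print(actual_voltage(1.050, 2400))
```

With the cap set well above what the curve asks for at the sustained clock, it's lowering the cap further that actually produces an undervolt.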


----------



## Henry Owens

LtMatt said:


> Are you outside the return window? I agree it does not seem the best as a few have had similar problems. Based on size alone it should be one of the best in theory, does sound like a mounting issue.
> 
> In my opinion the XFX Merc is by far the best air cooling solution for the 6900 XT, but i believe even that would struggle with 1.2v at full load as with an unlocked power limit (400W) and 1.175v in Ray Tracing titles it requires 80% fan speed to stay below 110c junction. You can dial that down to 75-70% if Ray Tracing is disabled.
> 
> For 24/7 i just run the card at 2310-2410Mhz in Radeon Software (2350Mhz+ actual in game core clock) and 1.050v set in Radeon Software, which is 0.950v-0.975v actual in game voltage as displayed by HWINFO64. With these settings i can run 25% fan speed at just 1200RPM and keep temps at or around 100c junction.


100c just does not sound healthy


----------



## LtMatt

Henry Owens said:


> 100c just does not sound healthy


Why? The maximum spec is 110c before thermal throttling and around 120c before shutdown. I'm quite happy to let the temperatures go high as long as nothing throttles, so I can keep a silent system.


----------



## ptt1982

LtMatt said:


> Are you outside the return window? I agree it does not seem the best as a few have had similar problems. Based on size alone it should be one of the best in theory, does sound like a mounting issue.
> 
> In my opinion the XFX Merc is by far the best air cooling solution for the 6900 XT, but i believe even that would struggle with 1.2v at full load as with an unlocked power limit (400W) and 1.175v in Ray Tracing titles it requires 80% fan speed to stay below 110c junction. You can dial that down to 75-70% if Ray Tracing is disabled.
> 
> For 24/7 i just run the card at 2310-2410Mhz in Radeon Software (2350Mhz+ actual in game core clock) and 1.050v set in Radeon Software, which is 0.950v-0.975v actual in game voltage as displayed by HWINFO64. With these settings i can run 25% fan speed at just 1200RPM and keep temps at or around 100c junction.
> 
> I keep being tempted by the Sapphire 6900 XTXU, but I know it's almost certainly a bad idea. Can anyone talk me out of it? ▷ Sapphire Radeon RX 6900 XTU ULTIMATE Toxic Wa… | OcUK (overclockers.co.uk)


LtMatt, don't do it!


----------



## Henry Owens

blackzaru said:


> 375 GFX power might be a bit on the high side. With 15% additional power in the settings, this means 431.25W limit. Although the card won't reach that (most likely), it's a bit on the high side for the reference card.
> 
> I'm running 350W/400Amps on mine, with the exact same setup. (Which means 402.5W peak power.)


So you're running 400W peak power on a reference card? Also, why not set 400 in MPT and leave 0% on the slider?
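For anyone following along, the math behind those numbers is just the MPT base power times the slider percentage. A quick sketch (not any official tool; MorePowerTool sets the base GFX/board limit and the Radeon Software slider adds a percentage on top):

```python
def effective_power_limit(mpt_watts: float, slider_percent: float) -> float:
    """Board power ceiling after the Radeon power slider is applied.

    mpt_watts: base power limit set in MorePowerTool (W)
    slider_percent: Radeon Software power limit slider (+%)
    """
    return round(mpt_watts * (1 + slider_percent / 100), 2)

# The two configurations discussed above:
print(effective_power_limit(375, 15))  # 431.25
print(effective_power_limit(350, 15))  # 402.5
```

Which is why setting the full target in MPT and leaving the slider at 0% gives you the same ceiling with one less moving part.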


----------



## marcoschaap

Henry Owens said:


> So you're running 400W peak power on a reference card? Also, why not set 400 in MPT and leave 0% on the slider?


Exactly what I was saying.


----------



## EastCoast

dagget3450 said:


> I plan to do this very soon my only hold up is settling on a cpu. I have x299 7940x or I may just get a 5800x for lower heat/thermals of total for water-cooling.


Get the 5800X. But it would be well worth getting a Dark Hero so you can use DOS OC. I'm not sure if DOS OC has been rolled out to other Asus boards yet, as I read it's more of a hardware limitation for them than a software one.

It's so easy to use DOS OC: CPU voltage, amp switching point and frequency. That's it.

In any case, I've found that if you use the Radeon Performance tab it's best to use Automatic, and GPU OC seems to get the best results overall. Too bad you can't do GPU and VRAM OC at the same time.


----------



## dagget3450

EastCoast said:


> Get the 5800X. But it would be well worth getting a Dark Hero so you can use DOS OC. I'm not sure if DOS OC has been rolled out to other Asus boards yet, as I read it's more of a hardware limitation for them than a software one.
> 
> It's so easy to use DOS OC: CPU voltage, amp switching point and frequency. That's it.
> 
> In any case, I've found that if you use the Radeon Performance tab it's best to use Automatic, and GPU OC seems to get the best results overall. Too bad you can't do GPU and VRAM OC at the same time.


Well, sadly I already have an X570 mobo, so all I need is a CPU. I am contemplating selling some of my old GPUs to buy the 5800X... Cannot believe what old GPUs are selling for right now on eBay.


----------



## EastCoast

dagget3450 said:


> Well, sadly I already have an X570 mobo, so all I need is a CPU. I am contemplating selling some of my old GPUs to buy the 5800X... Cannot believe what old GPUs are selling for right now on eBay.


That's too bad. You should still be fine though. Just make sure you get a top notch cooling solution. The CPU boosts itself as long as you keep temps below 70C in game.


----------



## coelacanth

jonRock1992 said:


> I also would like to know the easiest way to put the red devil ultimate under water. The stock cooler is trash imo. I've been hitting 110C occasionally and thermal throttling with my red devil ultimate. Wish there was an AIO solution for it.


Are you Red Devil Ultimate guys with the high temps running the stock fan profile?


----------



## jonRock1992

Yeah. It's a pretty aggressive stock fan profile with the OC bios switch on. I've seen it going to around 2200 RPM and it's very loud. Manually increasing the fan speed barely does anything for me. It will maybe drop the junction temp by 2C to 3C.


----------



## coelacanth

jonRock1992 said:


> Yeah. It's a pretty aggressive stock fan profile with the OC bios switch on. I've seen it going to around 2200 RPM and it's very loud. Manually increasing the fan speed barely does anything for me. It will maybe drop the junction temp by 2C to 3C.


Interesting. The XFX 6900 XT Merc peaks about 1,350 RPM at stock benching Time Spy and Junction temp peaks about 92C and it's silent. If I ramp the fans up to about 2,600 (about 75%) it's still quiet and Junction temp is about 82C. This is without messing with the power.


----------



## Spawnyspawn

LtMatt said:


> I keep being tempted by the Sapphire 6900 XTXU, but I know it's almost certainly a bad idea. Can anyone talk me out of it? ▷ Sapphire Radeon RX 6900 XTU ULTIMATE Toxic Wa… | OcUK (overclockers.co.uk)


Same here. It's so difficult to find information on its performance and benchmarks. So if you have any, please share!
Would be a shame if I had to buy one to find out for myself.


----------



## thomasck

Hi guys, I've just joined the club. I am using dual 1440p 144Hz screens, and I notice the memory clock is always stuck at 19XXMHz, even in Windows. Is that normal? I have the MSI 6900 XT, which seems to be the reference card.


----------



## Henry Owens

thomasck said:


> Hi guys, I've just joined the club. I am using dual 1440p 144Hz screens, and I notice the memory clock is always stuck at 19XXMHz, even in Windows. Is that normal? I have the MSI 6900 XT, which seems to be the reference card.


Most of the time, running dual screens causes higher idle clocks.


----------



## thomasck

@Henry Owens I didn't face that with the Radeon VII... Any solution? Gonna Google it.

Sent from my Pixel 2 XL using Tapatalk


----------



## 6u4rdi4n

thomasck said:


> @Henry Owens I didn't face that with the Radeon VII... Any solution? Gonna Google it.
> 
> Sent from my Pixel 2 XL using Tapatalk


Does it matter?


----------



## LtMatt

coelacanth said:


> Interesting. The XFX 6900 XT Merc peaks about 1,350 RPM at stock benching Time Spy and Junction temp peaks about 92C and it's silent. If I ramp the fans up to about 2,600 (about 75%) it's still quiet and Junction temp is about 82C. This is without messing with the power.


Have you got the Black Edition or the Ultra? The former has a slightly higher boost clock (10-20MHz or so) and a slightly higher default power limit (289W vs 281W).


----------



## coelacanth

LtMatt said:


> Have you got the Black Edition or the Ultra? The former has a slightly higher boost clock (10-20MHz or so) and a slightly higher default power limit (289W vs 281W).


I have the Ultra.


----------



## thomasck

6u4rdi4n said:


> Does it matter?


Does it matter what? 5700 XTs had the same issue; R7s didn't. This is AMD's issue, not mine.

Sent from my Pixel 2 XL using Tapatalk


----------



## LtMatt

6u4rdi4n said:


> Does it matter?


Not in the slightest.


----------



## thomasck

@LtMatt from my point of view it does. At the "correct" clock, power draw in Windows goes from 45W to 12W and memory temperature from 55C to 36C.


----------



## LethaR

HeLeX63 said:


> Thank you very much !!! Excited to test the new card.
> 
> I just increased the total power limit after the 15% allowable in the AMD software to boost to 340W. Clock speeds now stay locked between 2600 and 2690 MHz on the core.


Hello man! I've been an owner of an XFX Merc 319 Radeon RX 6900 XT Black Gaming for two months now. I'm pretty shocked by the performance and temperatures of this GPU, it's amazing. But using MSI Afterburner, even setting 2600MHz on the core manually, the card tops out between 2500-2550MHz at around 60-65C in games when they use 99% of the GPU. I tried to manually OC the memory as well, but every time I go above 2100MHz I start to see weird things on the screen while playing games. I'm not very used to OCing, so I don't know if I should increase the power limit past 100%, or if I should OC using Radeon Software instead of Afterburner. Also, I can't use Rage Mode in the Adrenalin software: every time I select it and close or minimize the program, when I come back Rage Mode is unselected again, not sure why. Any suggestions or recommendations for me?
I'm pairing the 6900 XT with a Ryzen 7 5800X at 4.7GHz all cores, 64GB 3600MHz CL18 TeamGroup Xtreem White and an ASUS B550-A Gaming.


----------



## LtMatt

thomasck said:


> @LtMatt from my point of view it does. At the "correct" clock power draw while in Windows goes from 45W to 12W, memory temperature goes from 55C to 36C.


And that does not impact the longevity of the GPU in any way. The clocks run at a higher frequency due to the vertical blanking timing used by your display. If you lower the refresh rate, this adjusts the timing. You may find that a lower refresh rate relaxes the timing enough to allow the memory clock to drop - if it bothers you that much.
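To make the timing point concrete: memory reclocking has to fit inside the vertical blanking window, whose length you can estimate from a modeline. A rough sketch with illustrative CVT-RB-style numbers (assumed for the example, not measured from any card in this thread):

```python
def vblank_time_us(pixel_clock_mhz: float, h_total: int,
                   v_total: int, v_active: int) -> float:
    """Per-frame vertical blanking time in microseconds.

    One scanline takes h_total / pixel_clock; the blanking window is
    the (v_total - v_active) scanlines outside the visible area.
    """
    line_time_us = h_total / pixel_clock_mhz
    return (v_total - v_active) * line_time_us

# Illustrative 2560x1440 timings (assumed numbers):
print(round(vblank_time_us(580.08, 2720, 1481, 1440), 1))  # ~192 us at 144Hz
print(round(vblank_time_us(241.70, 2720, 1481, 1440), 1))  # ~461 us at 60Hz
```

The higher the refresh rate (or the more displays sharing the engine), the shorter the window, which is why the driver keeps VRAM pinned high rather than risk a visible glitch mid-reclock.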


----------



## HeLeX63

LethaR said:


> Hello man! I've been an owner of an XFX Merc 319 Radeon RX 6900 XT Black Gaming for two months now. I'm pretty shocked by the performance and temperatures of this GPU, it's amazing. But using MSI Afterburner, even setting 2600MHz on the core manually, the card tops out between 2500-2550MHz at around 60-65C in games when they use 99% of the GPU. I tried to manually OC the memory as well, but every time I go above 2100MHz I start to see weird things on the screen while playing games. I'm not very used to OCing, so I don't know if I should increase the power limit past 100%, or if I should OC using Radeon Software instead of Afterburner. Also, I can't use Rage Mode in the Adrenalin software: every time I select it and close or minimize the program, when I come back Rage Mode is unselected again, not sure why. Any suggestions or recommendations for me?
> I'm pairing the 6900 XT with a Ryzen 7 5800X at 4.7GHz all cores, 64GB 3600MHz CL18 TeamGroup Xtreem White and an ASUS B550-A Gaming.


I am at 2730MHz sustained frequency in games, no problem - this is with a waterblock now. On the memory, some cards can only do 2100MHz and some can do 2150MHz; it depends on the card you have.


----------



## Spawnyspawn

LethaR said:


> Hello man! I've been an owner of an XFX Merc 319 Radeon RX 6900 XT Black Gaming for two months now. I'm pretty shocked by the performance and temperatures of this GPU, it's amazing. But using MSI Afterburner, even setting 2600MHz on the core manually, the card tops out between 2500-2550MHz at around 60-65C in games when they use 99% of the GPU. I tried to manually OC the memory as well, but every time I go above 2100MHz I start to see weird things on the screen while playing games. I'm not very used to OCing, so I don't know if I should increase the power limit past 100%, or if I should OC using Radeon Software instead of Afterburner. Also, I can't use Rage Mode in the Adrenalin software: every time I select it and close or minimize the program, when I come back Rage Mode is unselected again, not sure why. Any suggestions or recommendations for me?
> I'm pairing the 6900 XT with a Ryzen 7 5800X at 4.7GHz all cores, 64GB 3600MHz CL18 TeamGroup Xtreem White and an ASUS B550-A Gaming.


I've found so far that a limited clock speed with thermal headroom usually means you can add more power to increase clockspeed. That is, until you hit the limit of your silicon. So increasing power would be my first step in your case.


----------



## EastCoast

From my understanding, you first find the highest stable OC, then start increasing the power limit until the clock rate stops dropping and holds that frequency. Whatever that number is (between 1-15%) is the sweet spot for your card.
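That procedure is basically a small search loop. A sketch with a hypothetical helper (`run_benchmark` stands in for "set the slider, run your game, note the sustained core clock"):

```python
def find_power_sweet_spot(run_benchmark, max_slider: int = 15) -> int:
    """Return the smallest slider value past which clocks stop improving.

    run_benchmark(slider) -> sustained core clock (MHz) at that
    power-limit slider setting; purely a stand-in for manual testing.
    """
    best_clock = run_benchmark(0)
    for slider in range(1, max_slider + 1):
        clock = run_benchmark(slider)
        if clock <= best_clock:  # no further gain: previous step was the sweet spot
            return slider - 1
        best_clock = clock
    return max_slider

# Toy example: clocks rise with power until the card stops being power-limited at +8%
print(find_power_sweet_spot(lambda s: 2400 + 10 * min(s, 8)))  # 8
```

In practice each "iteration" is a benchmark run, so a coarse pass (0/5/10/15%) before narrowing down saves a lot of time.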


----------



## jimpsar

HeLeX63 said:


> I am at 2730MHz sustained frequency in games no problem. This is with a waterblock now. Some cards can do 2100 only and some can do 2150Mhz, depends on the card you have.


Can you please share your settings? Radeon and MPT? Thank you.


----------



## thomasck

LtMatt said:


> And, that does not impact the longevity of the GPU in anyway. The clocks run at a higher frequency due to the vertical blank timing used by your display. If you lower the refresh rate, this adjusts the timing. You may find that a lower refresh rate relaxes the timing enough to allow memory clock drop - if it bothers you that much.


If that does not affect the longevity of the card, then it's fine. Still, I will try to sort this out in some way, just because of the power draw and temperature. Gonna try using CDR (I think that's the name of the software), or a custom resolution in Adrenalin, or even setting a profile with a lower memory clock while not gaming.

I still have loads to do and try; I've just set up the card but work is not allowing me to use the computer. Waterblock, MPT, undervolt (it definitely does not need an overclock for my games, and benchmarking is out of my way for a long time).



Sent from my Pixel 2 XL using Tapatalk


----------



## LtMatt

thomasck said:


> If that does not affect the longevity of the card, then it's fine. Still, I will try to sort this out in some way, just because of the power draw and temperature. Gonna try using CDR (I think that's the name of the software), or a custom resolution in Adrenalin, or even setting a profile with a lower memory clock while not gaming.
> 
> I still have loads to do and try; I've just set up the card but work is not allowing me to use the computer. Waterblock, MPT, undervolt (it definitely does not need an overclock for my games, and benchmarking is out of my way for a long time).
> 
> 
> 
> Sent from my Pixel 2 XL using Tapatalk


Yes, if you reduce the refresh rate on the desktop to 120Hz that may allow the downclock. Also, using CRU (not CDR, they made The Witcher 3 lol) to reduce the refresh rate a few Hz may help.

I see the same behaviour with my 120Hz OLED BTW. If I set it to 60Hz it downclocks. Temps are 10C+ higher than they normally would be, but the GPU is not affected and still maintains its zero RPM mode despite idling in the low 50s. Most importantly, performance is not affected.

I don't worry about it, and I can't really understand why people get so hung up on something like this, but to each his own of course.


----------



## ptt1982

I went all in! Bought the waterblock for my Red Devil 6900 XT, and a Corsair custom loop set with a 360mm radiator. Also bought 6x Arctic P12 120mm fans for a push-pull configuration, and a superbly small pump/reservoir. Soft tubing, as I'm doing liquid cooling for the first time and this PC is in another room (not a showcase piece, function over form all the way).

Remounting the heatsink became so frustrating that I decided to make this a project whose results I'd have to be happy with! The whole set cost me around 590 USD. I can obviously use most of the stuff in the next build as well.

Exciting times! I’ll report back to you about how I managed to soak my PC and short it. Watch this space next week.


----------



## LtMatt

ptt1982 said:


> I went all in! Bought the waterblock for my Red Devil 6900xt, and a Corsair custom loop set with a 360mm radiator. Bought also 6x Arctic P12 120fans for push-pull configuration, and a superbly small pump/reservoir. Soft tubing, as I’m doing liquid cooling the first time and this PC is in another room (not a showcase piece, function over form all the way.)
> 
> Remounting the heatsink became so frustrating that I decided to make this a project in which I get results I have to be happy with! The whole set cost me around 590 US. Can obviously use most of the stuff in the next build as well.
> 
> Exciting times! I’ll report back to you about how I managed to soak my PC and short it. Watch this space next week.


You definitely did the right thing. The card and cooler would have been tainted forever if you didn't do that, I think.


----------



## Pedros

ptt1982 said:


> I went all in! Bought the waterblock for my Red Devil 6900xt, and a Corsair custom loop set with a 360mm radiator. Bought also 6x Arctic P12 120fans for push-pull configuration, and a superbly small pump/reservoir. Soft tubing, as I’m doing liquid cooling the first time and this PC is in another room (not a showcase piece, function over form all the way.)
> 
> Remounting the heatsink became so frustrating that I decided to make this a project in which I get results I have to be happy with! The whole set cost me around 590 US. Can obviously use most of the stuff in the next build as well.
> 
> Exciting times! I’ll report back to you about how I managed to soak my PC and short it. Watch this space next week.


What block did you get? I'm still waiting for my Alphacool


----------



## jonRock1992

ptt1982 said:


> I went all in! Bought the waterblock for my Red Devil 6900xt, and a Corsair custom loop set with a 360mm radiator. Bought also 6x Arctic P12 120fans for push-pull configuration, and a superbly small pump/reservoir. Soft tubing, as I’m doing liquid cooling the first time and this PC is in another room (not a showcase piece, function over form all the way.)
> 
> Remounting the heatsink became so frustrating that I decided to make this a project in which I get results I have to be happy with! The whole set cost me around 590 US. Can obviously use most of the stuff in the next build as well.
> 
> Exciting times! I’ll report back to you about how I managed to soak my PC and short it. Watch this space next week.


I'm jealous! I'm not willing to drop that kind of cash on just a cooling solution right now though lol. I was hoping there would be an AIO solution for the Red Devil Ultimate. The only options right now are a modded Kraken G12 or maybe the ID-Cooling Iceflow 240. I used a G12 and X62 on my last GPU (1080 Ti) with great success.


----------



## ptt1982

Pedros said:


> What block did you get? I'm still waiting for my Alphacool


I found the Alphacool Red Devil 6900 XT block on Amazon Japan (an auction listing, not Amazon directly). At 168€ including shipping, it was a bit more expensive than the 139€ Alphacool charges for the waterblock itself, but they said on their forum that the shipment arriving in 10-12 days only covers about 80% of the pre-orders, and with VAT + customs + shipping the Alphacool order would have cost a total of 280€, so I got a good deal locally.

If you want to order it from Amazon Japan (they very effortlessly send stuff to Europe, doesn't take too many days to arrive either, typically within five days), check this link, they still have some available as of writing this: Amazon Japan Alphacool Red Devil 6900xt Waterblock

You can change Amazon Japan's website language into English as well. It's available through the menu marked by the small Japanese flag icon on the upper right corner.


----------



## chispy

I have had a pre-order for this Alphacool Red Devil waterblock at Aquatuning for over a month now, but they keep changing the dates and delaying for weeks at a time. I have just received an email from them telling me that it _might or might not_ be in stock June 30 - July 2, 2021, but I'm not holding my breath, as they explicitly say in the email that this is a guess and not a fixed delivery date, and that it is highly possible this date could change again. I think I will just wait until July 2, 2021; if they do not deliver, I will cancel my order and mod my Corsair reference 6900 XT waterblock to fit this Red Devil 6900 XTXH Ultimate card I have here, as the air cooler is not enough for this card. Just tired of waiting after paying for it. A heads up for everyone.

Here is the forum post where they say on this next batch they will be able to cover only 80% of the orders and God only knows when there will be another batch available to customers.

Post #43 on forum.alphacool.com (6900 XT Red Devil Waterblock thread):

> It went from 5-6 weeks down to 12-13 days. I really hope it won't get delayed another time. Been waiting 4 months now.


----------



## lowrider_05

chispy said:


> I have had a pre-order for this Alphacool Red Devil waterblock at Aquatuning for over a month now, but they keep changing the dates and delaying for weeks at a time. I have just received an email from them telling me that it _might or might not_ be in stock June 30 - July 2, 2021, but I'm not holding my breath, as they explicitly say in the email that this is a guess and not a fixed delivery date, and that it is highly possible this date could change again. I think I will just wait until July 2, 2021; if they do not deliver, I will cancel my order and mod my Corsair reference 6900 XT waterblock to fit this Red Devil 6900 XTXH Ultimate card I have here, as the air cooler is not enough for this card. Just tired of waiting after paying for it. A heads up for everyone.
> 
> Post #43 on forum.alphacool.com (6900 XT Red Devil Waterblock thread): "It went from 5-6 weeks down to 12-13 days. I really hope it won't get delayed another time. Been waiting 4 months now."


I was in a similar boat with my ASRock 6900XT OC Formula; it's likely no waterblocks will be made for this card, so I bought a universal GPU water cooler from Freezemod on AliExpress and cool the VRAM and VRMs with slim aluminium heatsinks attached with thermally conductive tape. The VRAM gets a little warm, with up to 75C memory hotspot temp, but that's absolutely in spec, and the GPU stays absolutely cool at around 65C hotspot temp.


----------



## thomasck

Guys, what waterblock do you recommend for the reference card? I've been using Bykski blocks without issues, but I'd like to hear your experiences. I have the MSI 6900 XT; the cooler looks the same as AMD's, so I guess the PCB is the same too?

Also, what is the average GPU voltage? Mine seems to work at 1053mV under load.

Sent from my Pixel 2 XL using Tapatalk


----------



## rodac

marcoschaap said:


> That is very weird, I have exact the opposite, my 6900XT is even performing better and more stable then a 3080 I owned for a month. In comparison to my older card (2080Ti) it performs miles better. Does your VR application (like SteamVR or Oculus) automatically scale the HMD resolution or PPD settings?


Great to hear. I ended up panicking two weeks ago and RMAed the Powercolor Red Devil Ultimate card.
However, I am having second thoughts now; maybe I should have followed the advice from chispy and re-installed Windows.
I use my custom built PC for many applications other than gaming, and two weeks ago I was reluctant to re-install Windows; I have this bad feeling that I rushed. By now I have read about users complaining about the 6900 XT mostly because games would crash, but that is related to their power supply not being able to feed it well enough. I have also read about some issues with the RTX series, where some of the capacitors would prevent certain models from being overclocked. Maybe I should buy another 6900 XT - how about the Sapphire Radeon RX 6900 XT Nitro+ SE OC?


----------



## marcoschaap

rodac said:


> Great to hear. I ended up panicking two weeks ago and RMAed the Powercolor Red Devil Ultimate card.
> However, I am having second thoughts now; maybe I should have followed the advice from chispy and re-installed Windows.
> I use my custom built PC for many applications other than gaming, and two weeks ago I was reluctant to re-install Windows; I have this bad feeling that I rushed. By now I have read about users complaining about the 6900 XT mostly because games would crash, but that is related to their power supply not being able to feed it well enough. I have also read about some issues with the RTX series, where some of the capacitors would prevent certain models from being overclocked. Maybe I should buy another 6900 XT - how about the Sapphire Radeon RX 6900 XT Nitro+ SE OC?


I think the 6900XT is a fine-ass GPU; any specific reasons for choosing this over a non-SE model? The power spikes of the RX 6000 series GPUs can unsettle even top-tier PSUs and push them into OCP mode. Personally I run my 6900XT Nitro+ at a 400W PL with a Corsair RM750x and haven't had a problem with that, luckily.
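A back-of-the-envelope way to see why transients can trip OCP on paper (all numbers here are assumptions for illustration; real spike multipliers vary per card and per PSU's OCP behaviour):

```python
def psu_headroom_w(psu_watts: int, gpu_sustained_w: int,
                   spike_multiplier: float, rest_of_system_w: int) -> float:
    """Headroom left after a worst-case GPU transient (negative = OCP risk)."""
    peak_draw = gpu_sustained_w * spike_multiplier + rest_of_system_w
    return psu_watts - peak_draw

# Assumed figures: 400W PL, a hypothetical 2x transient, ~150W for the rest
print(psu_headroom_w(750, 400, 2.0, 150))  # -200.0
```

On paper that looks marginal, which is why some units trip and others (like the RM750x here) ride the spikes out; how aggressively OCP reacts to millisecond transients differs between PSUs.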

After I switched from Nvidia to AMD I was also reluctant to re-install my OS for the same reasons. I ended up doing it eventually to rule out any software-specific problems which could have set me off and made me return the GPU ;-) I really fell in love with the 6900XT over any Nvidia GPU because of the level of freedom (in power limit) and OC I get, in contrast to Nvidia, which has pretty much maxed out the GPU's power capabilities from the factory.

I'd advise a 6900XT rebuy ;-)


----------



## nyk20z3

I was in MC in Yonkers NY yesterday, just randomly dropping in as I always do. They are really charging $2600 for this, when it should hold maybe a $50-$100 premium over the Strix for being a triple-fan radiator card. It just gives you more of that sick feeling in your stomach about how bad things really are. Even the regular Red Devil 6900 XT was going for $2000, so MC is marking cards up now as well, when they sold for MSRP before.


----------



## rodac

marcoschaap said:


> I think that 6900XT is a fine ass GPU, any specifics on choosing this over a non SE model? The power spikes of the RX6000 series GPU can unsettle even the top-tier PSU's and bring them into OCP mode. Personally I run my 6900XT Nitro+ at a 400W PL with a Corsair RM750x, haven't had a problem with that luckily.
> 
> After I switched from Nvidia to AMD I was also reluctant to re-install my OS because of the same reasons, I ended up doing it eventually to rule out any software specific problems which could set me off and return the GPU ;-) I really fell in love with the 6900XT over any Nvidia GPU because of the level of freedom (in power level) and OC I get, in contrary to Nvidia which has pretty much maxed out the GPU power capabilities from the factory.
> 
> I'd advise a 6900XT rebuy ;-)


Thanks Marco, very interested to have your input. The issues I read about earlier related to the power supply were only specific to the Nvidia cards; I think I did not make this clear in my previous post. Now the new 3080 Ti is out, so I also have the option to pre-order, but I am not feeling very good about it, although that card is a replacement for the 3090s, may be produced in large enough quantities, and, as we know, cannot be used for mining.
I am not really sure about ray tracing; the improvement is really minor, something that was shown in one of the videos posted by Linus Tech Tips.
So now I am tempted to purchase the model that @*nyk20z3* just posted about, the fastest range-topping watercooled Sapphire. What is holding me up is the pump noise at idle when I am not gaming, and the idea that I am using up some of the 50K hours that the Asetek AIO pump can handle throughout its life - most of that time will be spent not gaming, but when I do, I can have this surge of performance.
My PSU is rated at 1000W; it is the Corsair HX1000i, which should be good enough, and my motherboard and CPU should handle this. I have a Gigabyte X299 Gaming 9 with an i9 9700X (Skylake); this processor has 10 cores up to 4.2GHz (stock). The motherboard was engineered for gaming, so it should be good.
Yes, that was the problem, the fact that you need to re-install Windows; DDU may be a good tool but may not be enough to get the job done.
That is a lot of money though, at the equivalent of $2600 in UK pounds, and I have this AIO fear, although I already have an NZXT X62 AIO that has been running fault-free for 3 years now, no issues whatsoever.


----------



## rodac

LtMatt said:


> Are you outside the return window? I agree it does not seem the best as a few have had similar problems. Based on size alone it should be one of the best in theory, does sound like a mounting issue.
> 
> In my opinion the XFX Merc is by far the best air cooling solution for the 6900 XT, but i believe even that would struggle with 1.2v at full load as with an unlocked power limit (400W) and 1.175v in Ray Tracing titles it requires 80% fan speed to stay below 110c junction. You can dial that down to 75-70% if Ray Tracing is disabled.
> 
> For 24/7 i just run the card at 2310-2410Mhz in Radeon Software (2350Mhz+ actual in game core clock) and 1.050v set in Radeon Software, which is 0.950v-0.975v actual in game voltage as displayed by HWINFO64. With these settings i can run 25% fan speed at just 1200RPM and keep temps at or around 100c junction.
> 
> I keep being tempted by the Sapphire 6900 XTXU, but I know it's almost certainly a bad idea. Can anyone talk me out of it? ▷ Sapphire Radeon RX 6900 XTU ULTIMATE Toxic Wa… | OcUK (overclockers.co.uk)


Very tempted to buy the Toxic from Overclockers just like you, but then the 3080 Ti is out soon; really tough to choose. I like the 16GB of VRAM more than 12GB, but then that would not be a special card like the Toxic, just a standard card ;-)


----------



## rodac

weleh said:


> Finally got around changing fans on the Toxic and sorting stuff on the PC...
> NF A12x25's are so good compared to the stock fans on this card...


Why did you change the fans? Quite tempted to buy the Sapphire Toxic; is the pump noise bearable at idle when not gaming?


----------



## marcoschaap

rodac said:


> Thanks Marco, very interested to have your input. The issues I read about earlier related to the power supply were only specific to the Nvidia cards; I think I did not make this clear in my previous post. Now the new 3080 Ti is out, so I also have the option to pre-order, but I am not feeling very good about it, although that card is a replacement for the 3090s, may be produced in large enough quantities, and, as we know, cannot be used for mining.
> I am not really sure about ray tracing; the improvement is really minor, something that was shown in one of the videos posted by Linus Tech Tips.
> So now I am tempted to purchase the model that @*nyk20z3* just posted about, the fastest range-topping watercooled Sapphire. What is holding me up is the pump noise at idle when I am not gaming, and the idea that I am using up some of the 50K hours that the Asetek AIO pump can handle throughout its life - most of that time will be spent not gaming, but when I do, I can have this surge of performance.
> My PSU is rated at 1000W; it is the Corsair HX1000i, which should be good enough, and my motherboard and CPU should handle this. I have a Gigabyte X299 Gaming 9 with an i9 9700X (Skylake); this processor has 10 cores up to 4.2GHz (stock). The motherboard was engineered for gaming, so it should be good.
> Yes, that was the problem, the fact that you need to re-install Windows; DDU may be a good tool but may not be enough to get the job done.
> That is a lot of money though, at the equivalent of $2600 in UK pounds, and I have this AIO fear, although I already have an NZXT X62 AIO that has been running fault-free for 3 years now, no issues whatsoever.


I was unaware of any PSU problems with RTX 3000 series GPUs; I thought they only had issues with the POSCAPs on the back of the card, and they obviously fixed it (or you still have to undervolt/underclock the damn thing). But most people considering an RMA over that will be reluctant, because you're probably never gonna get a replacement GPU.

Regarding ray tracing, there are a few things to consider:

- Fact: at this moment Nvidia's implementation of hardware- and software-related features for RTX (or DirectX Ray Tracing) is more mature.
- AMD's implementation still has to launch, and given a few differences in hardware design I'd guess Nvidia will still hold the advantage for a little while.
- On the contrary, hardcore rasterization performance for RX 6000 vs RTX 3000 will probably be similar, along the lines where the 6900 XT has an advantage (from my own experience) over a 3080, the 6800 XT over a 3070 Ti, etc. I obviously don't know where the 3080 Ti will land, but that leaves me very curious, because its specs are almost on par with the 3090 except for VRAM size & bandwidth.

I think you ultimately have to consider for yourself where the use of some Nvidia-exclusive (or more mature, in the case of RTX) features fits in your usage scenarios. For me the choice was simple because of VR (higher rasterization performance, more VRAM, OC capability).

Regarding your hardware, I don't see any problems with that. DDU is useful when switching from AMD to Nvidia, as it saves you an OS reinstall, but going the other way, removing Nvidia's software is a pain in the ass.

I've run multiple AIOs before switching to a full custom loop a month ago, and never had any problems. The Asetek pumps are obviously not dead silent, but if you mount your rad correctly (and not like every moron on Reddit does it), you'll probably have no issues. I had 3 Corsair AIOs, and an NZXT X62 & X72, all without issues.

It is really a lot of money, but with this pandemic mess I couldn't go on holiday, worked more hours (I'm in IT, so obviously understaffed and over-asked), drove less to work, etc. Ultimately the choice is yours to make ;-)

EDIT: Just read an article on this. If this is true, I can downgrade to a 6800XT lol


----------



## LtMatt

rodac said:


> Very tempted to buy the Toxic from Overclockers just like you but then the 3080 TI is out soon, really tough to choose, I like the VRAM 16 more than the VRAM 12 but then that would not be special card like the toxic, just a standard card ;-)


Go with the 6900 XT, you know it makes sense.

Regarding the Toxic XTU, I managed to resist in the end; I'm gonna keep my Merc 6900 XT, as it's a decent sample, and save the money.

Instead I bought a 6700 XT Merc to play around with. It's literally the half brother of the 6900 XT, so this has satisfied my urge to have a new play toy.


----------



## rodac

LtMatt said:


> Go with the 6900 XT, you know it makes sense.
> 
> Regarding the Toxic XTU, I managed to resist in the end, gonna keep my Merc 6900 XT as its a decent sample and save the money.
> 
> Instead I bought a 6700 XT Merc to play around with. It's literally the half brother of the 6900 XT, so this has satisfied my urge to have a new play toy.


Thanks for sharing your thoughts. It makes sense if you already have a card of the same generation. As for me, I purchased a Red Devil Ultimate first but experienced driver issues, although I had uninstalled the drivers with DDU; others in this community explained that a full Windows 10 re-install is actually the only way to go when changing over from Nvidia (1080 Ti) to AMD.
The downside is the price... of course.


----------



## weleh

rodac said:


> Why did you change the fans, quite tempted to buy the Sapphire Toxic, is the pump noise bearable at idle when not gaming ?


Because Noctua's NF A12x25's work much better than the crap the Toxic ships with.

Pump noise is fine for me, but you can definitely hear it. Nothing crazy.


----------



## rodac

weleh said:


> Because Noctua's NF A12x25's work much better than the crap the Toxic ships with.
> 
> Pump noise is fine for me, but you can definitely hear it. Nothing crazy.


Thanks for your feedback. It makes sense; I have purchased Noctuas before, and instantly recognized them because of the colour scheme. They are expensive but high quality.

Another question if I may (really tempted to push the order button now): does the pump ever stop running at idle when there is no load? I read in one of the few reviews that all four fans do sometimes stop when there is no load, but I expect the pump to be always on at the same flow rate.


----------



## LtMatt

weleh said:


> Because Noctua's NF A12x25's work much better than the crap the Toxic ships with.
> 
> Pump noise is fine for me, but you can definitely hear it. Nothing crazy.


I wonder if it’s an Asetek pump.


----------



## rodac

LtMatt said:


> I wonder if it’s an Asetek pump.


Yes it is an Asetek, see:

High-Performing GPU Cooling from Asetek - the Leader in Liquid Cooling - Asetek
Asetek GPU coolers are designed for greater overclocking potential to boost performance over stock air-cooling solutions, and to improve acoustics for quieter operation.
www.asetek.com


----------



## ptt1982

I went even further nuts and went all in the second time!

-Upgraded my B450 motherboard to X570 Aorus Ultra (got this from my brother from Europe for 90€)
-Upgraded my Crucial 2666 CL 22 16GB Ram to G.Skill 3600 CL18 32GB
-Bought 240mm Hydro X radiator as the second radiator
-Bought AM4 Hydro X CPU waterblock 
-Bought a Gen3 1TB drive. 

Waiting for everything to arrive and accompany the waterblocked 6900XT Red Devil. I had to do that, because the case would have had a water hazard waiting if I had a huge CPU heatsink and crammed and loose soft tubes hanging there. Maybe that helps with the GPU temps even further as well! 

I know there’s zero benefit from more and faster RAM, but I thought what the hell, I don’t want a weak link in my PC now that I’ve got all these fancy parts.

Now I need to figure out what to do with my Scythe Ninja 5 cooler, multiple extra fans, B450 mobo and 16GB of RAM... Maybe I'll build a second computer for my wife for Steam gaming, but first we need to buy a house to get more space, so I'm wondering whether I should sell the parts cheap or keep them... First world problems.


----------



## weleh

rodac said:


> Thanks for your feed back. It makes sense, I purchased Noctuas before, instantly recognized them because of the colour scheme, they are expensive but high quality.
> 
> Another question if I may - really tempted to push the order button now- does the pump ever stop running at idle when there is no load ? I read in one of the few reviews, that all the 4 fans do sometimes stop when there is no load but I expect the pump to be always on at the same flow rate.


Pump runs at 100% 24/7, confirmed by a Sapphire rep on their Discord with whom I spoke.

The rest of the fans stop since this card has fan stop mode under 50C.
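For anyone curious what that zero-RPM behaviour looks like as logic, here's a rough sketch. The 50C figure is the one quoted above; the 55C restart point and the hysteresis band are my assumptions, since real cards add a margin like this so the fans don't flap on and off right at the threshold.

```python
# Sketch of a zero-RPM ("fan stop") policy: fans off below a threshold,
# with an assumed hysteresis band. 50C is the thread's figure; the 55C
# restart point is an assumption for illustration.
STOP_BELOW_C = 50
START_ABOVE_C = 55

def fans_on(temp_c, currently_on):
    """Return whether the fans should spin at the given temperature."""
    if currently_on:
        # Keep spinning until the card cools below the stop threshold.
        return temp_c >= STOP_BELOW_C
    # Fans are off: don't restart until clearly above the band.
    return temp_c >= START_ABOVE_C

print(fans_on(45, currently_on=True))   # False: cooled off, fans stop
print(fans_on(52, currently_on=False))  # False: inside the band, stay off
print(fans_on(56, currently_on=False))  # True: hot enough to restart
```

The pump, by contrast, never enters this logic at all — it just runs at a fixed speed.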


----------



## rodac

weleh said:


> Pump runs at 100% 24/7 confirmed by a Sapphire Rep on their discord with whom I spoke to.
> 
> The rest of the fans stop since this card has fan stop mode under 50C.


Thanks @weleh for sharing this info, much appreciated. This is not an issue if the machine is mostly used for gaming, but if gaming accounts for only 10% of use, then I am wearing out the card even when not gaming, which defeats the purpose of that amazing card. So I would have to plug it in only when gaming  Maybe there is a way to turn off a specific PCIe slot and have a low-end graphics card power the screen when not gaming, as I do not have a built-in video output on my motherboard. Otherwise, I have to accept the idea that the life of the product is 5 years, which is probably more reasonable; in 5 years' time this card will no longer be competitive, but that is the argument people who are against AIOs make. I still think it is worth it given the superior cooling; I expect the FPS to be more stable than on air cooling. I did notice this issue when I had the air-cooled Powercolor: spinning at full rev, the FPS was sometimes hesitant until the cooling could catch up, so it behaved maybe like a turbo engine with a bit of lag. Not sure if what I am writing makes any sense ;-)
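To put rough numbers on that pump-hours worry, here's some back-of-the-envelope arithmetic, using the 50K-hour figure quoted earlier in the thread (not a spec I have verified) and an assumed 4 hours of gaming per day:

```python
# Pump lifetime estimate. RATED_HOURS is the figure quoted in this
# thread; the 4 h/day gaming figure is an assumption for illustration.
RATED_HOURS = 50_000

always_on_years = RATED_HOURS / (24 * 365)   # pump fixed at 100%, 24/7
gaming_only_years = RATED_HOURS / (4 * 365)  # if it only ran while gaming

print(round(always_on_years, 1))    # 5.7
print(round(gaming_only_years, 1))  # 34.2
```

So even with the pump at 100% around the clock, the rating works out to roughly that 5-year lifespan anyway.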


----------



## skline00

thomasck, I only have the RX6800 with the EK waterblock and backplate. The specs say this block also fits the reference RX6900XT.

When I play MSFS 2020 for 2 hours solid, the hotspot never exceeds 65C while the average temp is in the mid-to-high 40s. That is with all six fans at a static 1100 RPM.

I would expect the 6900XT would run a bit hotter BUT the watercooling does help temps.


----------



## weleh

rodac said:


> Thanks @weleh for sharing this info, much appreciated. This is not an issue if the machine is mostly used for gaming but if gaming accounts to only 10% of use, then I am using up the card even when not gaming which defeats the purpose of that amazing card. So I would have to plug it in only when gaming  Maybe there is a way to turn off a specific PCI lane and have a low end graphics card power the screen when not gaming, as I do not have a built-in video output in my motherboard. Otherwise ,I have to accept the idea that the life of the product is 5 years which is probably more reasonable. In 5 years time, this card will no longer be competitive, but that is the argument that people who are against AIO have. Still think that this is worth it giving the superior cooling, I expect that the FPS is more stable than on air cooled. I did notice this issue when I had the air cooled Powercolor, spinning at full rev, the FPS was sometimes hesitant until the cooling could catch up, so it behaved maybe like a turbo engine with a bit of lag, not sure if what I am writing makes any sense ;-)


Well to be fair, if the pump stopped, temps would get out of control, so it's a necessity, and running at a fixed 100% is better than having it switch speeds, which actually causes more harm than good.

I guess the downside is longevity, but if it really comes down to that, I guess you can always swap the AIO, get an air cooler for the card, or even a block.
The rad on the Toxic doesn't even look that good, unfortunately.


----------



## geriatricpollywog

skline00 said:


> thomasck, I only have the RX6800 with the EK waterblock and backplate. The specs say this block also fits the reference RX6900XT.
> 
> When I play MSFS2020 for 2 hours solid the hotspot never exceeds 65C while the average temps is in the mid to higher 40s. I would expect the 6900XT would run a bit hotter BUT the watercooling does help temps.


How much did the sustained core speed increase on the RX 6800 going from air to liquid?


----------



## ZealotKi11er

0451 said:


> How much did the sustained core speed increase on the RX 6800 going from air to liquid?


Most likely not much, unless it was temperature throttling. More likely just more stable clocks.


----------



## rodac

weleh said:


> Well to be fair, if the pump stopped temps would get out of control so it's a necessity and running at fixed 100% is better than having it switch speeds which actually cause more harm than good.
> 
> I guess the downside is longevity but if it really comes down to that, I guess you can always switch the AIO or get an air cooler for the card or even a block.
> The rad on the toxic doesn't even look that good unfortunately.


Sure, it makes sense; that must be possible to repair, but then you would need to open up the card, remove the AIO and reconnect the power, so the replacement AIO would have to meet the exact same specifications.
Difficult choice, but I cannot help thinking that the AIO offers much better performance and, most importantly, more stable frame rates, and that on its own really makes me think this is the best choice you could have made.


----------



## thomasck

@skline00 hello fellow Radeon VII user! I remember you from that forum 
Thanks for the info, I will take a look at the EK block, but I am also considering the Bykski one. 
Sadly I didn't have much time to play around with the card; I'm literally just playing some games when I have time. MPT looks very different from what I'm used to with the Radeon VII.

Sent from my Pixel 2 XL using Tapatalk


----------



## By-Tor

I would like to upgrade my Vega 64 to a newer card, but those prices for the 6000 series are ******ed and out of reach of a lot of people. It seems like gaming is now for the rich only...


----------



## bulletoftime

Hi fellow RX 6900 XT owners... I have a slightly unusual question: did you ever run the 3DMark DirectX Raytracing feature test?

With 21.4.1 drivers, I see a very low score in the 3DMark DirectX Raytracing feature test: *27.41 FPS*. I see something abnormal in the monitoring logs: the GPU memory clock frequency seems to drop quite a lot and is very unstable. I only see this kind of behavior in the DirectX Raytracing feature test.

The issue is fixed if I roll back to 21.2.3. With 21.2.3 drivers I see *32.18 FPS*.

*All drivers after 21.2.3 seem to have the problem!*

Have any of you seen this issue?

My PC specs:
*GPU:* Asus TUF RX 6900 XT OC (Power Slider: +15%, Max Frequency: 2600 MHZ).
*CPU:* AMD Ryzen 7 3700X (PBO ON)
*Motherboard:* Asus Strix B550-F Gaming (WiFi)
*BIOS Version:* 1804
*RAM:* 32GB G.Skill Trident Z Neo 3800MHZ CL16 (2x16GB)
*PSU:* Seasonic FOCUS GX-1000 1000W 80+ GOLD FULLY MODULAR
*Operating System & Version:* WINDOWS 10 PRO 19043
*GPU Drivers:* Adrenalin Driver Version: 21.4.1


----------



## Oversemper

By-Tor said:


> I would like to upgrade my Vega 64 to a newer card, but those prices for the 6000 series are ******ed and out of reach of a lot of people. It seems like gaming is now for the rich only...


You can sell your Vega to miners to start with. I had a Radeon VII, sold it, added $200 and got myself a Liquid Devil 6900 XT. That's because Vegas have a much higher hashrate for mining. In normal times I would never manage to upgrade to a new-gen flagship GPU for a $200 difference; well, maybe in the early 2010s when GPUs were slow and cheap.


----------



## skline00

0451 said:


> How much did the sustained core speed increase on the RX 6800 going from air to liquid?


I never measured when the RX 6800 was on air. Suffice it to say that it runs much cooler now.


----------



## rodac

weleh said:


> Because Noctua's NF A12x25's work much better than the crap the Toxic ships with.
> 
> Pump noise is fine for me, but you can definitely hear it. Nothing crazy.


Hi @weleh , I went for it, and now I will probably want to replace those fans as well; you purchased 3 Noctua NF-A12x25s.
How do you wire this? I know that it may sound basic.
Are you sure that you are not losing out in terms of cooling compared with the stock fans? Is it the noise reduction that motivated you?
Thanks


----------



## airisom2

Managed to snag an OC Formula. I've had good success with it so far: 2880/2160/+15 in MSI AB, ~2850/~2150 in-game clocks at 1.2v, ~309w power consumption at 55C, 100% fan. But this is at 1080p, so if this card acts like NV cards, power consumption should jump when I get back on 4K (my 4K display broke lol, waiting on an LG C1). Really wanting to put some more volts into it, because I know this card will hit 2900 with more power. Increasing the power limit in MPT didn't really open it up, but it did smooth out the frametimes and give me slightly more consistent clocks. I'm getting about 6-7% more performance than stock. XTXH is pretty nice.

From the PCBs I've seen, the OC Formula seems to have the best one at first glance, better than the Toxic. Fun fact, the Nitro+ SE has the Toxic PCB, and the VRM/VRAM heatsink is separate from the main heatsink, making it a good candidate for an aio swap if that heatsink clears.

I'm going to take the shroud off soon and see if I can rig up some A12x25s I have lying around. The fans on this card ramp down over time for some reason despite setting it to 100%.


----------



## LtMatt

rodac said:


> Hi @weleh , I went for it, now I will probably want to replace those fans as well, you purchased 3 Noctua's NF A12x25's.
> How do you wire this, I know that it may sounds basic.
> Are you sure that you are not losing out in terms of cooling compared with the stock fans. Is this the noise reduction that motivated you ?
> Thanks


Nice. Can't wait to see how well it overclocks and what your Timespy/Firestrike scores are.


----------



## cmhacks

airisom2 said:


> Managed to snag an OC Formula. I've had good success with it so far. 2880/2160/+15 in MSIAB, ~2850/~2150 ingame clocks at 1.2v. ~309w power consumption at 55C 100% fan. But this is at 1080p, so if this card acts like NV cards, power consumption should jump when I get back on 4K (it broke lol, waiting on LG C1). Really wanting to put some more volts into it because I know this card will hit 2900 with mower power. Increasing the power limit in MPT didn't really open it up, but it did smooth out the frametimes and give me slightly more consistent clocks. I'm getting about 6-7% more performance than stock. XTXH is pretty nice.
> 
> From the PCBs I've seen, the OC Formula seems to have the best one at first glance, better than the Toxic. Fun fact, the Nitro+ SE has the Toxic PCB, and the VRM/VRAM heatsink is separate from the main heatsink, making it a good candidate for an aio swap if that heatsink clears.
> 
> I'm going to take the shroud off soon and see if I can rig up some A12x25s I have lying around. The fans on this card ramp down over time for some reason despite setting it to 100%.



How about the Gigabyte Aorus Radeon RX 6900 XT Master 16G?

Inspecting it in GPU-Z, I have seen that it has XTX silicon, without the H. If I'm not wrong, the XTXH silicon is a renamed version of the XTX, am I right?

The AMD drivers automatically set 2564 MHz for the GPU and 2150 MHz for the RAM. Do you think it is a good candidate for overclocking at the level you describe with your card?

Greetings and thank you very much for your contributions!


----------



## airisom2

I can't find any PCB shots of the Aorus Master anywhere, so I can't comment on it. XTXH has a 4 GHz clock speed limit versus 3 GHz, it runs at 1.2v vs 1.16v (correct me if I'm wrong, kinda new to AMD cards), and the chips are also binned at the fab. From what I've seen, a golden-sample XTX is your average XTXH. They draw more power, but they can also take the power and scale higher with more voltage; though I'm just speculating with no hard evidence.

I managed to get it to pull 420w in Superposition Extreme and my overclock failed lol. It's sitting around the low-mid 2700s in that one. RDR2 seems pretty happy in the low-mid 2800s, but that's at 1080p and 300w. Trying to increase voltage in MPT drastically lowers the core clock to something unusable, so if I want to do that, I'll have to solder a header on the backside of the card and get an Elmor EVC2.


----------



## D1g1talEntr0py

cmhacks said:


> How about the gigabyte aorus radeon rx 6900 xt master 16g?
> 
> Inspecting the gpuz, I have seen that it has an XTX silicon without the h. If I'm not wrong, the silicon xtxh is a renown of the xtx, am I right?
> 
> The AMD drivers automatically set 2564 MHz for the GPU and 2150 MHz for the RAM. Do you think it is a good candidate for overclocking at the level you describe with your card?
> 
> Greetings and thank you very much for your contributions!


I have the Aorus RX 6900 XT Master and after I replaced the thermal pads and paste, I can get it to min: 2500MHz max: 2675MHz mem: 2150MHz with an undervolt of 1075mv. Fan is set to max of 75%. I wouldn't say it is a great overclocking card, but I might just have an average sample. But I'm very happy with it as the temperatures are great (after the new pads and paste) and in-game performance is outstanding. I might be able to squeeze some more performance out of it if I allowed the fans to spin faster, but I don't want to have a jet engine next to my ear while playing games.


----------



## ZealotKi11er

D1g1talEntr0py said:


> I have the Aorus RX 6900 XT Master and after I replaced the thermal pads and paste, I can get it to min: 2500MHz max: 2675MHz mem: 2150MHz with an undervolt of 1075mv. Fan is set to max of 75%. I wouldn't say it is a great overclocking card, but I might just have an average sample. But I'm very happy with it as the temperatures are great (after the new pads and paste) and in-game performance is outstanding. I might be able to squeeze some more performance out of it if I allowed the fans to spin faster, but I don't want to have a jet engine next to my ear while playing games.


How did you set the undervolt?


----------



## D1g1talEntr0py

ZealotKi11er said:


> How did you set the undervolt?


I just set the slider in the Radeon software to 1075mv. Any lower and I get crashes in the second TimeSpy benchmark. From what I have read, it is technically just an offset and not a voltage limit. Not sure why they set it up this way as it is a little confusing.


----------



## weleh

rodac said:


> Hi @weleh , I went for it, now I will probably want to replace those fans as well, you purchased 3 Noctua's NF A12x25's.
> How do you wire this, I know that it may sounds basic.
> Are you sure that you are not losing out in terms of cooling compared with the stock fans. Is this the noise reduction that motivated you ?
> Thanks


I connected them to the motherboard, because running 3 NF-A12x25s at a fixed 1400 RPM makes practically zero noise over the other case fans and is plenty of cooling for daily usage. 
However, you can buy a mini-PWM to PWM cable, connect them to the GPU directly, and let the card control the fans.

The noise reduction is tremendous because the Toxic fans past 40% get really loud and their cooling isn't exactly spectacular. 


10/10 mod, would always recommend.


----------



## ZealotKi11er

D1g1talEntr0py said:


> I just set the slider in the Radeon software to 1075mv. Any lower and I get crashes in the second TimeSpy benchmark. From what I have read, it is technically just an offset and not a voltage limit. Not sure why they set it up this way as it is a little confusing.


If you set 1075 mV using the slider, you are not really undervolting at all. It does bring the curve down in the lower clock range, but because you set higher clock speeds, those points will still use the rest of the voltage range.


----------



## nyk20z3

If anyone is aware of a block that fits the 6900 XT Strix, let me know. I know it came with an AIO, but I am still tempted to put a block on for even better cooling.


----------



## weleh

nyk20z3 said:


> If any one is aware of a Block that fits the 6900XT Strix let me know. I know it came with an aio but i am still tempted to put a block on for even better cooling.


Absolutely not worth it unless you're already on a custom loop and just want to tidy things up.


----------



## D1g1talEntr0py

ZealotKi11er said:


> If you did 1075 mV using the slider you are not really undervolting at all. It does bring the curve down for the lower range clk but because you set higher clk speeds those will still use the rest of the voltage.


So are you saying it isn't worth messing with this then when overclocking? I know it isn't a true "undervolt" but I was able to get better 3DMark results with a lower voltage using the slider.


----------



## thomasck

weleh said:


> Absolutely not worth it unless you're already on a custom loop and just want to tidy things up.


Would you mind elaborating? I am just about to get a block for a reference 6900 XT.


----------



## blackzaru

thomasck said:


> Would you mind to elaborate? I am just about to get a block for a reference 6900xt.


It doesn't concern you. He was stating that replacing the cooling solution on a hybrid card (in this case, a card with an integrated 240mm AIO) with a custom loop is not useful unless you already have the custom loop ready for it.


----------



## blackzaru

D1g1talEntr0py said:


> So are you saying it isn't worth messing with this then when overclocking? I know it isn't a true "undervolt" but I was able to get better 3DMark results with a lower voltage using the slider.


It can be worth it.

What he is saying, though, is that the "max voltage" isn't working like you think it does (like an undervolt on an Nvidia card in Afterburner, for example). It's not a hard stop at X voltage with the same frequency curve. Instead, it's an offset of the entire frequency curve, so that the peak voltage ends up at what you set.

So, the stock max voltage on a 6900 XT is 1.175V. If you set 1.075V as the max voltage, every single point of the voltage/frequency curve will be lowered by 0.100V. Say your max clock is now 2500MHz at 1.075V with the offset; then under a lighter load, a state that previously ran at 2000MHz and 1.000V stock will now try to run at 0.900V.

Is that a bit clearer?
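To make the offset behaviour concrete, here's a tiny worked example. The curve points are made-up illustrative values, not a real 6900 XT voltage/frequency table; only the 1175 mV stock ceiling is from the discussion above.

```python
# The slider shifts the whole (clock, voltage) curve down by a fixed
# offset rather than clamping voltage at a ceiling. Curve values below
# are illustrative assumptions.
STOCK_MAX_MV = 1175

def apply_slider(curve, slider_mv):
    """Shift every (MHz, mV) point down by the slider's offset."""
    offset = STOCK_MAX_MV - slider_mv
    return [(mhz, mv - offset) for mhz, mv in curve]

stock_curve = [(2000, 1000), (2250, 1100), (2500, 1175)]

print(apply_slider(stock_curve, slider_mv=1075))
# [(2000, 900), (2250, 1000), (2500, 1075)]
```

Note that the 2000 MHz state that ran at 1000 mV stock now requests only 900 mV, which is why an aggressive "undervolt" on this slider can crash even at light loads.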


----------



## D1g1talEntr0py

blackzaru said:


> It can be worth it.
> 
> What he is saying though, is that the "max voltage" ain't working like you think it does (like an undervolt of a Nvidia card in afterburner, for example). It's not a straight stop at X voltage with the same frequency curve. Instead, it's an offset of the entire frequency curve to make sure the max voltage is what you set.
> 
> So, the stock max voltage on a 6900XT is 1.175V. If you put 1.075 as max voltage, every single point of the voltage/frequency curve will be lowered by 0.100V. So, if, let's say, your max clock is 2500Mhz at 1.075V, with your offset, if the load is lower and your clock that was previously running at 2000Mhz 1.000V stock, for that lower load, will now be trying to run it at 0.900V.
> 
> Is that a bit clearer?


Yes, much clearer. Thank you! I had a feeling it worked something like that, but I wasn't sure. AMD should make that clearer in Wattman.


----------



## Bart

I decided to repaste my Alphacool GPU block using Kingpin KPX instead of whatever stuff came with the block. This improved temps quite a bit (and this was after some benchmarking to warm it up too, these aren't 'cold' numbers):


----------



## Nighthog

You can mess around with the voltage curve by editing your minimum voltage, but be careful which voltages you choose, so that you can at least still boot into Windows if you're using MPT.

By lowering or raising the stock limit, you edit the whole voltage range for your frequency curve; it changes the voltage response. I tested some undervolts, and it altered the response across the whole voltage range when I changed it around.

Sadly, only the XTXH allows the increased 1.200V maximum voltage, but MPT lets you change the minimum freely if you find it useful for OC/undervolting.

And I saw in a video by Buildzoid that the "Elmor EVC2" works to give you a voltage offset, if you are into modding these cards and know how to use those things.


----------



## J7SC

I picked up a 6900 XT (Gigabyte Gaming OC, dual BIOS, 3x 8-pin PCIe) yesterday. It's for a dual-mobo build geared towards both work and play, with the other system having a water-cooled 3090 Strix. It's my first AMD card since the 290X Lightning, and I'm still feeling my way around. A couple of quick overclock attempts later, I seem to be getting closer to a dial-in... so far, best results are with undervolting to 1.1V from 1.175V.

The card seems to have more headroom on the power limit once properly cooled, though, and I am wondering if any of you have successfully managed a higher, verified power limit with the MorePowerTool. Alternatively, I may try to hunt down a higher-PL 3rd-party BIOS. Any tips on either are appreciated  

BTW, I ordered a custom water block for the 6900 XT, as temps on the air cooler are 'ok', but only with the fans set to blasting 'mega loud'... here's an early result:


----------



## ptt1982

Update:

Did my first custom loop, 240mm + 360mm rads, including CPU and GPU: the Red Devil junction temp dropped from thermally throttled at 110C to 60-70C in Time Spy, with MPT at 350W/375W and overclocked to the max. 

Mission accomplished!


----------



## Henry Owens

How do your watercooled reference temps look, guys? I remounted with just a thin layer of Kryonaut. Temps appear better, but junction appears maybe higher, I'm not sure. 
Overclocked at 350W in MPT, temps in Witcher 3 are around 46C, with a 60C junction max.


----------



## ptt1982

Henry Owens said:


> How do your guys watercooled reference temps look. I reounted with just a thin layer of kryonaught. Temps appear better but junction appears maybe higher I'm not sure.
> Overclocked 350mpt temps in Witcher 3
> Around 46c 60c junction max


I used the paste that came with Alphacool's block on the basic Red Devil, and in-game at 4K without Vsync, games max out at around 65C junction, core at 45-55C, typically hovering around 48C/60C. With Vsync on at 4K60, temps are around 35-45C/50-55C. 

Anyone here who was able to get 1.2v stable on non-XTXH AIBs? 

There's more performance left in the card, but it's severely voltage limited. Power doesn't scale with performance after 400W. I need to find a way to get a better BIOS, or somehow increase voltage (while staying stable) to run higher clocks.


----------



## jonRock1992

ptt1982 said:


> Update:
> 
> Did (my first) custom Loop 240mm+360mm rads incl. CPU/GPU: Red Devil junction temp dropped from thermally throttled at 110C to 60-70C on Time Spy under MPT 350W/375W and OC to the max.
> 
> Mission accomplished!


Good to hear! I'm in the same boat with my red devil ultimate. Thermal throttles at 110C with an overclock. Might have to get one of those water blocks sometime.


----------



## Henry Owens

ptt1982 said:


> Used the paste that came with Alphacool’s block on the basic Red Devil and in-game at 4K without Vsync games max at around 65C junction, core at 45-55C. Typically hovering around 48C/60C. Vsync on at 4K60 temps are around 35-45C/50-55C.
> 
> Anyone here who was able to get 1.2v stable on non XH AIBs?
> 
> There’s more performance left in the card but it’s severely voltage limited. Power doesn’t scale with performance after 400W. I need to find a way to get a better bios or somehow increase voltage (while staying stable) to run higher clocks.


How did you apply the paste? The EKWB manual says to do an X on the die, but reading deeper on the EK website, it says they recommend spreading the Thermal Grizzly.


----------



## rodac

LtMatt said:


> Nice. Can't wait to see how well it overclocks and what your Timespy/Firestrike scores are.





weleh said:


> I connected them to the motherboard because running 3 NF A12x25's at 1400 fixed RPM makes pratically 0 noise over other case fans and is plenty of cooling for daily usage.
> However, you can buy a mini PWM to PWM cable and connect them to the GPU directly and let the card control the fans.
> The noise reduction is tremendous because the Toxic fans past 40% get really loud and their cooling isn't exactly spectacular.
> 10/10 mod, would always recommend.


Here is the new GPU ->
*Sapphire Radeon RX 6900XT Toxic Extreme*

*My first impression*
This looks fantastic. Am I happy? Yes, definitely; the performance is unquestionably higher than standard 6900 XT GPUs. However, at first I did not get to see all of this, because I had a PSU issue.
By now, Sunday June 6th 2021, this issue no longer exists with the new PSU (be quiet! Dark Power Pro 12 1200W 80 Plus Titanium Modular), and I am able to utilize my new Sapphire Toxic Extreme GPU to its full capacity on my X299 motherboard.
I would probably achieve higher performance with a new AMD motherboard; that is probably going to be the next step.



*Noise first*
Not as loud as I expected at all; the pump noise is very low. Maybe that is because I was able to mount the radiator above the card? I read all the reviews, and quite often the reviewers moaned about the pump noise, but it really is acceptable in my view. This is very important for me, because my rig is open (I do not even have the panels on, to maximize airflow), so I will hear every single noise, and I also listen to music with a high-end headphone amp and open headphones (Sennheiser).
My first impression is that the fan noise only becomes an issue when running at full power or when testing the fan RPM at max speed; otherwise it really is totally acceptable, but then I cannot compare with Noctuas for now.
However, there is a little intermittent sizzling electronic noise (which I read about in a review) that comes on and off depending on the load. This does not appear pump related; from what I can guess, this may be due to the design of the AMD board, as I recall hearing the same exact noise on the Red Devil board as well.

*Performance*
My PSU could *not* take the load, and this resulted in the machine rebooting when running the 3DMark Time Spy benchmark under heavy load. This happened twice; lesson learned. I then switched the BIOS to silent mode, which effectively dropped the performance, and probably the wattage load, and when I re-tested, my PC no longer rebooted, so this looks load related. When I tested the same benchmark with the Powercolor Red Devil Ultimate, this issue did not happen.
The Time Spy score in silent BIOS is around 16994, just under 17K, so that looks like the slowest it can get.
I read in one of the reviews (probably the most technical) that this kind of 'shut down' issue with a PSU would likely happen with this kind of high-performance GPU.
However, the performance even in silent BIOS mode is higher than what I could achieve with the Red Devil in default overclock mode, by about 500 points.
My previous PSU is 4 years old, a Corsair HX1000i; I thought it was good enough to handle this card, so I was really taken by surprise.
I ended up purchasing a new PSU, a be quiet! Dark Power Pro 12 1200W 80 Plus Titanium Modular.
By now, Sunday June 6th, the issue was resolved by the new PSU.

*6900 XT Radeon software bug?*
Then I spent the whole night re-installing Windows 10 (not fun) trying to fix the Adrenalin app, which reports that my new AMD 6900 cards (first the Red Devil Ultimate and now the Sapphire Toxic Extreme) do not meet minimum specifications; I must report that the issue persists even after a fresh install of Windows.
As a last resort, I downloaded the Radeon bug reporting tool and sent a request to fix it. I hope this is not a compatibility issue with my motherboard, a Gigabyte Aorus Gaming 9 (X299); my processor is a 2017 Intel(R) Core(TM) i9-7900X CPU @ 3.30GHz, 3312 MHz, 10 cores.

*Some reviews*

Megatest Sapphire RX 6900 XT Toxic LE: AMD’s liquid cooled top-end (www.hwcooling.net)
Specs: Sapphire TOXIC RX 6900 XT Extreme Edition - AMD Navi 21, 2525 MHz, 5120 cores, 320 TMUs, 128 ROPs, 16384 MB GDDR6, 2000 MHz, 256-bit (www.techpowerup.com)
SAPPHIRE Toxic RX 6900 XT Limited Edition Review (www.kitguru.net)
SAPPHIRE TOXIC Radeon RX 6900 XT Limited Edition Review (www.tweaktown.com)
Sapphire Radeon RX 6900XT Toxic Review – is AMD’s battle bolide with all-in-one water cooling really better than the Nitro+? (www.igorslab.de)


----------



## LtMatt

rodac said:


> View attachment 2513244
> 
> 
> Here is the new GPU.
> 
> *My first impression*
> This looks fantastic. Am I happy? Yes, definitely: the performance is unquestionably higher than standard 6900 XT GPUs, but I could not see all of it because I have a PSU issue.


Congrats, looks nice!

I have the same PSU btw and don't have any issue. It has a switch on the back to use Single/Multi rail. 

Did you check you were using a single rail as well as two independent cables from the PSU?

Your CPU may hold you back in some things. Time to upgrade to a Ryzen 5000 series to keep that GPU fed appropriately.


----------



## rodac

LtMatt said:


> Congrats, looks nice!
> I have the same PSU btw and don't have any issue. It has a switch on the back to use Single/Multi rail.
> Did you check you were using a single rail as well as two independent cables from the PSU?
> Your CPU may hold you back in some things. Time to upgrade to a Ryzen 5000 series to keep that GPU fed appropriately.


Good to hear your feedback. I checked some more detailed info from the link below about that Corsair HX1000i PSU; it looks like mine only has a single rail, so you probably have a better one.
See for yourself, I have uploaded a screen print showing the back of the PSU and where the graphics card is connected _*circled in blue*_.

*Corsair HX1000i 1000W 80 PLUS Platinum Power Supply Review* (www.tweaktown.com)


----------



## airisom2

Got an Elmor EVC on the way to increase voltage and such. I was looking at one of Buildzoid's recent videos, and AMD looked out for the XOCers: there is an area where you can solder a header for an I2C connection to the Elmor. Better yet, basically all of the AIB boards are just a stretched reference design with different MOSFETs, so the header is on basically all 6900 XTs. Even better, the solder job clears the backplate, making for a pretty seamless installation.

XTXH indeed has a higher voltage limit at 1.2V; that's what mine gets up to. Forcing a higher voltage through MPT defaults the max clock speed to 500MHz, so you'll need an Elmor to get more out of the card.


----------



## LtMatt

rodac said:


> Good to hear your feedback. I checked some more detailed info from the link below about that Corsair HX1000i PSU; it looks like mine only has a single rail, so you probably have a better one.


Ah looks like I have the HX1000 platinum.


----------



## NeeDforKill

Hello friends. I managed to place an order for a Sapphire 6900 XT Nitro+ SE at a reasonable price. As it's a new card, I couldn't find any reviews. Can you say whether this card is worth getting? I would be grateful for your opinions and any information about it.
My previous card was a 6900 XT Red Devil.
I am now on a Strix 6800 XT LC, which a friend of mine should be buying from me.
Thank you!


----------



## airisom2

NeeDforKill said:


> Hello friends. I managed to place an order for a Sapphire 6900 XT Nitro+ SE at a reasonable price. As it's a new card, I couldn't find any reviews. Can you say whether this card is worth getting? I would be grateful for your opinions and any information about it.
> My previous card was a 6900 XT Red Devil.
> I am now on a Strix 6800 XT LC, which a friend of mine should be buying from me.
> Thank you!


One of the best air cooled 6900XTs out there right now. I'd put it above the Red Devil Ultimate. It has the 6900XT Toxic PCB as well as an XTXH GPU. Factory clocks seem to be on the conservative side given it is an XTXH variant, but it'll open up with some overclocking. They probably did it to segment it from the Toxic card.


----------



## rodac

NeeDforKill said:


> Hello friends. I managed to place an order for a Sapphire 6900 XT Nitro+ SE at a reasonable price. As it's a new card, I couldn't find any reviews. Can you say whether this card is worth getting? I would be grateful for your opinions and any information about it.
> My previous card was a 6900 XT Red Devil.
> I am now on a Strix 6800 XT LC, which a friend of mine should be buying from me.
> Thank you!


Same here, I also rate Sapphire higher than PowerColor. Just like you, I had a Red Devil Ultimate, and I RMAed it because at the time I was not aware that the Radeon drivers and software have an issue that reported my new card as not meeting the minimum requirements. Since the RMA I purchased a Sapphire Toxic, which looks like a better card, and its performance even in silent mode seems higher. I am still having issues, but that is likely PSU related, not the card... hopefully. The Nitro seems more popular than the Toxic, but I had no choice, as the Toxic was the only one available.


----------



## J7SC

...I've been slowly bumping up GPU and VRAM incrementally to check impact on scores on the 6900 XT...at max VRAM (2150), scores were still rising, but it's at the end of the slider - what do you folks do to test beyond that - MSI AB  ? Mixing that w/ AMD Adrenalin doesn't seem to work that well.

...on GPU, still a bit of room left, what with undervolting, but getting a w-block next week as temps need to get checked


----------



## EastCoast

Performed the washer mod on my card (described in the 5700 thread); thought I would pass it along here as well. Results look good. I only tightened the GPU screws enough to get good resistance, and hand-tightened the rest of the screws on the back of the card. I placed washers on all 11 screws. Seeing good results on the VREG temps, even after replacing the thermal pads with Fujipoly.
Washer size: M3, 6mm x 3mm x 1mm.
To get the washers onto the GPU screws I did two at a time, then tightened them in a criss-cross pattern.








AMD RX 5700 XT and 5700 Owner's Club (www.overclock.net)


----------



## airisom2

Yep, washers can do a very good job on some cards. Just be mindful of PCB flex; as long as it isn't too warped you should be good to go. 1mm thick is a good choice and about as thick as I'd go personally. Metal washers are preferred, as nylon ones can deform and flex, which loosens tension.

Nylon might perform well for the first week or so, then you're wondering why temps went up again. Metal washers make the temperature drop permanent. Well, that was how it was for me at least; YMMV.


----------



## EastCoast

airisom2 said:


> Yep, washers can do a very good job on some cards. Just have to be mindful of the PCB flex, and as long as it isn't too warped you should be good to go. 1mm thick is a good choice and about as thick as I'd go personally. Metal washers are preferred as nylon ones can deform and flex which loosens tension.
> 
> It might perform well for the first week or so, then you're wondering why temps went up again. Metal washers make the temperature drop permanent. Well, that was how it was for me at least, ymmv.


Thanks for that. I will keep an eye out and get metal ones if I start to notice any temp issues.
I didn't know there were metal prongs/contacts on the outer portion of the screw until I took it off; then I realized I could have gotten metal ones instead.


----------



## ptt1982

Henry Owens said:


> How did you apply the paste? The EKWB manual says to do an X on the die, but reading deeper on the EK website it says they recommend spreading Thermal Grizzly.


I spread a thin layer of the paste on the GPU. I think with a better paste, and more of it, I'd probably get a bit better results. 

So far the highest I've ever gotten for junction temp was 81C (average junction was 66C), and that was running a stress test: RT ultra and everything else maxed out, 4K Shadow of the Tomb Raider, at the worst possible spot for 25 min straight (that's around 25fps, so an unrealistic scenario). This is with MPT 350W/375W, an OC of 2670/2080, pump at 70% and fans at 60%. My original goal was to stay under 85C junction temp, and I've achieved that with the custom loop. I've got thin 30mm 240+360mm cheapest-available Corsair radiators and soft tubes, and a cheapish pump/reservoir combo. Most of the time in games the GPU stays around 50-55C and junction 60-70C, spiking to 75C at worst. I'm sure I could squeeze another 10C with better equipment and thermal paste, but I've already spent quite a lot of money getting this thing working, so I'm going to just install the new motherboard when it arrives (without disassembling the loop, but doing a leak test afterwards) and call it a day. I'm not going to touch the PC for a year after that; I'll just enjoy my life using it... finally!

Can recommend putting the 6900 XT under water; it just solves so many of the problems in regards to OC and MPT. Plus, if a 1.2V BIOS comes out one day, the card is ready for that as well.


----------



## Henry Owens



ptt1982 said:


> I spread a thin layer of the paste on the GPU. I think with a better paste, and more of it, I'd probably get a bit better results.
> 
> So far the highest I've ever gotten for junction temp was 81C (average junction was 66C), and that was running a stress test RT ultra and everything else maxed out 4K Shadow of the Tomb Raider at the worst possible spot for 25min straight (that's around 25fps, so an unrealistic scenario). This is with MPT 350W/375W, and OC to 2670/2080, pump at 70% and fans at 60%. My original goal was to aim for less than 85C Junction Temp, and I've achieved that with the custom loop. I've got thin 30mm 240+360mm cheapest corsairs radiators and soft tubes, and a cheapish pump/reservoir combo. Most of the time in games the GPU stays around 50-55C and Junction 60-70C, spikes to 75C at worst. I'm sure I could squeeze 10C with better equipment and thermal paste etc., but I've spent already quite a lot of money to get this thing working, so I'm going to just install the new motherboard when it arrives (without dissambling the loop, but doing a leak test afterwards) and call it a day. I'm not going to touch the PC in a year after that and just enjoy my life using it...finally!
> 
> Can recommend putting the 6900xt under water, it just solves so many of the problems in regards to OC and MPT etc. Plus if there's a 1.2v bios coming out one day, the card is ready for that as well.


Ok, nice. Your temps seem a little higher, but not bad; the thinner the layer of paste, the better. My GPU seems to benefit from higher flow rates. Do you know how hot your coolant is getting?


----------



## LtMatt

airisom2 said:


> One of the best air cooled 6900XTs out there right now. I'd put it above the Red Devil Ultimate. It has the 6900XT Toxic PCB as well as an XTXH GPU. Factory clocks seem to be on the conservative side given it is an XTXH variant, but it'll open up with some overclocking. They probably did it to segment it from the Toxic card.


I know someone using that model, it only has 1.175v available as voltage so it might just be a regular XT.


----------



## lestatdk

J7SC said:


> ...I've been slowly bumping up GPU and VRAM incrementally to check impact on scores on the 6900 XT...at max VRAM (2150), scores were still rising, but it's at the end of the slider - what do you folks do to test beyond that - MSI AB  ? Mixing that w/ AMD Adrenalin doesn't seem to work that well.
> 
> ...on GPU, still a bit of room left, what with undervolting, but getting a w-block next week as temps need to get checked


Unfortunately MSI AB only allows 2150 for memory as well, and there's no option to set fast timings in AB. I've tried running a minimal Adrenalin driver install + MSI AB, and so far it underperformed compared to only using Wattman for tuning. And my card is an MSI card.


----------



## lestatdk

J7SC said:


> I picked up a 6900 XT (Gigabyte Gaming OC, dual bios, 3x 8 pin PCIe) yesterday. It's for a dual mobo build geared towards both work and play, with the other system having a w-cooled 3090 Strix. It's my first AMD card since 290X Lightning, and I'm still feeling my way around. A couple of quick overclock attempts later, I seem to get closer to dial-in...so far, best results w/ undervolting to 1.1 v from 1.175 v
> 
> The card seems to have more headroom re. PL though once properly cooled, and I am wondering if any of you have successfully managed a higher / verified PL with the 'MorePower tool'. Alternatively, I may try to hunt down a higher-PL 3rd party bios. Any tips on either are appreciated
> 
> BTW, I ordered a custom w-block for the 6900 XT as temps on the air cooler are 'ok', but only with the fans set to blasting 'mega loud'..here's an early result:












These are the values I've been using for a while now. I use these with +12% in Wattman as well. 

Try and run HWinfo to monitor which limit you're hitting while benchmarking.


----------



## ptt1982

Henry Owens said:


> Ok nice your temps seem a little higher but not bad. Thinner layer of paste the better. My gpu seems to benefit from higher flow rates. Do you know how hot your coolant is getting?


I guess they are a bit high, but I'm fine with them. I've got +15% on top of the MPT settings, so the max is around 430W. Not sure how hot the coolant is getting; I need to wait for the new mobo to arrive and install it before I can measure. It's a budget water-cooling set, but it works. I know I won't be going back to air, for sure.
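For anyone wondering where that ~430W comes from: the Wattman power-limit percentage is applied on top of whatever limit MPT sets, so it's just multiplication (a minimal sketch; the variable names are mine):

```python
# Wattman's PL slider multiplies the MPT power limit (variable names are mine).
mpt_limit_w = 375          # power limit set in MorePowerTool, in watts
wattman_pl = 0.15          # the +15% slider in Wattman

max_power = mpt_limit_w * (1 + wattman_pl)
print(max_power)           # ~431W, i.e. "around 430W" as stated above
```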


----------



## airisom2

LtMatt said:


> I know someone using that model, it only has 1.175v available as voltage so it might just be a regular XT.


I was looking at the Videocardz source for the XTXH bit, but I guess that was just speculation since the Toxic ED has an XTXH and started selling recently. The press release doesn't say anything about it using an XTXH GPU, so I guess I'm spreading false rumors then 😅 

They may be using the older Toxic PCB (is there a difference?) with the standard XTX chip. Or maybe it does have an XTXH and they're bios-limiting it. Flashing a Toxic ED bios on there might give us something. 

For XTXH cards, we have so far:
Red Devil Ultimate
Xtreme Waterforce
OC Formula
Toxic Extreme Edition


----------



## J7SC

lestatdk said:


> View attachment 2513316
> 
> 
> These are the values I've been using for a while now. I use these with +12% in Wattman as well.
> 
> Try and run HWinfo to monitor which limit you're hitting while benchmarking.


Thank you  ...I'll try those...
So far, with HWiNFO & GPU-Z, the highest 'GPU core' wattage I've seen is 320W. It's a 3x8-pin card with 14+3 phases (Infineon XDPE132G5D GPU voltage controller), and there ought to be a lot more headroom, especially with the water block arriving next week for temp control.



Spoiler


----------



## Henry Owens

What's the max watts and amps you would give a reference card with really good cooling?


----------



## weleh

airisom2 said:


> I was looking at the Videocardz source for the XTXH bit, but I guess that was just speculation since the Toxic ED has an XTXH and started selling recently. The press release doesn't say anything about it using an XTXH GPU, so I guess I'm spreading false rumors then 😅
> 
> They may be using the older Toxic PCB (is there a difference?) with the standard XTX chip. Or maybe it does have an XTXH and they're bios-limiting it. Flashing a Toxic ED bios on there might give us something.
> 
> For XTXH cards, we have so far:
> Red Devil Ultimate
> Xtreme Waterforce
> OC Formula
> Toxic Extreme Edition


Gaming Z Trio from MSI also uses XTXH. 
Also the newest Asus Strix LC Ultra will be XTXH.


----------



## lestatdk

J7SC said:


> Thank you  ...I'll try those...
> So far, with HWInfo & GPUz, the highest 'GPU core' wattage I've seen is 320 W. It's a 3x8 pin card with 14 +3 phases (Infineon XDPE132G5D GPUv controller), and there ought to be a lot more headroom, especially with the w-block arriving next week for temp control.


My card has 3x8 pin as well.

In hwinfo go down to the bottom of the list and you can see in percentages how much you're loading the card.










I could see that it never came close to hitting the SOC limit at all. Since I'm on air, I decided to stop at my current values, as I don't want temps getting too high.


----------



## lestatdk

Henry Owens said:


> What's the max watts and amps you would give a reference card with really good cooling?


With 2x8-pin connectors plus PCIe slot power you could get at least 375W. Not saying you should go that high, but 150W per 8-pin is very conservative.
Go with small increments and test for temps.
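The 375W figure above is just the nominal PCIe spec budget added up; a minimal sketch of the arithmetic (the function name is mine, and as noted in this thread, a good PSU can deliver well beyond these per-connector ratings):

```python
# Nominal PCIe power budget math (spec values; real connectors on a
# quality PSU can safely exceed these, as discussed above).
PCIE_SLOT_W = 75       # PCIe x16 slot, per spec
EIGHT_PIN_W = 150      # 8-pin PCIe auxiliary connector, per spec

def board_power_budget(num_8pin: int) -> int:
    """Nominal spec power budget for a card with the given 8-pin count."""
    return PCIE_SLOT_W + num_8pin * EIGHT_PIN_W

print(board_power_budget(2))   # reference 2x8-pin card -> 375
print(board_power_budget(3))   # 3x8-pin AIB card -> 525
```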


----------



## Henry Owens

lestatdk said:


> With 2x8 pin and the PCI power you could get a max of at least 375W. Not saying you should go that high, but 150 pr 8 pin is very conservative.
> Go with small increments and test for temps.


How about 360W/380A?


----------



## lestatdk

Henry Owens said:


> How about 360/380a?


It all depends on temperatures.






Radeon RX 6900 XT Power Limit (community.amd.com)
As that guy said, the 8-pin can provide a lot more than 150W if you have a good PSU.


----------



## J7SC

lestatdk said:


> My card has 3x8 pin as well.
> 
> In hwinfo go down to the bottom of the list and you can see in percentages how much you're loading the card.
> 
> View attachment 2513328
> 
> 
> I could see that it never came close to hitting the SOC limit at all. Since I'm on air I decided to stop with my current values since I don't want to get too high temps


...I'm stepping out for a bit, but I'll run some more stress tests later w/ HWiNFO... I already use it extensively for my 3x8-pin 3090 (both it and the 6900 XT are for work as well), and the 3090 came with a 500+W (total board power) stock BIOS before I loaded a 520+W one (water-cooled).

...what AMD BIOS flashing tool / version are you using? Sorry for all the questions, btw; I haven't had an AMD GPU since R290X days.


----------



## lestatdk

J7SC said:


> ...I'm stepping out for a bit, but I'll run some more stress tests later w/ HWInfo...I already use it extensively for my 3x8 pin 3090 (both that and the 6900XT are for work s well) and the 3090 came with 500+ W (total board power) stock, before I loaded up a 520+ W bios (w-cooled).
> 
> ..what AMD bios flashing tool / version are you using ? Sorry for all these questions btw, haven't had an AMD GPU since R290X days..


Not flashing any BIOS. You extract the BIOS using GPU-Z, then go into MorePowerTool and load the BIOS file. Make the changes and save them to the SPPT (Soft PowerPlay Table), which lives in the registry. Then just reboot for the changes to take effect. Watch this video for the procedure; it's very simple:


----------



## Henry Owens

lestatdk said:


> It all depends on temperatures.
> 
> 
> 
> 
> 
> 
> Radeon RX 6900 XT Power Limit (community.amd.com)
> as that guy said the 8 pin can provide a lot more than 150 if you have a good PSU.


Temps are totally under control.


----------



## lestatdk

Henry Owens said:


> Temps are totally in control.


What are your values set to in MPT ?


----------



## Henry Owens

lestatdk said:


> What are your values set to in MPT ?


Right now experimenting with 375w/ 390a
Core clock is 2685 with 1175mv.
Passes port royal at these settings.
Testing timespy extreme I got it to pass with those settings listed with added +5% in wattman.
Gpu max temp 45c, max junction temp 62c.
Seems pretty good?


----------



## J7SC

lestatdk said:


> Not flashing any bios. You extract the bios using GPU-z. Then go into Morepowertool and load the bios file. Make the changes and save to the SPPT which is in registry. Then just reboot for changes to take effect. Watch this video for the procedure it's very simple:


Thanks again  ...I do vaguely remember that AMD cards keep vBIOS changes in the Windows registry (it's all a bit foggy, but I think every time I changed something on the 8990s or 290X, it would create a new HW entry in the registry, as far as I recall).

The reason I asked about the flashing tool (I downloaded one that claims to be up to date) is that the Aorus Master / Waterforce seems likely to have the same PCB and definitely the same IO layout. The Aorus probably has a higher bin and a nicer cooler (certainly the air-cooled Master does), but I figure the Aorus Master Waterforce BIOS would be a decent match before modding it per the linked video, not least as I'm water-cooling this Gigabyte version.

As discussed, below in the spoiler is the HWiNFO data on 'stock, air' for a Superposition 4K run. VRAM is maxed with scores still increasing before I 'ran out of slider', but the GPU core is not finalized yet, pending mounting of the water block, as the hotspot temp is already in the mid-80s C. GPU voltage is also undervolted to about 1.075V +-



Spoiler


----------



## rodac

LtMatt said:


> Ah looks like I have the HX1000 platinum.


@weleh @LtMatt
Great: tonight, Sunday June 6th 2021, the issue is gone with the new PSU (be quiet! Dark Power Pro 12 1200W 80 Plus Titanium Modular), and I am able to utilize my new Sapphire Toxic Extreme GPU to its full capacity on my X299 motherboard. Really happy.
I would probably achieve higher performance with a new AMD motherboard; that is probably going to be the next step.


----------



## thomasck

Hi guys. Have you been using SAM? Did you get any improvement? Even though it supposedly should not work with X370, I updated the mobo BIOS and enabled SAM.
Here are some scores: 3900X, reference 6900 XT, Taichi X370, 2x8GB 3733 CL15.

without SAM
ff ultra, graphics 13 611, physics 28 314, combined 6 909, total 13 355
ff extreme, graphics 27 244, physics 29 091, combined 10 250, total 23 561
ff normal, graphics 53 219, physics 29 067, combined 9 214, total 33 215
timespy extreme, graphics 8 795, cpu 6891 total 8 444
timespy normal, graphics 18 271, cpu 13 552 total 17 364

with SAM
ff ultra, graphics 13 672, physics 28 217, combined 6 954, total 13 413
ff extreme, graphics 27 407, physics 29 048, combined 10 382, total 23 718
ff normal, graphics 53 654, physics 28 276, combined 9 996, total 34 144
timespy extreme, graphics 8 951, cpu 6 888 total 8 566
timespy normal, graphics 18 671, cpu 13 471 total 17 649
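Working out the graphics-score uplift from those numbers, SAM is worth roughly +0.4% to +2.2% here; a quick sketch of the arithmetic (the dictionary keys are just my shorthand for the tests above):

```python
# Percent uplift from SAM, using the graphics scores posted above
# (score pairs copied from the post; the key names are my shorthand).
no_sam = {"fs_ultra": 13611, "fs_extreme": 27244, "fs_normal": 53219,
          "ts_extreme": 8795, "ts_normal": 18271}
with_sam = {"fs_ultra": 13672, "fs_extreme": 27407, "fs_normal": 53654,
            "ts_extreme": 8951, "ts_normal": 18671}

gains = {t: round((with_sam[t] / no_sam[t] - 1) * 100, 1) for t in no_sam}
for t, g in gains.items():
    print(f"{t}: {g:+.1f}%")   # biggest gain is in Time Spy normal
```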


----------



## chispy

Interesting AIO product for the RX 6900 XT: a Taobao (淘宝网) listing for an all-in-one water-cooled GPU cooler covering AMD/ASUS/Gigabyte/MSI/Colorful RTX 30-series and RX 6800/6900 XT cards.


----------



## ptt1982

Henry Owens said:


> Right now experimenting with 375w/ 390a
> Core clock is 2685 with 1175mv.
> Passes port royal at these settings.
> Testing timespy extreme I got it to pass with those settings listed with added +5% in wattman.
> Gpu max temp 45c, max junction temp 62c.
> Seems pretty good?
> View attachment 2513338


Fantastic result! I've got up to 10128 with the Red Devil @ 2670/2110, gpu max 50C/78C. MPT 340/365 +15% PL. You've beaten me in every category! I envy your low junction temp.


----------



## LtMatt

Henry Owens said:


> Right now experimenting with 375w/ 390a
> Core clock is 2685 with 1175mv.
> Passes port royal at these settings.
> Testing timespy extreme I got it to pass with those settings listed with added +5% in wattman.
> Gpu max temp 45c, max junction temp 62c.
> Seems pretty good?
> View attachment 2513338


Good score, this is the best I have managed with my Merc on air. 
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)


----------



## Henry Owens

ptt1982 said:


> Fantastic result! I've got up to 10128 with the Red Devil @ 2670/2110, gpu max 50C/78C. MPT 340/365 +15% PL. You've beaten me in every category! I envy your low junction temp.
> 
> View attachment 2513378


Oh wow, great, thanks! I think my low temps come from the nice thin layer of Kryonaut I applied (it took me 20 minutes until I thought it was passable) and also from my flow rate with two D5 pumps.


----------



## lestatdk

J7SC said:


> Thanks again  ...I do vaguely remember that AMD cards update vbios changes in the Windows registry (it's all a bit foggy, but I think every time I changed s.th. on the 8990s or 290X, it would create a new HW entry in the registry, afair).
> 
> The reason I asked about the flashing tool (I downloaded one which claims to be up-to-date) is that the Aorus Master / Waterforce seems likely to have the same PCB and definitely the same IO layout. The Aorus probably has a higher bin and a nicer cooler (certainly the air-cooled Master does), but I figure the Aorus Master Waterforce bios would be a decent match before modding that one per the linked video, not least as I'm watercooling this Gigabyte version.
> 
> As discussed, below in the spoiler is the HWInfo data for the 'stock, air' Superposition 4K run. VRAM is maxed, with scores still increasing before I 'ran out of slider', but the GPU core is not finalized yet, pending mounting the w-block first, as the hotspot temp is already in the mid-80s C. The GPU is also undervolted, to about 1.075V.


Notice how only PPT is hitting the limit. Your TDC and SOC do not need to be increased yet.


----------



## J7SC

lestatdk said:


> Notice how only PPT is hitting the limit. Your TDC and SOC do not need to be increased yet.


tx...yeah, for now I have to undervolt this thing significantly to keep Hotspot temps in check. 

Fortunately, 'this' just arrived about 30 min ago...Unfortunately, I won't get to mounting it for about two weeks as it is part of a dual mobo - single 'case (TT Core P8) complex build project w/ everything w-cooled.


----------



## NeeDforKill

Bois, I got my Sapphire 6900 XT Nitro+ SE. So far, my impressions are mixed.
What I liked:
1. Design - this is probably the most beautiful video card I have had in my PC, and I have pretty much every current-gen card: 3080/6800 XT/6900 XT/3090.
2. Cooling system - it is very quiet; for me, at 100% fans it is about half the noise of the 6900 XT Red Devil.
What I didn't like:
1. Overclocking on this card hits a real limit. As I said, I already had a 6900 XT Red Devil, and with MPT at around 500W TDP, in Superposition at 4K settings with 100% fans, I got 2720MHz on the core with the hotspot just under 100C, around 98C. The Nitro+ SE throttles instead: overclocked to 2650MHz core at 100% fans, running Superposition 4K, it hits 110C hotspot and automatically drops the core to ~2500MHz, so I end up in the 2450-2550 range with the hotspot sitting at 109C. So now you understand how it feels to go from 2700-2720 on the Red Devil in Superposition and Time Spy Extreme to a throttling GPU :feelsbadman:
2. My Nitro+ SE probably has the Toxic board, but not an XTXH chip :unlucky:
What surprised me:
The default spec lists boost up to 2365MHz, but my Nitro+ SE boosts to 2465MHz by default - probably a real secret Toxic boost

I am still deciding whether this card is better for gaming, or even just equal in performance for me, especially since I got the Red Devil 6900 XT a few months ago for $1440 and this one cost me $2150.
For now I am downloading some games to test and compare against the Red Devil.

This is my 3DMark Time Spy Extreme, default settings except the CPU test disabled. What do you think of the sensor temps? They look pretty nice to me.


----------



## LtMatt

NeeDforKill said:


> Bois, I got my Sapphire 6900 XT Nitro+ SE. So far, my impressions are mixed.
> What I liked:
> 1. Design - this is probably the most beautiful video card I have had in my PC, and I have pretty much every current-gen card: 3080/6800 XT/6900 XT/3090.
> 2. Cooling system - it is very quiet; for me, at 100% fans it is about half the noise of the 6900 XT Red Devil.
> What I didn't like:
> 1. Overclocking on this card hits a real limit. As I said, I already had a 6900 XT Red Devil, and with MPT at around 500W TDP, in Superposition at 4K settings with 100% fans, I got 2720MHz on the core with the hotspot just under 100C, around 98C. The Nitro+ SE throttles instead: overclocked to 2650MHz core at 100% fans, running Superposition 4K, it hits 110C hotspot and automatically drops the core to ~2500MHz, so I end up in the 2450-2550 range with the hotspot sitting at 109C. So now you understand how it feels to go from 2700-2720 on the Red Devil in Superposition and Time Spy Extreme to a throttling GPU :feelsbadman:
> 2. My Nitro+ SE probably has the Toxic board, but not an XTXH chip :unlucky:
> What surprised me:
> The default spec lists boost up to 2365MHz, but my Nitro+ SE boosts to 2465MHz by default - probably a real secret Toxic boost
> 
> I am still deciding whether this card is better for gaming, or even just equal in performance for me, especially since I got the Red Devil 6900 XT a few months ago for $1440 and this one cost me $2150.
> For now I am downloading some games to test and compare against the Red Devil.
> 
> This is my 3DMark Time Spy Extreme, default settings except the CPU test disabled. What do you think of the sensor temps? They look pretty nice to me.
> View attachment 2513445


Interesting, I thought the cooler on the Sapphire would have been better, based on user feedback here about Red Devil junction temperatures.


----------



## HyperC

Sorry guys, I've been busy; the last 2 months have been insane and I never got around to modding my card yet... but it does look promising for the near future. Have I missed anything important?


----------



## BIaze

Strix 6900XT LC
4x Arctic P12 PWMs on the radiator, blower fan is 100%(and so are the arctics)

Furmark 11k res/2x MSAA

power limit is as shown, I wonder if this is normal or something's wrong with VRM cooling

on stock out of the box setting i'm hitting mid 70s on VRM and the asus' official distributor here says it's normal


----------



## LtMatt

BIaze said:


> View attachment 2513480
> 
> 
> Strix 6900XT LC
> 4x Arctic P12 PWMs on the radiator, blower fan is 100%(and so are the arctics)
> 
> Furmark 11k res/2x MSAA
> 
> power limit is as shown, I wonder if this is normal or something's wrong with VRM cooling
> 
> on stock out of the box setting i'm hitting mid 70s on VRM and the asus' official distributor here says it's normal


580W bloody hell! Why are you running Furmark? 

Those temps look okay considering you are pushing almost 600W through the GPU.


----------



## blackzaru

LtMatt said:


> 580W bloody hell! Why are you running Furmark?
> 
> Those temps look okay considering you are pushing almost 600W through the GPU.


That's peak, not constant. Quick reminder that both AMD and NVIDIA (but particularly the latter) have produced cards that draw sudden, extremely high power transients (up to 30% above average power draw), which is why they recommend 750W to 850W PSUs for *STOCK* cards, let alone OCed ones.

580W was his peak; 472W is the average for his run. Given that his current and immediate temperatures read 44 degrees on an AIO solution, he must have reset the data logging after beginning the test (i.e. under load), and since AIOs and waterblocks in general cool a GPU down very fast, 44 degrees means he stopped the load and took the screenshot immediately.

So, the real power consumption is more likely:

30W idling;
475W average under max load (here the Furmark power virus, which is wildly unrealistic, so expect something more like 400-425W in more "normal" max loads);
580W peak, instantaneous and momentary.
Which is in no way 580-600W constant. In fact, a 580W constant (average) load on a 6900 XT would definitely need a voltage mod or an ElmorLabs EVC, and very likely custom cooling, given that no card will normally pull 485-510A at 1.175 to 1.200V (to reach 580-600W) continuously without hardware mods.
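The amperage figure quoted above follows directly from P = V × I; a quick sketch using the wattages and voltages from this discussion shows where the ~485-510A range comes from (note this lumps the whole card's draw onto the core rail, which is a simplification, since the VRAM and SOC rails take some of that power):

```python
# Current needed for a given board power at a given core voltage: I = P / V.
# Figures taken from the discussion above; lumping all draw onto the core
# rail is a simplification (VRAM and SOC rails draw some of that power too).
def amps_needed(watts: float, volts: float) -> float:
    return watts / volts

for watts in (580, 600):
    for volts in (1.175, 1.200):
        print(f"{watts}W @ {volts}V -> {amps_needed(watts, volts):.0f}A")
```

Running this reproduces the claimed range: roughly 483A at the low end (580W, 1.2V) up to about 511A at the high end (600W, 1.175V).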


----------



## BIaze

LtMatt said:


> 580W bloody hell! Why are you running Furmark?
> 
> Those temps look okay considering you are pushing almost 600W through the GPU.


just for testing purposes really, didn't run it for extended periods of time either




blackzaru said:


> That's peak, not constant. Quick reminder that both AMD and NVIDIA (but particularly the latter) have produced cards that draw sudden, extremely high power transients (up to 30% above average power draw), which is why they recommend 750W to 850W PSUs for *STOCK* cards, let alone OCed ones.
> 
> 580W was his peak; 472W is the average for his run. Given that his current and immediate temperatures read 44 degrees on an AIO solution, he must have reset the data logging after beginning the test (i.e. under load), and since AIOs and waterblocks in general cool a GPU down very fast, 44 degrees means he stopped the load and took the screenshot immediately.
> 
> So, the real power consumption is more likely:
> 
> 30W idling;
> 475W average under max load (here the Furmark power virus, which is wildly unrealistic, so expect something more like 400-425W in more "normal" max loads);
> 580W peak, instantaneous and momentary.
> Which is in no way 580-600W constant. In fact, a 580W constant (average) load on a 6900 XT would definitely need a voltage mod or an ElmorLabs EVC, and very likely custom cooling, given that no card will normally pull 485-510A at 1.175 to 1.200V (to reach 580-600W) continuously without hardware mods.


is increasing max voltage from 1.175 to say, 1.2(or more) safe for the 6900XT?


----------



## BIaze

~merged~


----------



## blackzaru

BIaze said:


> just for testing purposes really, didn't run it for extended periods of time either
> 
> 
> 
> is increasing max voltage from 1.175 to say, 1.2(or more) safe for the 6900XT?


6900 XTXH (the infamous "Ultimate" cards) already have the possibility to run at 1.2V. For now, a regular 6900 XT is software locked at 1.175V, although, with hardware mods or possible future software solutions, we might be able to push 1.2V on it.

Given that XTXH cards can use 1.2V, and that their only difference is that they are better binned versions of the 6900 XT regular cards, with memory and core clocks multipliers allowed to be set higher, along with a bump from 1.175 to 1.2V, I don't see how something that's allowed on what is essentially the same chip, would be harmful to it.

TLDR answer: if you find a way to do it, then yes, pretty sure this is safe.


----------



## blackzaru

NeeDforKill said:


> He probably just changed those values with MPT. I am on a Red Devil and pull ~600W in Time Spy Extreme to achieve a stable 2700MHz during the full test.


You need 600W to achieve 2700 MHz stable on TS Extreme?

On my waterblocked reference 6900 XT, my 2618MHz average clock run was pulling 350-360W average... Are you sure you are not talking about momentary peak draws? Because you are talking about almost double the power draw for a 3% gain in clock frequency...


----------



## NeeDforKill

blackzaru said:


> You need 600W to achieve 2700 MHz stable on TS Extreme?
> 
> On my waterblocked reference 6900 XT, my 2618MHz average clock run was pulling 350-360W average... Are you sure you are not talking about momentary peak draws? Because you are talking about almost double the power draw for a 3% gain in clock frequency...


Yes, I will try to find the screenshots, but I posted them in a Telegram channel that was deleted. It was something like an additional 100W per 1% of performance.


----------



## blackzaru

NeeDforKill said:


> Yes, I will try to find the screenshots, but I posted them in a Telegram channel that was deleted. It was something like an additional 100W per 1% of performance.


I'd surely like to see that!


----------



## Thanh Nguyen

Alphacool, Corsair and EK. Which one is a better block guys?


----------



## blackzaru

Thanh Nguyen said:


> Alphacool, Corsair and EK. Which one is a better block guys?


Out of those 3, Alphacool. And I say this despite running an EK block.

Although, if you are okay with waiting, Watercool will most likely release blocks soon, and they will probably be the most premium blocks out there.


----------



## NeeDforKill

blackzaru said:


> I'd surely like to see that!


I am sorry, that is all I was able to find. I spent about 30 minutes going through all my phones, Google Photos and backups. The screenshot shows Time Spy Extreme in its "hard places"; in test 2 especially, it pulls 443W from the card at 2650MHz.
Someone here could try it (I can't, because the Nitro+ SE just throttles at 110C hotspot).
What I did:
1. Set 700W in MPT, so there is no TDP limit at all.
2. Set 2700+MHz core, and overclocked the memory as well.
3. Set fans to 100% with zero-RPM disabled.
4. Ran 3DMark Time Spy Extreme and Port Royal.


----------



## LtMatt

blackzaru said:


> That's peak, not constant. Quick reminder that both AMD and NVIDIA (but particularly the latter) have produced cards that draw sudden, extremely high power transients (up to 30% above average power draw), which is why they recommend 750W to 850W PSUs for *STOCK* cards, let alone OCed ones.
> 
> 580W was his peak; 472W is the average for his run. Given that his current and immediate temperatures read 44 degrees on an AIO solution, he must have reset the data logging after beginning the test (i.e. under load), and since AIOs and waterblocks in general cool a GPU down very fast, 44 degrees means he stopped the load and took the screenshot immediately.
> 
> So, the real power consumption is more likely:
> 
> 30W idling;
> 475W average under max load (here the Furmark power virus, which is wildly unrealistic, so expect something more like 400-425W in more "normal" max loads);
> 580W peak, instantaneous and momentary.
> Which is in no way 580-600W constant. In fact, a 580W constant (average) load on a 6900 XT would definitely need a voltage mod or an ElmorLabs EVC, and very likely custom cooling, given that no card will normally pull 485-510A at 1.175 to 1.200V (to reach 580-600W) continuously without hardware mods.


I am glad to hear it. Nonetheless, knowing how much wattage Furmark can pull, I can't fathom why anyone uses it.


----------



## blackzaru

NeeDforKill said:


> I am sorry, that is all I was able to find. I spent about 30 minutes going through all my phones, Google Photos and backups. The screenshot shows Time Spy Extreme in its "hard places"; in test 2 especially, it pulls 443W from the card at 2650MHz.
> Someone here could try it (I can't, because the Nitro+ SE just throttles at 110C hotspot).
> What I did:
> 1. Set 700W in MPT, so there is no TDP limit at all.
> 2. Set 2700+MHz core, and overclocked the memory as well.
> 3. Set fans to 100% with zero-RPM disabled.
> 4. Ran 3DMark Time Spy Extreme and Port Royal.
> 
> View attachment 2513489


Oh well, thanks for the time you took, mate. With 443W as your peak load, I think we can safely assume (both from the graph and from common sense) that you were pulling about 400W, or a bit less, on average. You ran 32MHz higher than my run, so that is roughly a 40W increase in average load for 32MHz, i.e. around 1.25W per MHz. I know the scaling is not linear, but for your average load at 2700MHz to be 600W, you would have to jump from 1.25W per MHz to 4.00W per MHz (a 200W difference over 50MHz), a 3.2x ratio over a very short span of frequency. So I still have my doubts about a 600W sustained average. Given the 443W peak you showed, though, a 600W peak is believable (like the other user with a 580W peak and a 472-475W average). Using that user's results, and assuming your cards perform similarly, you would land around 490W constant load, which works out to about 1.80W per MHz for a 2700MHz run. That is much easier to wrap my head around than a sudden wall of 4.00W per MHz.

So I might be wrong, but my guess is that your 2700MHz run had a peak load of 600W with an average around 490-495W. That is still an insane amount of wattage to cool with anything but an AIO, a custom water loop or more exotic cooling.

Anyway, grats on the stable 2700MHz run; my card really tops out in the 2600-2650MHz range.
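The back-of-the-envelope W-per-MHz argument in the post above can be written out as a few lines of Python; the inputs are only the figures quoted in this thread (2618MHz at ~360W, 2650MHz at ~400W), and the linear slope is of course just a rough local estimate, not a real power model:

```python
# Rough W-per-MHz scaling estimate using the figures quoted in the thread.
# Power does not scale linearly with frequency (voltage rises too), so this
# is only a local, back-of-the-envelope comparison.
def watts_per_mhz(p1: float, f1: float, p2: float, f2: float) -> float:
    return (p2 - p1) / (f2 - f1)

# 2618MHz @ ~360W vs 2650MHz @ ~400W: the observed local slope.
print(f"observed slope: {watts_per_mhz(360, 2618, 400, 2650):.2f} W/MHz")

# Slope implied by a claimed 600W average at 2700MHz, from the 2650MHz point.
print(f"implied by 600W @ 2700MHz: {watts_per_mhz(400, 2650, 600, 2700):.2f} W/MHz")

# Slope implied by the more plausible ~490W average at 2700MHz.
print(f"implied by 490W @ 2700MHz: {watts_per_mhz(400, 2650, 490, 2700):.2f} W/MHz")
```

The three slopes come out to 1.25, 4.00 and 1.80 W/MHz respectively, matching the numbers argued above.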


----------



## BIaze

blackzaru said:


> 6900 XTXH (the infamous "Ultimate" cards) already have the possibility to run at 1.2V. For now, a regular 6900 XT is software locked at 1.175V, although, with hardware mods or possible future software solutions, we might be able to push 1.2V on it.
> 
> Given that XTXH cards can use 1.2V, and that their only difference is that they are better binned versions of the 6900 XT regular cards, with memory and core clocks multipliers allowed to be set higher, along with a bump from 1.175 to 1.2V, I don't see how something that's allowed on what is essentially the same chip, would be harmful to it.
> 
> TLDR answer: if you find a way to do it, then yes, pretty sure this is safe.


oh welp, i thought MPT already allows that


----------



## blackzaru

LtMatt said:


> I am glad to hear it. Nonetheless I know how much wattage Furmark can pull, I can't fathom why anyone uses it.


I wholeheartedly agree with you. Furmark was the hallmark of "power benching" back in the day (i.e. the Maxwell era), but with cards able to pull more and more wattage nowadays, I see it more as a potential danger than anything else. The risks are minimal, but it is no wonder it is now described as a "power virus".

The same goes for Prime95 small FFT, which is overkill and actually dangerous to use in prolonged CPU tests. These are tests that even the most extreme real-world use cases do not come close to. They are called torture tests for a reason: they operate way beyond anything the component would be put through in any realistic scenario.


----------



## blackzaru

BIaze said:


> oh welp, i thought MPT already allows that


For now, MPT lets you adjust the maximum TDP on both the GPU chip and the SOC, the maximum amps on both the GPU chip and the SOC, the maximum voltage (which won't work above 1.175V on XT cards, since pretty much everyone reports driver crashes or black screens on non-XTXH cards when trying to go past it, and 1.200V on XTXH cards), the minimum and maximum core frequency (up to 3000MHz for XT cards, 4000MHz for XTXH cards), and the memory clock (2150MHz for XT cards, although most people report a "true" frequency of 2138MHz, and up to 2624MHz for XTXH cards).
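For reference, the XT vs XTXH soft limits listed above can be collected in one place; note these are community-observed values reported in this thread, not official AMD specifications, and the helper function is just a hypothetical illustration of checking an OC target against them:

```python
# MPT-exposed soft limits for Navi 21 as reported in this thread
# (community-observed values, not official AMD specifications).
MPT_LIMITS = {
    "6900 XT": {
        "max_vcore_v": 1.175,   # crashes/black screens reported above this
        "max_core_mhz": 3000,
        "max_mem_mhz": 2150,    # most report a "true" 2138MHz
    },
    "6900 XTXH": {
        "max_vcore_v": 1.200,
        "max_core_mhz": 4000,
        "max_mem_mhz": 2624,
    },
}

def check_target(card: str, core_mhz: int, vcore: float) -> bool:
    """Return True if the requested OC target fits within the soft limits."""
    lim = MPT_LIMITS[card]
    return core_mhz <= lim["max_core_mhz"] and vcore <= lim["max_vcore_v"]

print(check_target("6900 XT", 2800, 1.175))   # within XT soft limits
print(check_target("6900 XT", 2800, 1.200))   # voltage over the XT cap
```

This is why 1.2V attempts on a plain XT fail in software while the same setting is fine on an XTXH.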


----------



## NeeDforKill

blackzaru said:


> Oh well, thanks for the time you took, mate. With 443W as your peak load, I think we can safely assume (both from the graph and from common sense) that you were pulling about 400W, or a bit less, on average. You ran 32MHz higher than my run, so that is roughly a 40W increase in average load for 32MHz, i.e. around 1.25W per MHz. I know the scaling is not linear, but for your average load at 2700MHz to be 600W, you would have to jump from 1.25W per MHz to 4.00W per MHz (a 200W difference over 50MHz), a 3.2x ratio over a very short span of frequency. So I still have my doubts about a 600W sustained average. Given the 443W peak you showed, though, a 600W peak is believable (like the other user with a 580W peak and a 472-475W average). Using that user's results, and assuming your cards perform similarly, you would land around 490W constant load, which works out to about 1.80W per MHz for a 2700MHz run. That is much easier to wrap my head around than a sudden wall of 4.00W per MHz.
> 
> So I might be wrong, but my guess is that your 2700MHz run had a peak load of 600W with an average around 490-495W. That is still an insane amount of wattage to cool with anything but an AIO, a custom water loop or more exotic cooling.
> 
> Anyway, grats on the stable 2700MHz run; my card really tops out in the 2600-2650MHz range.


That is all benchmark-style testing. As you can see on the GPU-Z graph, in Time Spy Extreme test 2 it hits those 443W peaks five times, and they last longer than 1 second. You will (probably) never see that consumption in any game, even as a peak, so yes, the average is lower. I said 600W in terms of clock stability: with that limit it runs the full test at 2700MHz without dropping. If we set the TDP limit to 500W, it will still run 90% of the test at 2700MHz, but the frequency will drop on those peaks where it wants 600W for 2700MHz.


----------



## chispy

Guys, if anyone with an air-cooled Red Devil RX 6900 XT / XTXH needs a water block, do not order from Alphacool. They have not been able to fulfil orders for the last four months and have just postponed the shipping date again, to July 2, 2021 (a placeholder date that may or may not hold and will likely change again), meaning 6+ months since they started taking money on pre-orders without delivering the blocks. I have just cancelled my pre-order with them and bought a Bykski Red Devil RX 6900 XT water block; it is in stock right now and ships immediately, mine already shipped  . There are a few places to get the Bykski block for the Red Devil:

From here: Bykski RX 6900XT GPU Water Block For Powercolor RX 6900XT 6800XT Red Devil / Red Dragon , Graphic Card Cooler A-PC6900XT-X

And here: 121.72US $ 20% OFF|Bykski Water Block use for PowerColor Red Devil AMD Radeon RX6900XT/6800XT GPU /Video Card /Full Cover Copper Radiator/RGB Light|Fans & Cooling| - AliExpress

I have ordered many water blocks from those 2 stores, never had a problem with them, and they process and ship their packages very fast.

*Just a heads-up on the bad situation at Alphacool, which has gotten worse, with an indefinite delivery date for Red Devil water blocks. Their customer support is also poor and not answering emails, so you know in advance.


----------



## ZealotKi11er

chispy said:


> Guys, if anyone with an air-cooled Red Devil RX 6900 XT / XTXH needs a water block, do not order from Alphacool. They have not been able to fulfil orders for the last four months and have just postponed the shipping date again, to July 2, 2021 (a placeholder date that may or may not hold and will likely change again), meaning 6+ months since they started taking money on pre-orders without delivering the blocks. I have just cancelled my pre-order with them and bought a Bykski Red Devil RX 6900 XT water block; it is in stock right now and ships immediately, mine already shipped  . There are a few places to get the Bykski block for the Red Devil:
> 
> From here: Bykski RX 6900XT GPU Water Block For Powercolor RX 6900XT 6800XT Red Devil / Red Dragon , Graphic Card Cooler A-PC6900XT-X
> 
> And here: 121.72US $ 20% OFF|Bykski Water Block use for PowerColor Red Devil AMD Radeon RX6900XT/6800XT GPU /Video Card /Full Cover Copper Radiator/RGB Light|Fans & Cooling| - AliExpress
> 
> I have ordered many water blocks from those 2 stores, never had a problem with them, and they process and ship their packages very fast.
> 
> *Just a heads-up on the bad situation at Alphacool, which has gotten worse, with an indefinite delivery date for Red Devil water blocks. Their customer support is also poor and not answering emails, so you know in advance.


No reason to overspend on blocks other than Bykski at this point. I was able to get mine in Nov 2020 for my reference 6800 XT.


----------



## BIaze

blackzaru said:


> For now, MPT lets you adjust the maximum TDP on both the GPU chip and the SOC, the maximum amps on both the GPU chip and the SOC, the maximum voltage (which won't work above 1.175V on XT cards, since pretty much everyone reports driver crashes or black screens on non-XTXH cards when trying to go past it, and 1.200V on XTXH cards), the minimum and maximum core frequency (up to 3000MHz for XT cards, 4000MHz for XTXH cards), and the memory clock (2150MHz for XT cards, although most people report a "true" frequency of 2138MHz, and up to 2624MHz for XTXH cards).


that's quite disappointing honestly, with all these "locks/limits" on the non XTXH


----------



## ZealotKi11er

BIaze said:


> that's quite disappointing honestly, with all these "locks/limits" on the non XTXH


The clock limit is not really that important; if you can go over 2800MHz you have a golden card already. The voltage limit is there to allow for board designs and warranty. It is too hard to cover warranty for cards that are overvolted.


----------



## Nighthog

Basically AMD is selling "locked" GPUs, and only the 6900 XTXH is "unlocked", at a price premium.

It is the same with their Ryzen CPUs: the PBO implementation is software locked/limited to what AMD will allow. There would be more performance if they allowed user control over more variables.


----------



## BIaze

ZealotKi11er said:


> The clock limit is not really that important; if you can go over 2800MHz you have a golden card already. The voltage limit is there to allow for board designs and warranty. It is too hard to cover warranty for cards that are overvolted.


don't think I can, i'm maxing out at 2750 on the radeon software thing and going 2800 will crash


----------



## Nighthog

BIaze said:


> don't think I can, i'm maxing out at 2750 on the radeon software thing and going 2800 will crash


My 6900 XT will crash above 2650MHz... I have not gotten it to run TS above that frequency yet.


----------



## ZealotKi11er

Nighthog said:


> Basically AMD is selling "locked" GPUs, and only the 6900 XTXH is "unlocked", at a price premium.
> 
> It is the same with their Ryzen CPUs: the PBO implementation is software locked/limited to what AMD will allow. There would be more performance if they allowed user control over more variables.


What I was trying to say is that the 6800 XT/6900 XT are not really locked. XTXH is binned, so you will only get there with a lucky normal 6800 XT/6900 XT. The only GPU that is really "locked" is the 6800 non-XT, with its 1.025V/2600MHz limits. The thing is, AMD sells these dies at a cost to AIBs. An AIB could easily take a 6800, OC it like a 6800 XT/6900 XT and sell it for a premium while AMD makes less money. The segmentation AMD has in place is there to keep AIB practices fair, not to limit customers. I am sure they will learn from this and give users unlimited everything, but void the warranty.


----------



## J7SC

ZealotKi11er said:


> No reason to overspend on blocks other than Bykski at this point. I was able to get mine in Nov 2020 for my reference 6800 XT.


I've had Bykski blocks as OEM on RTX 2000-series cards for over two years without issues. On the other hand, a fairly new EKWB block and backplate for an NV RTX 3000-series card had all kinds of issues (some already RMAed), such as peeling nickel plating, a raised machine-head imprint right above the GPU die area, RGB LEDs on the blink, etc.

So when Bykski came up as the only available water block option for the (fairly rare) Gigabyte 6900 XT Gaming OC (triple 8-pin), I ordered it late last Thursday and it arrived yesterday, backplate included as part of the package... once mounted (1-2 weeks) I will update on performance etc., but the machining quality looks excellent.


----------



## jonRock1992

So I ended up buying this water block for my red devil ultimate: Bykski RX 6900XT GPU Water Block For Powercolor RX 6900XT 6800XT Red Devil / Red Dragon , Graphic Card Cooler A-PC6900XT-X


----------



## Godhand007

Hi Guys,

New here. I have a question about memory OC on 6900XT. I have heard that maxing it out to 2150MHz actually lowers performance. It doesn't seem to be the case for various 3d mark tests. What are your experiences regarding memory OC?

Here are my OC settings BTW (Reference card on air). Everything is stable with hours of various 3d marks tests/games.


----------



## airisom2

What are you guys getting in Superposition 4K Optimized? I got 17134, but I'm cpu bottlenecked. 2820/2150/mpt. ~400w


----------



## jimpsar

airisom2 said:


> What are you guys getting in Superposition 4K Optimized? I got 17134, but I'm cpu bottlenecked. 2820/2150/mpt. ~400w


I am around 18300-18500 as I recall.


----------



## jonRock1992

I can get around 2760MHz (actual clock speed) stable with my Red Devil Ultimate in games without ray tracing at around 100C junction, but only around 2700MHz (actual clock speed) in really demanding games like Resi 8 with ray tracing at 110C junction. How well do you think this card will run under water? I've got a Bykski full-cover waterblock, a 28mm-thick 360mm rad with Noctua NF-A12x25 fans, and a Bykski B-TANK-DDC-MI pump/res combo, all dedicated to the GPU.


----------



## Godhand007

airisom2 said:


> What are you guys getting in Superposition 4K Optimized? I got 17134, but I'm cpu bottlenecked. 2820/2150/mpt. ~400w


Around the same.


----------



## Godhand007

jonRock1992 said:


> I can get around 2760MHz (actual clock speed) stable with my Red Devil Ultimate in games without ray tracing at around 100C junction, but only around 2700MHz (actual clock speed) in really demanding games like Resi 8 with ray tracing at 110C junction. How well do you think this card will run under water? I've got a Bykski full-cover waterblock, a 28mm-thick 360mm rad with Noctua NF-A12x25 fans, and a Bykski B-TANK-DDC-MI pump/res combo, all dedicated to the GPU.


I can get 2700MHz+ in Cyberpunk 2077 with RT Ultra on my reference card (on air), though it is not stable in various 3DMark stress tests. From what I have seen from other users on water, it generally gives you a cooler card with more stable frequencies.


----------



## CantingSoup

Bykski has released a block for the phantom gaming 6800/6900xt cards.








Amazon.com: Bykski GPU Waterblock GPU Water Cooler GPU Liquid Cooling Block for ASROCK Radeon RX 6800 XT RX 6900 XT Phantom Gaming D 16G OC Taichi X 16G OC (12V 4Pin RGB Lights) : Electronics




----------



## Nighthog

For anyone interested: TimeSpy's demo section can use ~400-420W at modest ~2650MHz speeds if you allow it to gulp wattage, while the benchmark tests themselves are happy to average 300W with ~370W peaks.

Not that allowing more wattage made anything extra stable if you have a bad sample.


----------



## J7SC

...been playing around with the 3x8 pin Gigabyte OC and using full power - unfortunately it's really noisy on air and temps are nothing to write home about (especially Hotspot), but I'm looking forward to mounting the Bykski block next week. Once that is done, I will mod the bios re. power limit.

*Quick question: W/ Navi21s that are maxed at 1.175V, can More Power / RBE push beyond that, or is that a hard lock (which is what I have heard) ? *

...error free to 2785 MHz GPU and 2150 VRAM, though this is in light rendering only on stock bios, not serious bench runs...which will come after w-cooling and bios mods. However, for those mods, the GPU seems to be 'decent', especially for the price I paid.











*EDIT:*
...had a chance to run multiple 3D benches, including Unigine 4k, w/ HWInfo open for effective clocks...this is bone stock w/ air cooler, and results are repeatable. Ambient temp was about 18 C. Needless to add that I'm quite impressed with this Gigabyte OC sample, as it was one of the 'cheapest' 3x8 pin 6900 XT PCBs out there (not to mention the only one available to me then)...the air-cooler kind of sucks and/or is really noisy, but I got the Bykski block just waiting to be installed...with that plus a power-modded vBios, I look forward to maybe getting to 2800 MHz at full 3D


----------



## Godhand007

J7SC said:


> ...been playing around with the 3x8 pin Gigabyte OC and using full power - unfortunately it's really noisy on air and temps are nothing to write home about (especially Hotspot), but looking forward to mount the Bykski block next week. Once that is done, I will mod the bios re. power limit.
> 
> *Quick question: W/ Navi21s that are maxed at 1.175V, can More Power / RBE push beyond that, or is that a hard lock (which is what I have heard) ? *
> 
> ...error free to 2785 MHz GPU and 2150 VRAM, though this is in light rendering only on stock bios, not serious bench runs...which will come after w-cooling and bios mods. However, for those mods, the GPU seems to be 'decent', especially for the price I paid.
> 
> View attachment 2513830
> 
> 
> 
> *EDIT:*
> ...had a chance to run multiple 3D benches, including Unigine 4k, w/ HWInfo open for effective clocks...this is bone stock w/ air cooler, and results are repeatable. Ambient temp was about 18 C. Needless to add that I'm quite impressed with this Gigabyte OC sample, as it was one of the 'cheapest' 3x8 pin 6900 XT PCBs out there (not to mention the only one available to me then)...the air-cooler kind of sucks and/or is really noisy, but I got the Bykski block just waiting to be installed...with that plus a power-modded vBios, I look forward to maybe getting to 2800 MHz at full 3D
> 
> View attachment 2513913


Not to diss on your overclocking efforts, but unless those clocks are stable for more than 30 minutes in both TimeSpy tests 1 and 2 (looped separately), I wouldn't call them successful. With the exception of the TimeSpy stress tests, my card can sustain 2700MHz stable in Cyberpunk, Metro EE, and various other benchmarks including Heaven, etc.

If you have one of the new H chips you can push voltage up to 1.2V; otherwise you are locked to 1175mV. Also, you don't need a bios mod to increase power limits. MPT can do that for you.


----------



## weleh

2700MHz on Heaven can be done by any card, to be honest.

I can game at 2700-2800 24/7, but Time Spy will never pass at that. 2660-2760 are my Time Spy-stable clocks in Wattman, iirc.
The less demanding the load, the higher I can clock. In Firestrike I can bench at almost 2900 on my non-XTXH card.
In Port Royal, due to the RT bottleneck, I need to set 2900 too just so the card even reaches 2500+MHz.


----------



## Godhand007

weleh said:


> 2700 Mhz on Heaven can be done by any card to be honest.
> 
> I can game at 2700-2800 24/7 but Time Spy will not pass this ever. 2660-2760 is my Time Spy stable clocks on Wattman iirc.
> The less demanding the more clocks I can do. On Firestrike I can bench at almost 2900 on my non XTXH card.
> On Port Royal, due to RT bottleneck, I need to set 2900 too so the card even reaches 2500+ Mhz.


Do you mean passing one benchmark run, or the stress test? A single benchmark run can often pass at 50-100MHz over your actual stable frequency.
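[Editor's note] The gap described here, where a single pass overstates the truly stable clock, is why a cautious tune steps the clock down until several consecutive loops pass, not just one. A minimal sketch of that idea in Python; `run_benchmark` and `fake_run` are hypothetical stand-ins, not real tooling, for launching a workload (a Time Spy loop, a game session) at a given clock and reporting pass/fail:

```python
import random

def find_stable_clock(run_benchmark, start_mhz, step=10, required_passes=5):
    """Step the clock down until `required_passes` consecutive runs pass.

    run_benchmark(mhz) -> bool stands in for running a real workload at
    that clock and reporting whether it completed without errors.
    """
    mhz = start_mhz
    while mhz > 0:
        if all(run_benchmark(mhz) for _ in range(required_passes)):
            return mhz  # survived every consecutive loop at this clock
        mhz -= step     # any single failure: back off and try again
    raise RuntimeError("no stable clock found")

# Toy model of a flaky card: single runs sometimes pass above 2700MHz,
# but only 2700MHz and below survives repeated loops.
def fake_run(mhz):
    return mhz <= 2700 or random.random() < 0.5

random.seed(1)
print(find_stable_clock(fake_run, 2750))
```

In a real loop the pass/fail check would come from driver logs or a crash watchdog rather than a coin flip, but the step-down-until-repeatable structure is the point.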


----------



## J7SC

Godhand007 said:


> Not to diss on your overclocking efforts but unless those clocks are stable for more than 30 minutes on TimeSpy tests both 1 and 2 (looped separately), I wouldn't call those successful. With the exception of the TimeSpy stress tests, my card can sustain 2700MHz stable in CyberPunk, Metro EE, and various other benchmarks including Heaven, etc.
> 
> If you have one of the new H chips then you can push voltage up to 1.2 v otherwise you are locked to 1175mv. Also, You don't need a bios mod to increase power limits. MPT can do that for you.





weleh said:


> 2700 Mhz on Heaven can be done by any card to be honest.
> 
> I can game at 2700-2800 24/7 but Time Spy will not pass this ever. 2660-2760 is my Time Spy stable clocks on Wattman iirc.
> The less demanding the more clocks I can do. On Firestrike I can bench at almost 2900 on my non XTXH card.
> On Port Royal, due to RT bottleneck, I need to set 2900 too so the card even reaches 2500+ Mhz.


I seem to have hit a bit of a nerve here, which was not my intent...I'm just thrilled that my 6900 XT (choice of one out of one at the store, plus a decent MSRP) isn't a dud, and it seems to scale very nicely with extra voltage. I do appreciate the confirmation re. the 1.175 V cap I had read about. I also have the 'More Power Tool' downloaded after a previous post by another gent here...just waiting to finish a complex 3-system, 5-mobo update build and related w-cooling. The 3950X/6900XT combo is earmarked as my new 'daily slugger', with a ~ 80% productivity- 20% entertainment brief so I won't push it too hard re. PL mods and/or new bios (which also is an option, given related models). The 5950X / 3090 Strix setup in the same case / system is 70 % entertainment - 30% productivity and gets more OC attention...

FYI, the results I posted above were not a single 'brief run' but part of seven or so sequential Unigine runs (Valley 4k and Superposition 4k, 8k) to 'get some heat' build up...I simply haven't loaded any other bench yet, including the advanced 3DM suite. That said, I'm no stranger to 3DM per spoiler...but as mentioned before, this is my first AMD GPU in seven or so years, so I still have lots to learn... 


Spoiler


----------



## Godhand007

J7SC said:


> I seem to have hit a bit of a nerve here, which was not my intent...I'm just thrilled that my 6900 XT (choice of one out of one at the store, plus a decent MSRP) isn't a dud, and it seems to scale very nicely with extra voltage. I do appreciate the confirmation re. the 1.175 V cap I had read about. I also have the 'More Power Tool' downloaded after a previous post by another gent here...just waiting to finish a complex 3-system, 5-mobo update build and related w-cooling. The 3950X/6900XT combo is earmarked as my new 'daily slugger', with a ~ 80% productivity- 20% entertainment brief so I won't push it too hard re. PL mods and/or new bios (which also is an option, given related models). The 5950X / 3090 Strix setup in the same case / system is 70 % entertainment - 30% productivity and gets more OC attention...
> 
> FYI, the results I posted above were not a single 'brief run' but part of seven or so sequential Unigine runs (Valley 4k and Superposition 4k, 8k) to 'get some heat' build up...I simply haven't loaded any other bench yet, including the advanced 3DM suite. That said, I'm no stranger to 3DM per spoiler...but as mentioned before, this is my first AMD GPU in seven or so years, so I still have lots to learn...
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2513929


No worries mate, it's all good. These cards are good, very good for what they are, and you can easily squeeze 10% more performance from them if you are willing to go that far. But even if you only want a balanced OC, you should definitely look at MPT, as frametime stability improves a lot with increased power limits.
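[Editor's note] Frametime stability, mentioned here, can be put in numbers from a per-frame capture (for example, a log from a tool like CapFrameX or OCAT). A rough sketch with made-up sample values, showing how a single stutter spike leaves average FPS untouched while dragging the 1% low down:

```python
def frametime_stats(frametimes_ms):
    """Average FPS and 1%-low FPS from per-frame times in milliseconds."""
    avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
    worst = sorted(frametimes_ms, reverse=True)  # slowest frames first
    n = max(1, len(worst) // 100)                # the worst 1% of frames
    one_pct_low = 1000.0 / (sum(worst[:n]) / n)
    return avg_fps, one_pct_low

# Two made-up captures with identical averages, one with a hitch.
smooth = [10.0] * 100            # steady 10 ms frames
stutter = [9.5] * 99 + [59.5]    # one 59.5 ms spike
for name, cap in (("smooth", smooth), ("stutter", stutter)):
    avg, low = frametime_stats(cap)
    print(f"{name}: avg {avg:.0f} fps, 1% low {low:.1f} fps")
```

Both captures average 100 fps, but the stutter capture's 1% low collapses to ~17 fps, which is the kind of difference a raised power limit is claimed to smooth out.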


----------



## Kir

Did I just get a bad 6900 XT sample? It's an AMD reference card, and it seems I can neither UV nor OC, and the hot spot temp is always really high. I ran 23 Time Spy runs with different settings. I have a big Excel sheet (can link if wanted), but to summarize:
Everything stock, I get a ~18300 GPU score with a max hot spot temp of 98°C. (I did hit 107°C while playing Assassin's Creed Origins, though.)
The lowest I can undervolt, with everything else stock, is 1130mV. Max hot spot 97°C, 18.7k GPU score. (A pity, since UV seems to really up the score.)
The highest I can overclock, with everything else stock incl. 1175mV, is 2575, which had absolutely no effect on the score.
I basically can't touch the power limit slider, because the hot spot temp exceeds 110°C. I was able to finish a run at 2464 max clock (default), 1130mV, PL +15% and GPU fans at 100%. Result was a 19261 GPU score and 111°C max hot spot.
VRAM tuning I tested from 2130-2150; it only decreases my score, with 2130 giving the highest _shrug_.

Biggest temperature difference between GPU Temp and Hot Spot I measured was 39°C (72 vs 111).

Ambient temps are 22-23°C and my case is a be quiet! Pure Base 500DX (3 case fans all running at max rpm (~1000)).
Aside from not being able to UV or OC much at all, the temps just seem insanely high compared to what other people have been posting. Honestly, this has just been very frustrating so far.

Am I just cherry picking results from others or are these actually bad results? I barely played any games because this is just annoying me so much.


----------



## J7SC

Godhand007 said:


> No worries mate, It's all good. These cards are good, very good for what they are and you can easily squeeze 10% performance from them if you are will to go that far. But even if you want a balanced OC you should definitely look towards MPT as frametime stability is something that would increase a lot with increased power limits.


...yeah, just doing baselines at stock before w-cooling and PL mods. I like what I see so far, though, especially re. scaling as I went from 1.075V to 1.175V ...and this 6900 XT seems to be more efficient than ~500W custom 3090s



Kir said:


> Did I just get a bad 6900 XT sample? It's an AMD reference card and It seems I can neither UV nor OC and the hot spot temp is always really high. I ran lots of time spy runs (23) with different settings.
> 
> 
> Spoiler
> 
> 
> 
> I have a big excel sheet (can link if wanted), but to summarize.
> Everything stock, I get ~18300 GPU score with a max hot spot temp. of 98°C. (I did hit 107°C while playing Assassins Creed Origins though)
> Lowest I can undervolt with everything else stock is 1130mV. Max hot spot 97°C. 18.7k gpu score. (Pity since UV seems to really up the score)
> Highest I can overclock with everything else stock incl. 1175mV is 2575 which had absolutely no effect on the score.
> I basically can't touch the power limit slider, because the hot spot temp exceeds 110°C. I was able to finish a run with 2464 max clock (default), 1130mV, PL+15% and gpu fans at 100%. Result was 19261 gpu score and 111°C max hot spot.
> VRAM tuning I tested from 2130-2150 only decreases my score, with 2130 giving the highest _shrug_.
> 
> Biggest temperature difference between GPU Temp and Hot Spot I measured was 39°C (72 vs 111).
> 
> Ambient temps are 22-23°C and my case is a be quiet! Pure Base 500DX (3 case fans all running at max rpm (~1000)).
> Aside from not being able to UV or OC much at all, the temps just seem insanely high compared to what other people have been posting. Honestly, this has just been very frustrating so far.
> 
> Am I just cherry picking results from others or are these actually bad results? I barely played any games because this is just annoying me so much.


...hard to do a 'remote' analysis, but I get nervous w/ Hotspot temps above 85 C or so no matter what the chip...it can obviously survive more, but OCing and longevity are not helped by that. If I had to guess, based on the temps you are reporting vs ambient, your card might need a cooler re-mount / re-paste. I have had factory-fresh cards before which had a crooked mount on the cooler, and/or some thermal pads not even making contact (never mind proper contact). One quick question: just for testing, have you disabled 0-speed fan before bench runs to check the OC?


----------



## NeeDforKill

Kir said:


> Did I just get a bad 6900 XT sample? It's an AMD reference card and It seems I can neither UV nor OC and the hot spot temp is always really high. I ran lots of time spy runs (23) with different settings. I have a big excel sheet (can link if wanted), but to summarize.
> Everything stock, I get ~18300 GPU score with a max hot spot temp. of 98°C. (I did hit 107°C while playing Assassins Creed Origins though)
> Lowest I can undervolt with everything else stock is 1130mV. Max hot spot 97°C. 18.7k gpu score. (Pity since UV seems to really up the score)
> Highest I can overclock with everything else stock incl. 1175mV is 2575 which had absolutely no effect on the score.
> I basically can't touch the power limit slider, because the hot spot temp exceeds 110°C. I was able to finish a run with 2464 max clock (default), 1130mV, PL+15% and gpu fans at 100%. Result was 19261 gpu score and 111°C max hot spot.
> VRAM tuning I tested from 2130-2150 only decreases my score, with 2130 giving the highest _shrug_.
> 
> Biggest temperature difference between GPU Temp and Hot Spot I measured was 39°C (72 vs 111).
> 
> Ambient temps are 22-23°C and my case is a be quiet! Pure Base 500DX (3 case fans all running at max rpm (~1000)).
> Aside from not being able to UV or OC much at all, the temps just seem insanely high compared to what other people have been posting. Honestly, this has just been very frustrating so far.
> 
> Am I just cherry picking results from others or are these actually bad results? I barely played any games because this is just annoying me so much.


The question is: bad for what? What are you trying to do? Benchmarks have little to do with games, and not many games give you any profit from a higher overclock. Try Watch Dogs: Legion, for example, where the difference between 2300MHz and 2700MHz is 2 fps. The other hard thing, when you overclock to high values, is finding a stable frequency for daily usage. You can sit and fine-tune a whole day until you complete every benchmark without any problem, and then your frequency will be unstable as soon as you enter Warzone. Same thing for ray-tracing games: big chance that the frequency which works in every other game will crash once you enable RT in-game.

About temps: I have the same 105-110C hotspot in some games like Outriders at 330-340W, and it could probably eat more. I have a Sapphire Nitro+ SE. It was the same on the reference Sapphire 6800 XT, which just sat at 105-107C hotspot all the time.




J7SC said:


> I get nervous w/ Hotspot temps above 85 C or so no matter what the chip...it can obviously survive more


LoL, 105+ is normal temps for AMD GPUs - it's official information from AMD staff about those hotspot temps - that's why all reference cards sit at 105+ hotspot at 40% fan speed.
Nvidia would probably show the same temps if it had a hotspot sensor.


----------



## lestatdk

Kir said:


> Did I just get a bad 6900 XT sample? It's an AMD reference card and It seems I can neither UV nor OC and the hot spot temp is always really high. I ran lots of time spy runs (23) with different settings. I have a big excel sheet (can link if wanted), but to summarize.
> Everything stock, I get ~18300 GPU score with a max hot spot temp. of 98°C. (I did hit 107°C while playing Assassins Creed Origins though)
> Lowest I can undervolt with everything else stock is 1130mV. Max hot spot 97°C. 18.7k gpu score. (Pity since UV seems to really up the score)
> Highest I can overclock with everything else stock incl. 1175mV is 2575 which had absolutely no effect on the score.
> I basically can't touch the power limit slider, because the hot spot temp exceeds 110°C. I was able to finish a run with 2464 max clock (default), 1130mV, PL+15% and gpu fans at 100%. Result was 19261 gpu score and 111°C max hot spot.
> VRAM tuning I tested from 2130-2150 only decreases my score, with 2130 giving the highest _shrug_.
> 
> Biggest temperature difference between GPU Temp and Hot Spot I measured was 39°C (72 vs 111).
> 
> Ambient temps are 22-23°C and my case is a be quiet! Pure Base 500DX (3 case fans all running at max rpm (~1000)).
> Aside from not being able to UV or OC much at all, the temps just seem insanely high compared to what other people have been posting. Honestly, this has just been very frustrating so far.
> 
> Am I just cherry picking results from others or are these actually bad results? I barely played any games because this is just annoying me so much.


I got 18979 with stock settings, and OC'ed I get 20928. My average clock speed is 2526MHz. So I don't understand how your overclocked setting did not improve the score at all? Seems really strange. My card is not the best overclocker compared to the others in here, but it'll have to do.


----------



## J7SC

NeeDforKill said:


> (...) LoL, 105+ is normal temps for AMD GPUs - it's official information from AMD staff about those hotspot temps - that's why all reference cards sit at 105+ hotspot at 40% fan speed.
> Nvidia would probably show the same temps if it had a hotspot sensor.


LoL what ? I already stated that ' Hotspot temps can obviously survive more', but there's such a thing as boost algorithms with temps as one of the inputs. I know from my own tests w/ this new 6900 XT that lower temps resulted in better clocks, as much as I'm still at the beginning of that curve. Or you could just check HWBot / top-end 6900 XT results re. their cooling  > here


----------



## NeeDforKill

J7SC said:


> LoL what ? I already stated that ' Hotspot temps can obviously survive more', but there's such a thing as boost algorithms with temps as one of the inputs. I know from my own tests w/ this new 6900 XT that lower temps resulted in better clocks, as much as I'm still at the beginning of that curve. Or you could just check HWBot / top-end 6900 XT results re. their cooling  > here


Because you told the guy: "If I would have to guess, based on the temps you are reporting vs ambient, your card might need a cooler re-mount / re-paste. I had factory-fresh cards before which had a crooked mount on the cooler, and/or some thermal pads not even making contact (never mind proper contact)."
He has perfectly fine temps for a reference card. Like I told you, up to 110C junction everything is fine; it will only shut down your PC when it hits 118C. There is nothing wrong with his temps or his card, so why recommend he re-mount the cooler and repaste it? Your advice is wrong, because he would lose his warranty.
You are also wrong about boost and temps: the GPU only starts throttling when it hits 110C junction, so if you stay at, for example, 108C, you will still get full boost without any throttling.
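[Editor's note] Taking the thresholds claimed here at face value (full boost below 110C junction, throttling at 110C, shutdown at 118C; these numbers are as stated in this thread, not checked against AMD documentation), the claimed behaviour amounts to:

```python
def junction_state(tj_c, throttle_c=110, shutdown_c=118):
    """Classify a junction temp per the thresholds claimed in this thread."""
    if tj_c >= shutdown_c:
        return "shutdown"
    if tj_c >= throttle_c:
        return "throttling"
    return "full boost"

for t in (95, 108, 110, 117, 118):
    print(f"{t}C junction -> {junction_state(t)}")
```

Under this model a card sitting at 108C would still be at full boost, which is the crux of the disagreement; the counter-argument upthread is that boost algorithms also respond to temperature below the hard throttle point.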


----------



## J7SC

NeeDforKill said:


> Because you told the guy: "If I would have to guess, based on the temps you are reporting vs ambient, your card might need a cooler re-mount / re-paste. I had factory-fresh cards before which had a crooked mount on the cooler, and/or some thermal pads not even making contact (never mind proper contact)."
> He has perfectly fine temps for a reference card. Like I told you, up to 110C junction everything is fine; it will only shut down your PC when it hits 118C. There is nothing wrong with his temps or his card, so why recommend he re-mount the cooler and repaste it? Your advice is wrong, because he would lose his warranty.
> You are also wrong about boost and temps: the GPU only starts throttling when it hits 110C junction, so if you stay at, for example, 108C, you will still get full boost without any throttling.


Whatever - I was trying to help 'Kir' as he wondered about his 6900 XT GPU clocks and temps...I showed HWinfo effective clocks of my all-stock card that is almost 20 C lower @ Hotspot...I also know that the stock cooler on my card is no better (if not worse) than the reference cooler...besides, I assure you with over 30 active GPUs here that many come from the factory w/ questionable mounts and/or thermal material.

As to AMD GPU, temps and potential throttling, check out Phoronix...


----------



## weleh

J7SC said:


> I seem to have hit a bit of a nerve here, which was not my intent...I'm just thrilled that my 6900 XT (choice of one out of one at the store, plus a decent MSRP) isn't a dud, and it seems to scale very nicely with extra voltage. I do appreciate the confirmation re. the 1.175 V cap I had read about. I also have the 'More Power Tool' downloaded after a previous post by another gent here...just waiting to finish a complex 3-system, 5-mobo update build and related w-cooling. The 3950X/6900XT combo is earmarked as my new 'daily slugger', with a ~ 80% productivity- 20% entertainment brief so I won't push it too hard re. PL mods and/or new bios (which also is an option, given related models). The 5950X / 3090 Strix setup in the same case / system is 70 % entertainment - 30% productivity and gets more OC attention...
> 
> FYI, the results I posted above were not a single 'brief run' but part of seven or so sequential Unigine runs (Valley 4k and Superposition 4k, 8k) to 'get some heat' build up...I simply haven't loaded any other bench yet, including the advanced 3DM suite. That said, I'm no stranger to 3DM per spoiler...but as mentioned before, this is my first AMD GPU in seven or so years, so I still have lots to learn...
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2513929


No nerve hit at all.
Just letting you know how these cards work that's all.

Some workloads will push the card harder and whatever seems stable is not afterall.


----------



## Kir

My post needs mod approval for some reason. Sorry for the late reply.



J7SC said:


> it can obviously survive more, but OCing and longevity are not helped by that. If I would have to guess, based on the temps you are reporting vs ambient, your card might need a cooler re-mount / re-paste. I had factory-fresh cards before which had a crooked mount on the cooler, and/or some thermal pads not even making contact (never mind proper contact).One quick question: just for testing, have you disabled 0-speed fan before bench runs to check oc ?


Yeah, OCing is basically out of the question. I was wondering if re-mounting and/or repasting would help, but I guess I also don't want to void the warranty (if that voids it).
I did not disable 0-speed fan. Does that affect anything? I'm not sure what that would change.



NeeDforKill said:


> Question is bad for what? [...]
> About temps: I have the same 105-110C hotspot in some games like Outriders at 330-340W, and it could probably eat more. I have a Sapphire Nitro+ SE. It was the same on the reference Sapphire 6800 XT, which just sat at 105-107C hotspot all the time.
> 
> LoL, 105+ is normal temps for AMD GPUs - it's official information from AMD staff about those hotspot temps - that's why all reference cards sit at 105+ hotspot at 40% fan speed.
> Nvidia would probably show the same temps if it had a hotspot sensor.


Bad sample in general. Even if higher clocks only make a difference in some games, the "problem" is I can't increase clocks beyond 2575 even at max voltage, whether it's a benchmark or an actual game, whereas other people seem to reach higher clocks at lower voltage (just some random ones from this thread: #1,840, #1,903, #1,911). On top of that it _seems_ like everyone is increasing the power limit or using More Power Tool, when that's basically impossible for me because the temps are so high.
My drivers crash when I go above 110°C. Not always, but if it reaches that temp it usually crashes.

So, having little room for OC, little room for UV and high temps by default makes me wonder if I have a bad sample compared to others. The high temps probably bother me the most. There's so much hot air coming from my PC, it gets uncomfortable.

330-340W? Yeah, I can't feed my card that much power. 290W already gets me to 110°C hot spot in games and benchmarks at 2464 max clock, 1175mV and 100% fan speed.

It's just not what I was expecting. I've read in other forums about how people UV and OC so much and end up limited by the power limit... this is the complete opposite.



lestatdk said:


> I got 18979 with stock settings and OC'ed I get 20928. My average clock speed is 2526 MHz . So I don't understand how your overclocked setting did not improve the score at all ? Seems really strange. My card is not the best overclocker compared to the others in here, but it'll have to do


Crazy, I ran stock 3 times and got 18327, 18298 and 18280 (ignore CPU score in these). I just reran at 2575 and got 18267. In both cases I'm still using 500 min and no PL adjustment.
What are your temps like? What does it OC to?


----------



## Kir

Sorry, not ignoring messages, but my post, which I'm sure is perfectly fine, is awaiting moderator approval... hope this post doesn't get the same fate.


----------



## J7SC

Kir said:


> Sorry, not ignoring messages, but my post, which I'm sure is perfectly fine, is awaiting moderator approval... hope this post doesn't get the same fate.


..."while you wait", one simple test you can do is to leave the side panel off your BeQuiet case and re-run your tests, noting GPU temps, clocks and scores as you try different settings. Extra fresh air / external fan can also be useful. This is obviously not specific to 6900 XT, and just a way to eliminate a few variables from the equation, such as potential internal case airflow issues.


----------



## Godhand007

Kir said:


> Sorry, not ignoring messages, but my post, which I'm sure is perfectly fine, is awaiting moderator approval... hope this post doesn't get the same fate.


I would say ignore the negative nancies about OC and its benefits. It is sometimes difficult to find a stable OC for daily usage, but it is worth it if you are willing to put in the effort. I get about a 10% boost on average over stock with MPT and higher clocks. Of course, some games will scale better than others, but that's the same across different GPU models as well: a 3090 can do ~7% to ~15% better than a 3080 depending on the game. You are in an OC forum, so I am guessing you want to do more than just run the card at stock.


----------



## Kir

J7SC said:


> ..."while you wait", one simple test you can ...


Any specific settings I should test? It seems like it doesn't make a difference? At first I did a run with stock fans, but that seems impractical for comparisons; it ran into the temp limit either way.
The next two I did were simply at a constant 50% fan speed. No zero-fan mode. I wrote that in my hidden reply, but what did you mean by disabling it before runs to check the OC?










Full sheet



Godhand007 said:


> I would say ignore the negative nancies about OC and its benefits. It is sometimes difficult to find a stable OC for daily usage but it is worth it if you are willing to put in an effort. I get about a 10% boost on an average over stock with MPT and higher clocks. Of course, some games will scale better than another but that's the same with different GPU models as well. 3090 can do ~7% to ~15% better over a 3080 depending upon a game. You are in an OC forum, so I am guessing you want to do more than just put the card on stock.


Yeah, I was definitely looking forward to UV/OC, but as it stands, it seems I have little headroom in any direction. Increasing power quickly gets me into throttling temps. I was even considering liquid cooling, but with the clock already reaching a limit at around 2575MHz at 1175mV, it would seem a bit silly. At least I could get the temps down.

Edit: Added two more results to the picture.


----------



## J7SC

Kir said:


> Any specific settings I should test? It seems like it doesn't make a difference? At first I did a run stock fans, but that seems unpractical for comparisons, but it ran into the temp limit either way.
> Next two I did were simply at constant 50% fan speed. No zero fan mode. I wrote that in my hidden reply, *but what did you mean by disabling it before runs to check OC?*
> 
> View attachment 2514003
> 
> 
> Full sheet
> 
> (...)


...looks like you have a really nice, methodical approach per your spreadsheet above  ! Whenever I test out any GPU, i.e. with an open case (or side panel removed), I want to make sure that the 'starting positions' are the same...thus the 'zero speed' fan option is disabled, and fans are running at 100% (loud / annoying, I know). After each run, I let the GPU cool down to its minimum temp floor - this way, I get apples-to-apples comps. You can take that even further by rebooting the system every once in a while and waiting until Windows 10 has 'settled'. *EDIT:* As suggested before in this thread, it's a good idea to have the latest HWInfo open when testing, to see '_effective_' clocks, temps and power consumption. 

....also good to see that you're trying out undervolting per your spreadsheet; on my GPU, I went as low as 1.075V (from 1.175V) and some benches actually really like that...while it may also mean lower clocks, it leaves a bigger chunk of the overall power budget available for performance.

...you may have already seen this, but in case you haven't, it is quite helpful re. 6900XT reference cards
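[Editor's note] The cool-down-between-runs routine described above can be scripted so every run really does start from the same temperature floor. A sketch in Python, where `read_hotspot` and `launch_bench` are hypothetical stand-ins for a sensor read (e.g. parsed from HWiNFO logging) and a benchmark launch:

```python
import time

def wait_for_temp_floor(read_hotspot, floor_c=45.0, timeout_s=600, poll_s=5):
    """Block until the hotspot cools to the floor, so runs start equal."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_hotspot() <= floor_c:
            return True
        time.sleep(poll_s)
    return False  # never cooled down; runs would not be comparable

def run_series(launch_bench, read_hotspot, runs=7, floor_c=45.0):
    """Run the benchmark several times, cooling to the floor before each."""
    scores = []
    for _ in range(runs):
        wait_for_temp_floor(read_hotspot, floor_c)
        scores.append(launch_bench())  # each run starts from the same temps
    return scores
```

The floor value, timeout and run count are placeholders; the point is only that "let it cool to its minimum, then run again" is easy to automate once you have a sensor readout.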


----------



## J7SC

'double post '


----------



## Kir

J7SC said:


> ...looks like you have a really nice, methodical approach per your spreadsheet above  ! Whenever I test out any GPU, i.e. with an open case (or side panel removed), I want to make sure that the 'starting positions' are the same...thus the 'zero speed' fan option is disabled, and fans are running at 100% (loud / annoying, I know). After each run, I let the GPU cool down to its minimum temp floor - this way, I get apples-to-apples comps. You can take that even further by rebooting the system every once in a while and waiting until Windows 10 has 'settled'. *EDIT:* As suggested before in this thread, it's a good idea to have the latest HWInfo open when testing, to see '_effective_' clocks, temps and power consumption.
> 
> ....also good to see that you're trying out undervolting per your spreadsheet; on my GPU, I went as low as 1.075V (from 1.175V) and some benches actually really like that...while it may also mean lower clocks, it leaves a bigger chunk of the overall power budget available for performance.
> 
> ...you may have already seen this, but in case you haven't, it is quite helpful re. 6900XT reference cards


Thanks  Wanted to make sure I have a good overview.
Ah, I see, I was not considering the starting temp, but I'll keep that in mind.
And yeah, I'm using HWInfo for all my data.

Yeah, lowering the Voltage by 50mV gave me a score boost of ~400 points, but sadly 1125mV seems to be the lower limit.
I probably saw that video forever ago, but at that point it was mostly for entertainment, so I'll rewatch it, thanks!

Actually mid watching... these bench settings/results are insane... 115% PWR, 2550-2650 CLK Offset, Auto Fan 1692, 1050mv and all of this at 77/96 temps? I doubt it's just the specific benchmark (Port Royal). I'll have to try it out myself.
Even at stock the temps are 61/76??? Am I missing something? (at around 8:20)


----------



## lestatdk

Kir said:


> Any specific settings I should test? It seems like it doesn't make a difference? At first I did a run stock fans, but that seems unpractical for comparisons, but it ran into the temp limit either way.
> Next two I did were simply at constant 50% fan speed. No zero fan mode. I wrote that in my hidden reply, but what did you mean by disabling it before runs to check OC?
> 
> View attachment 2514009
> 
> 
> Full sheet
> 
> 
> 
> Yeah, I was definitely looking forward to UV/OC, but as it stands, it seems I have little headroom in any direction. Increasing power quickly gets me in throttling temps. I was even considering liquid cooling, but with the clock already reaching a limit at around [email protected], it would seem a bit silly. At least I could get the temps down.
> 
> Edit: Added two more results to the picture.


Your GPU temp looks a bit high. I get 58 degrees max, but my junction temp is similar to yours at 113 max.


----------



## Kir

lestatdk said:


> Your GPU temp looks a bit high. I get 58 degrees max . But my Junction temp is similar to yours at 113 max


Huh, even with an open case and 100% fan speed I can't get below 70°C. But 113 Junction at the same time... damn.
Is this with +15% power limit, auto fans and stock Voltage? Do you know your temps/score for that benchmark at all stock?
The temps and result in the Gamers Nexus video just seem so much better.


----------



## lestatdk

Kir said:


> Huh, even with an open case and 100% fan speed I can't get below 70°C. But 113 Junction at the same time... damn.
> Is this with +15% power limit, auto fans and stock Voltage? Do you know your temps/score for that benchmark at all stock?
> The temps and result in the Gamers Nexus video just seem so much better.


These are my numbers OC'ed. Let me try and do a run with stock settings


----------



## lestatdk

lestatdk said:


> These are my numbers OC'ed. Let me try and do a run with stock settings


With stock settings I get GPU max temp 69 degrees, junction max temp 102 degrees.


----------



## lestatdk

Also my GPU fans never really spun up at all during this run. Guess it was not high enough temp to get much out of the zero rpm zone


----------



## Kir

lestatdk said:


> With stock settings I get GPU max temp 69 degrees, junction max temp 102 degrees.
> Also my GPU fans never really spun up at all during this run. Guess it was not high enough temp to get much out of the zero rpm zone


Thanks a lot. Interesting. Mine definitely spin up to at least 1600rpm. Could I bother you for one more test? 😅 All stock again, but PL+15% and 100% fan (if you can), if not then auto. With resulting score.


----------



## lestatdk

Kir said:


> Thanks a lot. Interesting. Mine definitely spin up to at least 1600rpm. Could I bother you for one more test? 😅 All stock again, but PL+15% and 100% fan (if you can), if not then auto. With resulting score.






GPU temp max is 60 degrees, junction temp max is 111 degrees.

+12% power and fans at 100%


----------



## Kir

lestatdk said:


> View attachment 2514029
> 
> 
> 
> GPU temp max is 60 degrees, junction temp max is 111 degrees.
> 
> +12% power and fans at 100%


Thanks a lot!

Same settings, but PL+15% (unless that was a typo). Will redo with 12% and update post.










Nearly 1.5k GPU score difference  . Temps were 71/111°C

Edit: +12% power is exactly the same. Can't make use of the extra power since it is throttling from high temps anyway.


----------



## Kir

lestatdk said:


> ...


btw, what are your OC settings?


----------



## J7SC

lestatdk said:


> With stock settings I get GPU max temp 69 degrees, junction max temp 102 degrees.





Kir said:


> Thanks a lot. Interesting. Mine definitely spin up to at least 1600rpm. Could I bother you for one more test? 😅 All stock again, but PL+15% and 100% fan (if you can), if not then auto. With resulting score.


...per earlier post, this was with fans at 100% (loud!) for multiple 4K runs on the Gigabyte Gaming OC 6900XT...forget the nominal / effective clocks for a moment and just focus on temps up top...this was during colder weather and it is also an open testbench that has a 120mm fan helping to cool the mobo VRM and system RAM, and it hits the back of the GPU as well. If you have an extra fan laying around to cool the back of the card, it might be interesting to see what the deltas are, especially with an open side panel / good airflow...another delta to watch is the Hotspot compared to GPU temp. FYI, on my 3090 Strix OC, the EK water block has a machining issue and the Hotspot delta compared to GPU temp kept on rising - good thing I caught it as it was a sign that there was a mounting / TIM issue



Spoiler


----------



## lestatdk

Kir said:


> btw, what are your OC settings?


These are my wattman settings










and my MPT settings:


----------



## Kir

J7SC said:


> ...per earlier post, this was with fans at 100% (loud!) for multiple 4K runs on the Gigabyte Gaming OC 6900XT...forget the nominal / effective clocks for a moment and just focus on temps up top...this was during colder weather and it is also an open testbench that has a 120mm fan helping to cool the mobo VRM and system RAM, and it hits the back of the GPU as well. If you have an extra fan laying around to cool the back of the card, it might be interesting to see what the deltas are, especially with an open side panel / good airflow...another delta to watch is the Hotspot compared to GPU temp. FYI, on my 3090 Strix OC, the EK water block has a machining issue and the Hotspot delta compared to GPU temp kept on rising - good thing I caught it as it was a sign that there was a mounting / TIM issue
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2514034


I guess your custom card spins 1000rpm higher at 100% fan speed, but damn, 79°C hot spot is pretty good. I sadly don't have any extra fans, but running with the side panel off has so far made no difference.
As for the GPU temp vs. hot spot delta though... the delta does increase when e.g. raising the PL: 74/98 stock vs 77/113. A non-throttling run, for example, was 74/108. So the hot spot is definitely increasing more than the GPU temp. Not sure if significantly, though. If only remounting/repasting didn't void the warranty.


----------



## Godhand007

Kir said:


> I guess your Custom Card spins 1000rpm higher at 100% fan speed, but damn 79°C hot spot is pretty good. I sadly don't have any extra fans, but running with the side panel of has so far made no difference.
> Delta for GPU temp and hot spot temp though... the delta does increase when e.g. increasing PL. 74/98 stock vs 77/113. A non throttling run I had e.g. was 74/108. So hot spot definitely is increasing more than GPU Temp. Not sure if significantly though. If only remounting/pasting didn't void the warranty.


I would suggest one thing: just copy the settings from *lestatdk *for both MPT and Wattman _word for word _and see what you get. Also, there is a bug wherein reference RX 6900 XT cards' fan speed only goes up to 93% of the advertised max speed. I am running my card with the case open and it can reach max junction temps even at 100% fan speed, depending on the load.

One more thing to add here: don't get too impressed by what people might be posting on forums about their OCs. I have seen people touting/showing off their 2700 MHz-plus OCs after testing with some generic game that they play most of the time. Most reference or non-H chips can do around ~2600 MHz (actual clocks) or less with proper testing.
Refer to this liquid-cooled card: they would have pushed it a lot harder for that price if they could.


----------



## J7SC

Kir said:


> I guess your Custom Card spins 1000rpm higher at 100% fan speed, but damn 79°C hot spot is pretty good. I sadly don't have any extra fans, but running with the side panel of has so far made no difference.
> Delta for GPU temp and hot spot temp though... the delta does increase when e.g. increasing PL. 74/98 stock vs 77/113. A non throttling run I had e.g. was 74/108. So hot spot definitely is increasing more than GPU Temp. Not sure if significantly though. If only remounting/pasting didn't void the warranty.


...the price I pay for the good temps and related clocks is noise (for now, w-block is already here). This would not be bearable in a regular gaming setup . As to re-pasting, the 'right-to-repair' rules in the US are already pretty strong, and I thought Europe was not far behind. Depending how all that works out, you could consider a water-block down the line, especially if you bump PL up via MPT.


----------



## Kir

Ughhhh... again awaiting mod approval, just because I corrected a typo... so I'm just reposting this time.



Godhand007 said:


> I would suggest one thing. Just copy the settings from *lestatdk *for both MPT and Wattman _word to word _and see what you get. Also, there is a bug wherein for reference RX 6900 XT cards fan speed only goes up to 93% of advertised max speeds. I am running my card with the case open and it can reach max junction temps even on 100% fan speed depending upon the load.
> 
> One more thing to add here, don't get too impressed by what people might be posting on forums about their OCs. I have seen people touting/showing off their 2700 MHz plus OCs after testing with some generic game that they play most of the time. Most of the reference or non-H chips can do around ~2600 Mhz (actual clocks) or less with proper testing.
> Refer to this
> 
> 
> 
> Liquid cool card. They would have pushed it a lot harder for that price if they could.



Okay, I copied it exactly aaaaaand crash within the first few seconds. Then I increased the Voltage to 1175mV, but same thing happened. Then I additionally decreased max clock to 2575 which then ran. GPU Score was 17485... the lowest GPU Score I have yet to record in all of my runs haha. Also not sure why 2150 VRAM produces worse results. 2070 did achieve better results than stock though.

Also the set clocks don't actually do anything, because the effective clocks are always lower. That's why I get the same result when I just leave it at stock clocks.

But basically, his settings didn't do much, which I was thinking should/would happen. My card already overheats with just pure Wattman extra power. It draws 380 something Watts for a second, before hot spot reaches 110°C and then power drops hard.










Yeah, I'm not expecting to hit 2700 MHz, but I was hoping to be able to do a little more. It seems like all I can do is undervolt it slightly to 1130mV. Any kind of OC means 110°C+ temps and I don't want to constantly run at the limit.



J7SC said:


> ...the price I pay for the good temps and related clocks is noise (for now, w-block is already here). This would not be bearable in a regular gaming setup . As to re-pasting, the 'right-to-repair' rules in the US are already pretty strong, and I thought Europe was not far behind. Depending how all that works out, you could consider a water-block down the line, especially if you bump PL up via MPT.


Apparently it's not so clear-cut and varies from manufacturer to manufacturer, but you should always have a 2-year warranty, and remounting/repasting should be fine as long as they don't conclude that you caused the damage.


----------



## Godhand007

Kir said:


> Ughhhh... again awaiting mod approval, just because I corrected a typo... so I'm just reposting this time.
> 
> 
> 
> 
> Okay, I copied it exactly aaaaaand crash within the first few seconds. Then I increased the Voltage to 1175mV, but same thing happened. Then I additionally decreased max clock to 2575 which then ran. GPU Score was 17485... the lowest GPU Score I have yet to record in all of my runs haha. Also not sure why 2150 VRAM produces worse results. 2070 did achieve better results than stock though.
> 
> Also the set clocks don't actually do anything, because the effective clocks are always lower. That's why I get the same result when I just leave it at stock clocks.
> 
> But basically, his settings didn't do much, which I was thinking should/would happen. My card already overheats with just pure Wattman extra power. It draws 380 something Watts for a second, before hot spot reaches 110°C and then power drops hard.
> 
> View attachment 2514047
> 
> 
> Yeah, I'm not expecting to hit 2700 MHz, but I was hoping to to be able to do a little more. It seems like all I can do is undervolt it slightly to 1130mV. Any kind of OC means 110+ temps and I don't want to constantly run at the limit.


Hmm. Something does seem amiss. I am going to ask a few basic questions here, bear with me. What are your max fan speeds (in RPM)? Have you tried clicking on _Write SPPT_ in MPT after punching in the settings?
Your CPU score seems a bit low for a 5900X; is there something running in the background, or is the RAM not set to XMP? Have you tried running the benchmark with the Power Options set to _High Performance_ in Windows?


----------



## Thanh Nguyen

What kind of clock speeds can I get with a watercooled reference 6900XT, guys? Any modded BIOS, or just the stock BIOS with more power unlocked?


----------



## Kir

Godhand007 said:


> Hmm. Something does seem amiss. I am going to ask few basic questions here, bear with me. What are your max fan speeds (in RPM)? Have you tried clicking on _Write SPPT_ in MPT after punching in settings?
> Your CPU score seems a bit low for 5900x, is there something running in the backgroud, or is the RAM not set to XMP? Have you tried running the benchmark with Power Options selected to _High Performance_ in Windows?


Sure, thanks for the help. At 100% I get about 2930rpm. Yeah, I hit "Write SPPT" and restarted and it worked as far as I can tell, since the reported GPU PPT Limit in HWInfo was now something like 390W (with +12% from Wattman), which it also hits for a short moment before throttling because of high temps. All the MPT does is allow higher Wattage though, right? So considering no MPT and only Wattman +15% brings me to temps beyond 110°C already, giving it more power won't really do anything, right?

What CPU Score are you referring to? From the links in my spreadsheet or the 13660 in the picture last page? I ran all early tests with one CCD deactivated, because I was still waiting on my cooler. But I think 13660 is normal for a stock 5900x? Yup, XMP is active and I also have tried high performance.


----------



## Kir

Thanh Nguyen said:


> What kinda of clock speed I can get with a watercool reference 6900xt guys? Any mod bios or just stock bios to unlock more power?


You'll want to use the "More Power Tool" aka MPT to unlock more power.


----------



## J7SC

I tried @lestatdk 's MPT settings (330W/350A), then installed Timespy...haven't tried different clocks or undervolting yet after the MPT change, but it seems to work great, even though my card is a 3x8 pin. I did notice that GPUv readouts are a bit weird in HWInfo since the MPT change (GPU core V shows below Wattman, GPUz).

Apart from changing those two values in MPT, I haven't touched anything else (have to dig more into the MPT docs), but it's a decent start. When w-cooling is installed, I might push MPT to 360W/3xxA 

FYI, for this run, the 3950X was on bios default but with 32GB of RAM at IF1900 / DDR4 3800. Highest Timespy CPU score I have seen on this chip is 16381 (@ 4.450 Giggles all-core)


----------



## lestatdk

J7SC said:


> I tried @lestatdk 's MPT settings (330W/350A), then installed Timespy...haven't tried different clocks or undervolting yet after the MPT change, but it seems to work great, even though my card is a 3x8 pin. I did notice that GPUv readouts are a bit weird in HWInfo since the MPT change (GPU core V shows below Wattman, GPUz).
> 
> Apart from changing those two values in MPT, I haven't touched anything else (have to dig more into the MPT docs), but it's a decent start. When w-cooling is installed, I might push MPT to 360W/3xxA
> 
> FYI, for this run, the 3950X was on bios default but with 32GB of RAM at IF1900 / DDR4 3800. Highest Timespy CPU score I have seen on this chip is 16381 (@ 4.450 Giggles all-core)
> 
> View attachment 2514055


That's a nice score there. It looks weird how your CPU clock is fluctuating so much. But the score is good


----------



## lestatdk

Kir said:


> Ughhhh... again awaiting mod approval, just because I corrected a typo... so I'm just reposting this time.
> 
> 
> 
> 
> Okay, I copied it exactly aaaaaand crash within the first few seconds. Then I increased the Voltage to 1175mV, but same thing happened. Then I additionally decreased max clock to 2575 which then ran. GPU Score was 17485... the lowest GPU Score I have yet to record in all of my runs haha. Also not sure why 2150 VRAM produces worse results. 2070 did achieve better results than stock though.
> 
> Also the set clocks don't actually do anything, because the effective clocks are always lower. That's why I get the same result when I just leave it at stock clocks.
> 
> But basically, his settings didn't do much, which I was thinking should/would happen. My card already overheats with just pure Wattman extra power. It draws 380 something Watts for a second, before hot spot reaches 110°C and then power drops hard.
> 
> View attachment 2514047
> 
> 
> Yeah, I'm not expecting to hit 2700 MHz, but I was hoping to to be able to do a little more. It seems like all I can do is undervolt it slightly to 1130mV. Any kind of OC means 110+ temps and I don't want to constantly run at the limit.
> 
> 
> Apparently it's not so clear and varies from manufacturer to manufacturer, but you should always have 2 year warranty and if you remount/repaste it should be fine as long as they don't come to the conclusion that you caused the damage.


It looks like your memory does not want to OC at all . Better to leave it close to the default.


----------



## Kir

J7SC said:


> I tried @lestatdk 's MPT settings (330W/350A), then installed Timespy...haven't tried different clocks or undervolting yet after the MPT change, but it seems to work great, even though my card is a 3x8 pin. I did notice that GPUv readouts are a bit weird in HWInfo since the MPT change (GPU core V shows below Wattman, GPUz).
> 
> Apart from changing those two values in MPT, I haven't touched anything else (have to dig more into the MPT docs), but it's a decent start. When w-cooling is installed, I might push MPT to 360W/3xxA
> 
> FYI, for this run, the 3950X was on bios default but with 32GB of RAM at IF1900 / DDR4 3800. Highest Timespy CPU score I have seen on this chip is 16381 (@ 4.450 Giggles all-core)
> 
> View attachment 2514055


That's really nice.
I don't know too much about MPT yet either, but what I've read multiple times is that you really only want to touch those two values you already changed.



lestatdk said:


> It looks like your memory does not want to OC at all . Better to leave it close to the default.


True. Better have it close to default, just like everything else 😅
I guess this brings me back to my initial question and so far enforcing my belief of having a below average binned 6900 XT. Maybe it's because I have a reference card. Or maybe just bad luck 🤷‍♂️
I did only pay MSRP, so I guess I should be happy for even getting one and running default/stock settings, but I'm not. I want more lol


----------



## lestatdk

Kir said:


> That's really nice.
> I don't know too much about MPT yet either, but what I've read multiple times is that you really only want to touch those two values you already changed.
> 
> 
> 
> True. Better have it close to default, just like everything else 😅
> I guess this brings me back to my initial question and so far enforcing my belief of having a below average binned 6900 XT. Maybe it's because I have a reference card. Or maybe just bad luck 🤷‍♂️
> I did only pay MSRP, so I guess I should be happy for even getting one and running default/stock settings, but I'm not. I want more lol


Usually reference cards perform very well. I just think you've been unlucky in the silicon draw 

One good thing about reference cards is that there are plenty of options for water-cooling blocks


----------



## weleh

If you want a non-H chip that does well, you need to pick cards that have strong factory overclocks, because that's a pretty good indicator of the quality of the silicon.
For instance, my Toxic comes with a boost clock of 2660 MHz and a 2100 MHz memory OC (16.8 Gbps). This is guaranteed, so I already knew the card was higher than average. Still, there are better cards here that aren't H chips, but that's rare.

In fact, I've seen only 1 bench here in Time Spy higher than my Toxic non-H chip (22,200 Graphics Score).

Being on water helps a ton too, because the hotspot temp is nowhere near the throttle limit.

Though I have seen a case like yours, where the card barely beat 3080s on synthetics. All in all, I wouldn't stress too much about synthetics, because the difference between 2500 and 2700 is like 1 to 3 frames most of the time.


----------



## ilmazzo

Guys, asking for a friend....

Which is the average lowest voltage level in idle if there is any? He wants to go lower than what wattman is letting him to go down on the core voltage but I wonder if there is a hard lower limit from where you can't go lower otherwise driver will just skip the user MPT setting and load the default ones or just crashes...

thanks in advance!


----------



## J7SC

lestatdk said:


> That's a nice score there. It *looks weird how your CPU clock is fluctuating so much*. But the score is good


...tx. That CPU fluctuation is normal for 16-core AM4 (my 5950X does that also). It's when they're left on auto, and the 3950X will boost past 4750 until it gets reined in by the GHz police. Below is the same default GPU setting as before, but with the 3950X on all-core 4400; that's all this little AIO can take....nice and smooth  ...once CPU and GPU are properly water-cooled, it's time to push this setup more.

...on the 6900XT w/ MPT 330W/350A, the voltage readout and maybe the voltage control are definitely behaving a bit weirdly - is that normal after MPT changes? I'm wondering if it relates to the Giga 3x8-pin arrangement and a VRM that's different from the reference card













Kir said:


> That's really nice.
> I don't know too much about MPT yet either, but what I've read multiple times is that you really only want to touch those two values you already changed.
> 
> True. Better have it close to default, just like everything else 😅
> I guess this brings me back to my initial question and so far enforcing my belief of having a below average binned 6900 XT. Maybe it's because I have a reference card. Or maybe just bad luck 🤷‍♂️
> I did only pay MSRP, so I guess I should be happy for even getting one and running default/stock settings, but I'm not. I want more lol


...at the end of the day, the clock differences will not be noticeable in every-day apps. That said, what gives me pause is the huge delta between overall GPU temp and Hotspot temp you're seeing. On my card, the delta is rarely more than 20 C, often less.

...I've had four new IDENTICAL GPUs before purchased at the same time (incl. for workstations) and when taking them apart for w-cooling, some of them had some serious TIM imprint issues, others had a slightly crooked mount of the cooler, and/or some select loose screws. I realize that taking apart a brand-new (and hard-to-replace) 6900XT is not high on your agenda re. warranty etc., but have you considered emailing the EU HQ of your vendor and ask if you can re-paste / re-pad, given the issues you're seeing ? I've done that before and as long as you're straight-up with them ("no, I have no idea how the flux and shunt mods got on there") they tend to be helpful more often than not.

With Covid-19 impacts, IMO, I see more quality issues in various assembled computer components
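The "watch the hotspot-vs-edge delta over time" tip above can be automated against a HWInfo CSV log instead of eyeballing it. A rough sketch; the column names here are assumptions and vary between HWInfo versions, so adjust them to whatever your export actually contains:

```python
import csv

def flag_hotspot_delta(csv_path,
                       edge_col="GPU Temperature [°C]",
                       hotspot_col="GPU Hot Spot Temperature [°C]",
                       max_delta_c=25.0):
    """Scan a HWInfo-style CSV log and return (row_index, delta) pairs where
    the hotspot-to-edge delta exceeds `max_delta_c`. A delta that keeps
    growing at the same settings can hint at a mounting/TIM issue.
    Column-name defaults are assumptions; match them to your log."""
    flagged = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for i, row in enumerate(csv.DictReader(f)):
            delta = float(row[hotspot_col]) - float(row[edge_col])
            if delta > max_delta_c:
                flagged.append((i, delta))
    return flagged
```

The interesting signal is the trend across sessions, not any single spike: a delta that creeps upward run after run is the pattern worth investigating.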


----------



## weleh

ilmazzo said:


> Guys, asking for a friend....
> 
> Which is the average lowest voltage level in idle if there is any? He wants to go lower than what wattman is letting him to go down on the core voltage but I wonder if there is a hard lower limit from where you can't go lower otherwise driver will just skip the user MPT setting and load the default ones or just crashes...
> 
> thanks in advance!


Minimum voltage on 6900XT is 0.7V


----------



## lestatdk

J7SC said:


> ...tx. That CPU fluctuation is normal for 16-core AM4 (my 5950X does that also). It's when they're left on auto and the 3950X will boost past 4750 until it gets reigned in by the GHz police. Below is the same default GPU setting as before but with the 3950X on all-core 4400; that's all this little AIO can take....nice and smooth  ...once CPU and GPU are properly water-cooled, it's time to push this setup more.
> 
> ...on the 6900XT w/ MPT 330W/350A, definitely voltage readout and may be voltage control are behaving a bit weirdly - is that normal after MPT changes ? I'm wondering if it relates to the Giga 3x8 pin arrangement and a VRM that's different from the reference card
> 
> 
> 
> 
> 
> 
> ...at the end of the day, the clock differences will not be noticeable in every-day apps. That said, what gives me pause is the huge delta between overall GPU temp and Hotspot temp you're seeing. On my card, the delta is rarely more than 20 C, often less.
> 
> ...I've had four new IDENTICAL GPUs before purchased at the same time (incl. for workstations) and when taking them apart for w-cooling, some of them had some serious TIM imprint issues, others had a slightly crooked mount of the cooler, and/or some select loose screws. I realize that taking apart a brand-new (and hard-to-replace) 6900XT is not high on your agenda re. warranty etc., but have you considered emailing the EU HQ of your vendor and ask if you can re-paste / re-pad, given the issues you're seeing ? I've done that before and as long as you're straight-up with them ("no, I have no idea how the flux and shunt mods got on there") they tend to be helpful more often than not.
> 
> With Covid-19 impacts, IMO, I see more quality issues in various assembled computer components


This is my run. I'm not seeing those CPU fluctuations at all.


----------



## jonRock1992

Kir said:


> That's really nice.
> I don't know too much about MPT yet either, but what I've read multiple times is that you really only want to touch those two values you already changed.
> 
> 
> 
> True. Better have it close to default, just like everything else 😅
> I guess this brings me back to my initial question and so far enforcing my belief of having a below average binned 6900 XT. Maybe it's because I have a reference card. Or maybe just bad luck 🤷‍♂️
> I did only pay MSRP, so I guess I should be happy for even getting one and running default/stock settings, but I'm not. I want more lol


I'd rather have a below average one at MSRP than the $2300 Red Devil Ultimate that I got. Mine constantly thermal throttles. So I just spent another $300 for a custom water loop for this card.


----------



## lestatdk

jonRock1992 said:


> I'd rather have a below average one at MSRP than the $2300 Red Devil Ultimate that I got. Mine constantly thermal throttles. So I just spent another $300 for a custom water loop for this card.


I can't even get a water block for my card


----------



## Kir

lestatdk said:


> Usually reference cards perform very well. I just think you've been unlucky in the silicon draw
> 
> One good thing about reference cards is there's plenty of options for water cooling blocks


Yeah, just not sure if it's worth it with this specific one. I don't know anymore lol


J7SC said:


> ...at the end of the day, the clock differences will not be noticeable in every-day apps. That said, what gives me pause is the huge delta between overall GPU temp and Hotspot temp you're seeing. On my card, the delta is rarely more than 20 C, often less.
> 
> ...I've had four new IDENTICAL GPUs before purchased at the same time (incl. for workstations) and when taking them apart for w-cooling, some of them had some serious TIM imprint issues, others had a slightly crooked mount of the cooler, and/or some select loose screws. I realize that taking apart a brand-new (and hard-to-replace) 6900XT is not high on your agenda re. warranty etc., but have you considered emailing the EU HQ of your vendor and ask if you can re-paste / re-pad, given the issues you're seeing ? I've done that before and as long as you're straight-up with them ("no, I have no idea how the flux and shunt mods got on there") they tend to be helpful more often than not.
> 
> With Covid-19 impacts, IMO, I see more quality issues in various assembled computer components


Yeah, true. My settings that give me a 1000-point higher score in Time Spy lead to about a 3-5% FPS increase in the few games I tested. Undervolting drops the temps so much, though. In Assassin's Creed Origins I can do 1110mV, which drops the junction temp from 100°C to about 92°C at 1800rpm. Yeah, contacting them seems like a good idea.
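As a rough intuition for why a small undervolt moves temps that much: at a fixed clock, dynamic power scales roughly with voltage squared. This ignores leakage and the card's actual power model, so it's a ballpark only:

```python
def dynamic_power_ratio(v_new: float, v_old: float) -> float:
    """Rough CMOS dynamic-power scaling at a fixed clock: P ~ V^2.
    Leakage/static power is ignored, so treat the result as a ballpark."""
    return (v_new / v_old) ** 2

# e.g. dropping from 1.175 V to 1.110 V at the same clock:
# (1.110 / 1.175) ** 2 ≈ 0.89, i.e. roughly an 11% cut in dynamic power
```

A ~10% power cut on a ~300 W card is tens of watts the cooler no longer has to move, which is why a 65 mV undervolt can shave several degrees off the junction temp.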



weleh said:


> If you want a non H chip that does well you need to pick cards that have strong factory overclocks because that's a pretty good indicative of the quality of the silicon.
> For instance, my Toxic comes with a boost clock of 2660 Mhz and 2100 Mhz memory OC (16.8 Gbps). This is guaranteed so I already knew the card was higher than average but still, there are better cards here that aren't H chips but that's rare.
> 
> In fact, I've seen only 1 bench here in Time Spy higher than my Toxic non H chip (22.200 Graphic Score).
> 
> Being on water helps a ton too because hotspot temp is nowhere near throttle limit.
> 
> Though, I have seen a case like yours, where the card barely beat 3080's on synthetics. All in all I wouldn't stress too much about synthethics because the difference between 2500 and 2700 is like 1 to 3 frames most of the time??


Yeah, I'm probably being crazy considering returning the card to get an aftermarket one. It really irks me that I'm just limited in every regard. But it's true, the FPS difference is not that big in actual games, so I'm just being silly.



jonRock1992 said:


> I'd rather have a below average one at MSRP than the $2300 Red Devil Ultimate that I got. Mine constantly thermal throttles. So I just spent another $300 for a custom water loop for this card.


Isn't that a really highly binned version? If it actually throttles at stock, I'd think that's a good enough reason to RMA, not?


----------



## J7SC

lestatdk said:


> This is my run. I'm not seeing those CPU fluctuations at all.
> 
> View attachment 2514080


The AMD 16-cores with dual physical chiplets are binned differently and also have extra boost steps (> heat of 16c/32t). Below is a quick run I did with the 5950X/3090 on the other system, it too squiggles, though not as much as the 3950X does, because the former has a custom loop for the CPU...as shown before, if I lock all-core, it becomes smooth...the 3950X/6900XT just needs to get the full custom loop-treatment installed (all parts are here)












lestatdk said:


> I can't even get a water block for my card



Have you checked Bykski? They seem to have a lot of different 6900 XT blocks for both reference and even rarer custom PCBs...below is the one for my Gigab. Gaming OC, waiting for my attention...


----------



## Kir

J7SC said:


> The AMD 16-cores with dual physical chiplets are binned differently and also have extra boost steps (> heat of 16c/32t). Below is a quick run I did with the 5950X/3090 on the other system, it too squiggles, though not as much as the 3950X does, because the former has a custom loop for the CPU...as shown before, if I lock all-core, it becomes smooth...the 3950X/6900XT just needs to get the full custom loop-treatment installed (all parts are here)
> 
> View attachment 2514095
> 
> 
> 
> 
> 
> Have you checked Bykski? They seem to have a lot of different 6900 XT blocks for both reference and even rarer custom PCBs...below is the one for my Gigabyte Gaming OC, waiting for my attention...
> 
> View attachment 2514096


Btw, mine looks similar to your 5950x. Fluctuating a bit more


----------



## J7SC

Kir said:


> Btw, mine looks similar to your 5950x. Fluctuating a bit more
> 
> View attachment 2514097


...yeah, typical dual chiplet stuff - I bet if you lock your CPU at all-core, it will smooth out. 

BTW, the temp graph for your GPU is 'telling'...I don't think you actually have a bad sample, just a cooling issue (possibly related to mounting, TIM etc)


----------



## Kir

J7SC said:


> ...yeah, typical dual chiplet stuff - I bet if you lock your CPU at all-core, it will smooth out.
> 
> BTW, the temp graph for your GPU is 'telling'...I don't think you actually have a bad sample, just a cooling issue (possibly related to mounting, TIM etc)


Where do you see the temp graph 

I just redid a run with 100% fans, 2460 max clk (effective clock was actually between 2200 and 2350 though), 1150mV and +10%PL. Max hot spot was 110°C. Mostly hovering right under it.










Let me know if you want me to try any specific settings


----------



## J7SC

Kir said:


> Where do you see the temp graph
> 
> I just redid a run with 100% fans, 2460 max clk (effective clock was actually between 2200 and 2350 though), 1150mV and +10%PL. Max hot spot was 110°C. Mostly hovering right under it.
> 
> View attachment 2514100
> 
> 
> Let me know if you want me to try any specific settings


...sorry, I misread one of the colours at 4K - however, I had checked back w/ your table that had GPU score at 19155, GPU temp at 72 and hot spot at 113. Your clocks were OK there, but the delta of 41°C between GPU temp and hot spot looked odd. My first TS run last night had GPU max temp at 60 and hot spot at 81, later runs w/ MPT 71 and 93 respectively
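As a rough sanity check, the edge-to-hotspot comparison above can be put into a tiny sketch; the ~30°C threshold is my own assumption based on the deltas quoted in this thread, not an AMD specification:

```python
# Hedged sketch: flag a suspicious GPU-edge-to-hotspot delta from logged
# sensor readings. The 30 C "healthy" threshold is an assumption drawn
# from the deltas discussed in this thread (60/81, 71/93), not a spec.
def hotspot_delta_ok(edge_temp_c: float, hotspot_temp_c: float,
                     max_delta_c: float = 30.0) -> bool:
    """Return True if the edge-to-hotspot delta looks healthy."""
    return (hotspot_temp_c - edge_temp_c) <= max_delta_c

# Readings discussed above:
print(hotspot_delta_ok(60, 81))   # True  -> 21 C delta, looks fine
print(hotspot_delta_ok(72, 113))  # False -> 41 C delta, check mount/TIM
```

On the numbers in this exchange, 72/113 trips the check while 60/81 does not, which fits a mounting/TIM issue rather than a bad die.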


----------



## Kir

J7SC said:


> ...sorry, I misread one of the colours at 4K - however, I had checked back w/ your table that had GPU score at 19155, GPU temp at 72 and hot spot at 113. Your clocks were OK there, but the delta of 41°C between GPU temp and hot spot looked odd. My first TS run last night had GPU max temp at 60 and hot spot at 81, later runs w/ MPT 71 and 93 respectively


Yeah, my deltas seem off. 60 and 81... impressive.


----------



## gtz

Kir said:


> Where do you see the temp graph
> 
> I just redid a run with 100% fans, 2460 max clk (effective clock was actually between 2200 and 2350 though), 1150mV and +10%PL. Max hot spot was 110°C. Mostly hovering right under it.
> 
> View attachment 2514100
> 
> 
> Let me know if you want me to try any specific settings


I have been reading your past few posts. I agree with the others not to compare scores too much; however, your card seems to be running hot and, as a result, underperforming. Maybe the graphite pad over the core is misaligned. Your card is performing worse than my 6800 XT.

Below is what I get in timespy.










AMD Radeon RX 6800 XT video card benchmark result - Intel Core i9-9980XE Processor,ASRock X299 Taichi CLX (3dmark.com) 

Might want to check to see if the heatsink is making appropriate contact with the card.


----------



## J7SC

Kir said:


> Yeah, my deltas seem off. 60 and 81... impressive.


...while you try to figure out what to do about the temp issue, have a look at the ATI Tool > here ...it is 'ancient', from when ATI was not yet part of AMD, but it lets you evaluate your clocks (GPU, VRAM) via its 'scan for artifacts' function while putting only a very light load on the card, so high temps don't muddy the picture. The ATI Tool is not much use for anything else anymore, but if you want to gauge the quality of your chip and VRAM without cooling getting in the way, it should help


----------



## Kir

gtz said:


> I have been reading your past few posts. I agree with the others not to compare scores too much; however, your card seems to be running hot and, as a result, underperforming. Maybe the graphite pad over the core is misaligned. Your card is performing worse than my 6800 XT.
> 
> Below is what I get in timespy.
> 
> View attachment 2514103
> 
> 
> AMD Radeon RX 6800 XT video card benchmark result - Intel Core i9-9980XE Processor,ASRock X299 Taichi CLX (3dmark.com)
> 
> Might want to check to see if the heatsink is making appropriate contact with the card.


Thanks for the input. Yeah, that GPU score is currently unreachable for me. I will contact support to see if it's okay for me to open it up. I'm guessing this is not reason enough for them to replace it.



J7SC said:


> ...while you try to figure out what to do about the temp issue, have a look at the ATI Tool > here ...it is 'ancient', from when ATI was not yet part of AMD, but it lets you evaluate your clocks (GPU, VRAM) via its 'scan for artifacts' function while putting only a very light load on the card, so high temps don't muddy the picture. The ATI Tool is not much use for anything else anymore, but if you want to gauge the quality of your chip and VRAM without cooling getting in the way, it should help


Nice, I'll check it out.


----------



## CS9K

Kir said:


> Isn't that a really highly binned version? If it actually throttles at stock, I'd think that's a good enough reason to RMA, not?


Bin quality is *NEVER* a reason to RMA, ultimate edition or no. That is abusing the system, and drives costs up for all of us if they do approve the RMA.



jonRock1992 said:


> _I'd rather have a below average one at MSRP_ than the $2300 Red Devil Ultimate that I got. Mine constantly thermal throttles. So I just spent another $300 for a custom water loop for this card.


This is what I've been telling people that see the AIB RX 6900 XT's sitting on the shelves at Micro Center. AIB pricing is out of control right now, and let the AIB RDNA2 cards rot on the shelves for all I care, the markup they're asking is NOT worth it.


----------



## Kir

CS9K said:


> Bin quality is *NEVER* a reason to RMA, ultimate edition or no. That is abusing the system, and drives costs up for all of us if they do approve the RMA.


The RMA was in reference to his card throttling at stock, not binning. I was just surprised that such a premium card isn't perfect.


----------



## Kir

J7SC said:


> ...while you try to figure out what to do about the temp issue, have a look at the ATI Tool > here ...it is 'ancient', from when ATI was not yet part of AMD, but it lets you evaluate your clocks (GPU, VRAM) via its 'scan for artifacts' function while putting only a very light load on the card, so high temps don't muddy the picture. The ATI Tool is not much use for anything else anymore, but if you want to gauge the quality of your chip and VRAM without cooling getting in the way, it should help


Hmm, I can set the clock all the way up to around 2630 min - 2730 max (I need to set the min clock, otherwise it stays low), but the effective clock is always just 1000 MHz. Does that mean anything? My impression is that since the effective clock stays at 1000 MHz, the setting might not really mean much? Idk


----------



## Thanh Nguyen

Guys, I have a new 6900 XT reference card and an Alphacool waterblock. Does it stand a chance head to head with a 3090 or 3080 Ti? If not, I may sell it and get another card.


----------



## 99belle99

Here is my reference 6900 XT Timespy run.









I scored 18 852 in Time Spy


AMD Ryzen 7 3700X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## gtz

Thanh Nguyen said:


> Guys, I have a new 6900 XT reference card and an Alphacool waterblock. Does it stand a chance head to head with a 3090 or 3080 Ti? If not, I may sell it and get another card.


In raw power the 6800 XT/6900 XT are impressive cards. Nvidia excels in ray tracing and DLSS. I don't own a 3080 Ti or 3090, but my 6800 XT is on average 20% faster than my 3070.


----------



## Godhand007

Kir said:


> Sure, thanks for the help. At 100% I get about 2930rpm. Yeah, I hit "Write SPPT" and restarted and it worked as far as I can tell, since the reported GPU PPT Limit in HWInfo was now something like 390W (with +12% from Wattman), which it also hits for a short moment before throttling because of high temps. All the MPT does is allow higher Wattage though, right? So considering no MPT and only Wattman +15% brings me to temps beyond 110°C already, giving it more power won't really do anything, right?
> 
> What CPU Score are you referring to? From the links in my spreadsheet or the 13660 in the picture last page? I ran all early tests with one CCD deactivated, because I was still waiting on my cooler. But I think 13660 is normal for a stock 5900x? Yup, XMP is active and I also have tried high performance.


Like others have pointed out, it could be a case of bad _thermals_ due to improper heatsink mounting, etc., or just a bad bin. As a final test, you could try it in another system (if you have a spare one)?
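As an aside on the numbers quoted above (a ~390 W PPT limit after +12% in Wattman), the power-limit slider is just a percentage on top of the PPT written with MPT. A minimal sketch, assuming a base PPT of roughly 348 W (inferred from those two figures, not a documented value):

```python
# Hedged sketch: Wattman's power-limit slider scales the board PPT that
# MPT writes into the SPPT. The 348 W base here is an assumption inferred
# from the ~390 W @ +12% figure quoted above, not a verified BIOS value.
def effective_ppt(mpt_ppt_w: float, wattman_offset_pct: float) -> float:
    """PPT (W) the card is actually allowed after the Wattman offset."""
    return mpt_ppt_w * (1 + wattman_offset_pct / 100)

print(round(effective_ppt(348, 12)))  # -> 390, matching the quoted limit
```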


----------



## Godhand007

weleh said:


> If you want a non H chip that does well you need to pick cards that have strong factory overclocks because that's a pretty good indicative of the quality of the silicon.
> For instance, my Toxic comes with a boost clock of 2660 Mhz and 2100 Mhz memory OC (16.8 Gbps). This is guaranteed so I already knew the card was higher than average but still, there are better cards here that aren't H chips but that's rare.
> 
> In fact, I've seen only 1 bench here in Time Spy higher than my Toxic non H chip (22.200 Graphic Score).
> 
> Being on water helps a ton too because hotspot temp is nowhere near throttle limit.
> 
> Though, I have seen a case like yours, where the card barely beat 3080s on synthetics. All in all I wouldn't stress too much about synthetics, because the difference between 2500 and 2700 is like 1 to 3 frames most of the time.


I have my reference card stable in everything, including all Time Spy stress tests, at the same frequency as yours. However, the Time Spy GT2 test (looping enabled) sometimes fails at these settings, while at other times it runs for more than 40 loops without any issue at the same settings. Also, the failure is a generic "an error occurred". Can I request one thing? Could you run the Time Spy GT2 test on loop on your GPU and share your results?


----------



## CantingSoup

Got the 6900XT Phantom Gaming and bykski waterblock on the way. Hope I get a good sample 🙂


----------



## knightriot

My new boy !!!!


----------



## CantingSoup

knightriot said:


> View attachment 2514123
> 
> My new boy !!!!


Which model is that? TUF? I know it says Strix, but I thought they have 3 pcie connectors.


----------



## lestatdk

J7SC said:


> The AMD 16-cores with dual physical chiplets are binned differently and also have extra boost steps (> heat of 16c/32t). Below is a quick run I did with the 5950X/3090 on the other system, it too squiggles, though not as much as the 3950X does, because the former has a custom loop for the CPU...as shown before, if I lock all-core, it becomes smooth...the 3950X/6900XT just needs to get the full custom loop-treatment installed (all parts are here)
> 
> 
> 
> 
> 
> 
> Have you checked Bykski ? They seem to have a lot of different 6090 XT blocks for both reference and even rarer custom PCBs...below is the one for my Gigab. Gaming OC, waiting for my attention...


They don't make one for my card (yet) . It's an MSI 6900XT Gaming X Trio


----------



## knightriot

CantingSoup said:


> Which model is that? TUF? I know it says Strix, but I thought they have 3 pcie connectors.


6900 tuf oc my friend 😁


----------



## jonRock1992

Kir said:


> Isn't that a really highly binned version? If it actually throttles at stock, I'd think that's a good enough reason to RMA, not?


It doesn't throttle at stock, but it gets really toasty; it only throttles when I start overclocking. This is an overclocking card though. It runs at 2700 MHz at 110°C junction and 99% utilization with ray tracing, so imagine what it can do under water.


----------



## Kir

jonRock1992 said:


> It doesn't throttle at stock, but it gets really toasty; it only throttles when I start overclocking. This is an overclocking card though. It runs at 2700 MHz at 110°C junction and 99% utilization with ray tracing, so imagine what it can do under water.


Oh, I see. That's pretty sweet. I'm sure you'll have lots of fun with it.


----------



## chispy

Well, after a horrible experience with Alphacool taking payment for a water block (Red Devil 6900 XT FC block) that they don't even know when they can stock and ship, changing delivery dates every week with no real shipping date for 5 months now, plus customer service not answering emails, I decided to cancel my preorder with them. 0_o Just a heads up: stay away from Alphacool's shipping promises, as they will take your money and make you wait month after month... put infinite here...

I ordered a Bykski water block instead and it arrived in 6 days from FormulaMod. I'm very impressed with the quality of the Red Devil RX 6900 XT block from Bykski: beautiful machining with no smudges (ahem, EK blocks lol) or visible tooling marks on the nickel-plated block, just 100% perfect without imperfections. It came with a very thick backplate, RGB, an RGB controller, a good-quality small screwdriver, extra screws and a lot of extra thermal pads. Absolutely loving it!

This is my third Bykski block for this generation of GPUs (RTX 3090, RTX 3070 and now RX 6900 XT) and they have raised the bar so high that EK and Alphacool look like the low-cost cheap options, when in reality this Bykski block is a lot cheaper than both; EK and Alphacool are asking way too much money for what they offer. I will post temperature testing later on my now water-cooled Red Devil RX 6900 XTX-H Ultimate. Here is a picture of the water block just after installing it (I forgot to peel off the plastic shrink wrap covering the top of the block, so it may look wavy, dusty or imperfect, but that's just the protective plastic).









Bykski RX 6900XT GPU Water Block For Powercolor RX 6900XT 6800XT Red Devil / Red Dragon , Graphic Card Cooler A-PC6900XT-X


Buy Bykski RX 6900XT GPU Water Block For Powercolor RX 6900XT 6800XT Red Devil / Red Dragon , Graphic Card Cooler A-PC6900XT-X and FormulaMod, Bykski, GPU Water Block, WaterCooling, etc.




www.formulamod.com






Installation guide and support page at Bykski - Bykski A-PC6900XT-X GPU BLOCK PowerColor RX6900/6800 XT Red Devil


----------



## MickeyPadge

Alphacool do seem to lack any kind of customer service skills! Is that block sealed OK? There seems to be something (liquid) on the wrong side of the gasket, pressed between the plexi?

On another note, I had a brain wave regarding my ASRock Phantom OC and Corsair block. I was never really happy with the temps, they always seemed high compared to others, so I did two things.

Firstly, I remounted the block using liquid metal (I know lol) but carefully protected the areas around the GPU.

Secondly, before remounting, I pressed down on all the (pre installed) thermal pads, flattening them, I had a hunch they were too thick and may be causing a low pressure mount.

Lo and behold, my temps dropped massively! Under load I haven't seen over 45/65°C (core/junction, Time Spy Extreme/Port Royal), and it's the hottest day of the year!

Anyone using the corsair block, flatten those pads! (and if crazy like me, use liquid metal)

Cheers!


----------



## jonRock1992

chispy said:


> Well, after a horrible experience with Alphacool taking payment for a water block (Red Devil 6900 XT FC block) that they don't even know when they can stock and ship, changing delivery dates every week with no real shipping date for 5 months now, plus customer service not answering emails, I decided to cancel my preorder with them. 0_o Just a heads up: stay away from Alphacool's shipping promises, as they will take your money and make you wait month after month... put infinite here...
> 
> I ordered a Bykski water block instead and it arrived in 6 days from FormulaMod. I'm very impressed with the quality of the Red Devil RX 6900 XT block from Bykski: beautiful machining with no smudges (ahem, EK blocks lol) or visible tooling marks on the nickel-plated block, just 100% perfect without imperfections. It came with a very thick backplate, RGB, an RGB controller, a good-quality small screwdriver, extra screws and a lot of extra thermal pads. Absolutely loving it!
> 
> This is my third Bykski block for this generation of GPUs (RTX 3090, RTX 3070 and now RX 6900 XT) and they have raised the bar so high that EK and Alphacool look like the low-cost cheap options, when in reality this Bykski block is a lot cheaper than both; EK and Alphacool are asking way too much money for what they offer. I will post temperature testing later on my now water-cooled Red Devil RX 6900 XTX-H Ultimate. Here is a picture of the water block just after installing it (I forgot to peel off the plastic shrink wrap covering the top of the block, so it may look wavy, dusty or imperfect, but that's just the protective plastic).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Bykski RX 6900XT GPU Water Block For Powercolor RX 6900XT 6800XT Red Devil / Red Dragon , Graphic Card Cooler A-PC6900XT-X
> 
> 
> Buy Bykski RX 6900XT GPU Water Block For Powercolor RX 6900XT 6800XT Red Devil / Red Dragon , Graphic Card Cooler A-PC6900XT-X and FormulaMod, Bykski, GPU Water Block, WaterCooling, etc.
> 
> 
> 
> 
> www.formulamod.com
> 
> 
> 
> 
> 
> 
> Installation guide and support page at Byski - Bykski A-PC6900XT-X GPU BLOCK PowerColor RX6900/6800 XT Red Devil
> 
> 
> 
> View attachment 2514141


That's the same one I ordered, also from formula mod. Mine hasn't arrived yet, but I'm pumped (pun intended) to get it all set up.


----------



## weleh

Anyone soldered an Elmor EVC controller to these cards? 
I'm thinking about buying one and doing the mod on my Toxic, since these cards have a ****ton of vcore droop under load.


----------



## J7SC

chispy said:


> (...)
> I ordered a Bykski water block instead and it arrived in 6 days from FormulaMod. I'm very impressed with the quality of the Red Devil RX 6900 XT block from Bykski: beautiful machining with no smudges (ahem, EK blocks lol) or visible tooling marks on the nickel-plated block, just 100% perfect without imperfections. It came with a very thick backplate, RGB, an RGB controller, a good-quality small screwdriver, extra screws and a lot of extra thermal pads. Absolutely loving it!
> (...)


...I was also really impressed with the Bykski finish (compared to my EK block per spoiler...caution: R-rated)...I got my Bykski for the Giga Gaming OC 3x8 from Bykski.US (fast delivery, even w/international border and weekend in-between)


Spoiler























 


MickeyPadge said:


> (...)
> On another note, I had a brain wave regarding my ASRock Phantom OC and Corsair block. I was never really happy with the temps, they always seemed high compared to others, so I did two things.(...) Anyone using the corsair block, flatten those pads! (and if crazy like me, use liquid metal)
> Cheers!


...I'm not surprised that the ASRock Phantom OC is a bit of a 'heater', even with a water block...I like to check out various bios from the TPU VGA bios database, and this is the MPT view of what was in the verified section for the ASRock Phantom OC:











...that also brings up another 'side-issue'. I've seen MPT guides on YT that refer to 'TDC Limit GFX (A)' as if it were a watt ('W') figure, when it is actually a current limit in amps...

Anyway, while on the topic of MPT, I take it the other adjustment options in it, such as 'Maximum Voltage GFX (mV)', should not be touched?

Finally, what with various water blocks for different gen and vendor cards in my builds, it can get quite frustrating to find the right 'thermal pad' thickness. While I have lots of different sizes of Thermalright Odyssey pads (12.8 W/mk) on hand, I just received this 'thermal putty' below...haven't tried it yet myself, but thermal putty (this one with a 10.0 W/mK rating) can be very useful for things like VRAM and VRM as they are compliant and adapt to the available space but stay put...this one came recommended by another 3090 user.
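On the TDC point above: since 'TDC Limit GFX (A)' is a current limit, the wattage it implies depends on the core voltage at that moment (P ≈ I × V). A minimal sketch; the 300 A / 1.15 V figures are illustrative, not taken from any specific BIOS:

```python
# Hedged sketch: TDC Limit GFX is a current limit in amperes, not watts.
# Instantaneous core power is roughly current times voltage, so the
# wattage a TDC limit implies moves with the GPU's momentary vcore.
# The 300 A / 1.15 V inputs below are illustrative assumptions.
def tdc_to_watts(tdc_amps: float, vcore_volts: float) -> float:
    """Approximate core power (W) implied by a TDC limit at a given vcore."""
    return tdc_amps * vcore_volts

print(tdc_to_watts(300, 1.15))  # -> 345.0, i.e. "300 A" is not "300 W"
```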


----------



## MickeyPadge

That BIOS looks like the updated "Ultimate" version of the card: 1.2 V core voltage and a higher stock power limit than my normal card.

My card is way, way cooler now; the Corsair pads really need flattening out for optimal contact. Whether that applies only to those who rigged the block to fit the ASRock card, or to the reference card too, I couldn't say.

But anyone who has this block, I would highly recommend remounting with flatter pads!

I have used similar putty in my laptop; it worked very well.


J7SC said:


> ...I was also really impressed with the Bykski finish (compared to my EK block per spoiler...caution: R-rated)...I got my Bykski for the Giga Gaming OC 3x8 from Bykski.US (fast delivery, even w/international border and weekend in-between)
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2514158
> 
> 
> View attachment 2514159
> 
> 
> 
> 
> 
> 
> ...I'm not surprised that the ASRock Phantom OC is a bit of a 'heater', even with a water block...I like to check out various bios from the TPU VGA bios database, and this is the MPT view of what was in the verified section for the ASRock Phantom OC:
> 
> View attachment 2514160
> 
> 
> 
> ...that also brings up another 'side-issue'. I've seen MPT guides on YT that refer to 'TDC Limit GFX (A)' as if it were a watt ('W') figure, when it is actually a current limit in amps...
> 
> Anyway, while on the topic of MPT, I take it the other adjustment options in such as 'Maximum Voltage GFX (mV)' should not be touched ?
> 
> Finally, what with various water blocks for different gen and vendor cards in my builds, it can get quite frustrating to find the right 'thermal pad' thickness. While I have lots of different sizes of Thermalright Odyssey pads (12.8 W/mk) on hand, I just received this 'thermal putty' below...haven't tried it yet myself, but thermal putty (this one with a 10.0 W/mK rating) can be very useful for things like VRAM and VRM as they are compliant and adapt to the available space but stay put...this one came recommended by another 3090 user.
> 
> View attachment 2514162


----------



## HyperC

weleh said:


> Anyone soldered an Elmor EVC controller to these cards?
> I'm thinking about buying one and doing the mod on my Toxic since these cards have a ****ton o vcore droop under load.


I've got time and will finally do mine Thursday or Friday; it's been a long wait. I think I might notch out the backplate on my waterblock. I'll take photos and 🙏 it all goes well.


----------



## Godhand007

HyperC said:


> I've got time and will finally do mine Thursday or Friday; it's been a long wait. I think I might notch out the backplate on my waterblock. I'll take photos and 🙏 it all goes well.


Risky business, but it might be worth it if it can run higher clocks 24/7 stable.


----------



## kratosatlante

J7SC said:


> ...I was also really impressed with the Bykski finish (compared to my EK block per spoiler...caution: R-rated)...I got my Bykski for the Giga Gaming OC 3x8 from Bykski.US (fast delivery, even w/international border and weekend in-between)
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2514158
> 
> 
> View attachment 2514159
> 
> 
> 
> 
> 
> 
> ...I'm not surprised that the ASRock Phantom OC is a bit of a 'heater', even with a water block...I like to check out various bios from the TPU VGA bios database, and this is the MPT view of what was in the verified section for the ASRock Phantom OC:
> 
> View attachment 2514160
> 
> 
> 
> ...that also brings up another 'side-issue'. I've seen MPT guides on YT that refer to 'TDC Limit GFX (A)' as if it were a watt ('W') figure, when it is actually a current limit in amps...
> 
> Anyway, while on the topic of MPT, I take it the other adjustment options in such as 'Maximum Voltage GFX (mV)' should not be touched ?
> 
> Finally, what with various water blocks for different gen and vendor cards in my builds, it can get quite frustrating to find the right 'thermal pad' thickness. While I have lots of different sizes of Thermalright Odyssey pads (12.8 W/mk) on hand, I just received this 'thermal putty' below...haven't tried it yet myself, but thermal putty (this one with a 10.0 W/mK rating) can be very useful for things like VRAM and VRM as they are compliant and adapt to the available space but stay put...this one came recommended by another 3090 user.
> 
> View attachment 2514162


Is this MorePowerTool screenshot from the ASRock Phantom OC? Does it accept 1.2 V?


----------



## J7SC

kratosatlante said:


> Is this MorePowerTool screenshot from the ASRock Phantom OC? Does it accept 1.2 V?


...for the ASRock RX 6900 XT OC Formula variant (1.2v), via input from Techpowerup's VGA bios > here


----------



## ptt1982

Quick update on my Alphacool 6900xt red devil temps. Mostly stays around 42-52C gpu and 55-65C hot spot, but I noticed a really strange jump to 87C max on hot spot when using HWinfo for the whole day when gaming, working and doing a mixture of other stuff with the computer. 

I wonder how accurate HWiNFO is. Average hotspot was around 58°C, though... hmm. HWMonitor shows a 65°C max hot spot. CPU max was 68°C as well, whereas HWMonitor shows 55°C. MSI Afterburner shows the GPU always staying under a 68°C hot spot when gaming. Just in case, I checked whether there had been a leak, but everything looks good.

Has anyone here had any strange peak temp bugs or jumps with waterblocked 6900xt using HWinfo64?
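One way to tell a polling glitch from a real thermal event is to run the logged samples through a small rolling median: a single-sample spike vanishes, while a sustained rise survives. A hedged sketch with made-up sample values (not an actual HWiNFO export):

```python
# Hedged sketch: separate one-sample spikes in a polled sensor log (like
# the 87 C hotspot blip described above) from sustained readings using a
# small rolling median. The sample values below are illustrative only.
from statistics import median

def despiked_max(samples: list[float], window: int = 5) -> float:
    """Max of the rolling-median-filtered series."""
    half = window // 2
    filtered = [
        median(samples[max(0, i - half):i + half + 1])
        for i in range(len(samples))
    ]
    return max(filtered)

log = [56, 57, 58, 87, 58, 57, 59, 60, 58]  # one 87 C outlier sample
print(max(log))           # 87 -> what a raw "max" column reports
print(despiked_max(log))  # 59 -> sustained peak after filtering
```

If the filtered peak is far below the raw max, the spike was most likely a read glitch rather than a real excursion.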


----------



## kratosatlante

J7SC said:


> ...for the ASRock RX 6900 XT OC Formula variant (1.2v), via input from Techpowerup's VGA bios > here


Does the ASRock RX 6900 XT OC Formula variant BIOS work well on the ASRock Phantom?


----------



## J7SC

kratosatlante said:


> Does the ASRock RX 6900 XT OC Formula variant BIOS work well on the ASRock Phantom?


...not sure as I don't have those cards, may depend on whether it has the 'H' variant (the 1.2v stock one)


----------



## ZealotKi11er

Has the TimeSpy issue been resolved with overclock? I am able to get 2850MHz with the older driver.


----------



## J7SC

ZealotKi11er said:


> Has the TimeSpy issue been resolved with overclock? I am able to get 2850MHz with the older driver.


What driver # are you thinking of ? Tx


----------



## ptt1982

ZealotKi11er said:


> Has the TimeSpy issue been resolved with overclock? I am able to get 2850MHz with the older driver.


I think it still persists with the latest driver. I get best results with the latest driver though, around 21K @2620 core / 2110mhz vram. With the previous one I had 2656mhz as the max, but graphics score was lower. Which old driver are you using?


----------



## CantingSoup

I’m thinking of replacing the pads on the ASRock Phantom Gaming 6900 XT Bykski block and backplate with Thermalright Odyssey 12.8 W/m-K pads when I receive it. Do you think 1.5 mm pads should suffice?


----------



## ZealotKi11er

ptt1982 said:


> I think it still persists with the latest driver. I get best results with the latest driver though, around 21K @2620 core / 2110mhz vram. With the previous one I had 2656mhz as the max, but graphics score was lower. Which old driver are you using?


I am using 21.3.1. I am getting 22.1K with 2850MHz Core. I will give the new driver a try and see if there is more performance even if I have to lower core clock.


----------



## HyperC

Okay, so I finally got it done. Still playing around with settings, nothing crazy. I mounted the EVC2 on the backplate; I will tidy up the wires later.


----------



## J7SC

HyperC said:


> Okay, so I finally got it done. Still playing around with settings, nothing crazy. I mounted the EVC2 on the backplate; I will tidy up the wires later.
> View attachment 2514478
> View attachment 2514479


How are you cooling your card, what with extra voltskis coming up ?


----------



## HyperC

J7SC said:


> How are you cooling your card, what with extra voltskis coming up ?


water


----------



## lawson67

Hi guys, looking for some insight. I have a PowerColor RX 6800 XT Red Devil; the card looked like it had a great cooler on it, so I bought it hoping to get some extra OC out of it, but I can tell you the heatsink is poor: at stock voltages and clocks it hits about 106°C on the hot spot in Time Spy. So I tried to OC it using MPT, since pulling the voltage slider down does nothing with a 2650 MHz overclock; without MPT it will still use the full voltage range up to 1150 mV. I set the maximum GFX voltage in MPT to 1120 mV and then set the slider to 975 mV. It will pass the Time Spy Extreme stress test all day long, but if I run the Time Spy benchmark maybe 6 times in a row, it might throw an "error occurred" once out of the 6 runs on the second graphics test. My score is around 17000 with the power limit set to 0 and temps around 97°C; raising the power limit at 2650 MHz pushed temps up to around 106°C, so I don't bother using any power limit.

But here is the thing that is confusing me. Just for fun, I set a min frequency of 2350 and a max of 2450 in Wattman, using the same 1120 mV MPT cap and leaving the slider at 975 mV, and wow: the hot spot dropped into the low 80s, my Time Spy score actually went higher than at 2650 MHz, and it pulls about 30-40 W less power. In the Tomb Raider benchmark I only lost one FPS at 4K on my OLED TV compared to 2650 MHz (I lost 6 FPS at 1440p on my monitor, which I don't care about, as I only game on the OLED TV). So I thought: I'll just use it at 2450 MHz, since it runs way cooler, the Time Spy bench got better, and I've only lost one FPS in Tomb Raider at 4K.

So I wondered if I even need MPT at all now. I noted the max vcore in HWiNFO64 after a Tomb Raider benchmark run with MPT's maximum GFX voltage at 1120 mV, which was 0.954 V. Then I deleted the SPPT in MPT, rebooted, ran the Tomb Raider bench again, and the max vcore in HWiNFO64 was indeed the same (0.954 V). So I thought, great, I don't need MPT and I'm happy to leave it here for 24/7 use. But here is the bit that REALLY confuses me!

Now that I have deleted the SPPT in MPT, so the card is back to its standard max vcore of 1150 mV with the slider still set at 975 mV in Wattman, I CANNOT run a successful Time Spy bench; it crashes within seconds on the first graphics test, which I have found in the past to be the easier of the two to pass!

Yet if I use MPT again and set the max vcore back to 1120 mV, with the slider STILL at 975 mV, it passes the Time Spy bench every time, and also passes every run of the Time Spy Extreme stress test at 99.8%, while with the stock maximum voltage (no MPT) it fails every time with the slider still at 975 mV. BTW, there are no crashes in any game; it seems only Time Spy needs the 1120 mV cap set in MPT to pass its runs.

So can anyone give me any insight into why this could be happening? In my mind, what difference does restoring the standard maximum voltage make when the card isn't pulling any more voltage in the Tomb Raider benchmark or under load, with or without MPT? Any insight would be helpful, cheers.

I am using driver 21.4.1, and my system is a brand-new build with a brand-new Corsair RM850x and a fresh install of Windows.


----------



## Godhand007

lawson67 said:


> Hi guys, looking for some insight. I have a PowerColor RX 6800 XT Red Devil; the card looked like it had a great cooler on it, so I bought it hoping to get some extra OC out of it, but I can tell you the heatsink is poor: at stock voltages and clocks it hits about 106°C on the hot spot in Time Spy. So I tried to OC it using MPT, since pulling the voltage slider down does nothing with a 2650 MHz overclock; without MPT it will still use the full voltage range up to 1150 mV. I set the maximum GFX voltage in MPT to 1120 mV and then set the slider to 975 mV. It will pass the Time Spy Extreme stress test all day long, but if I run the Time Spy benchmark maybe 6 times in a row, it might throw an "error occurred" once out of the 6 runs on the second graphics test. My score is around 17000 with the power limit set to 0 and temps around 97°C; raising the power limit at 2650 MHz pushed temps up to around 106°C, so I don't bother using any power limit.


Lots to unpack here. First, sharing my experience with 3DMark TimeSpy tests: I can pass the TimeSpy stress test with certain clocks but fail when the TimeSpy GT2 test is run in a loop. It might pass 30-40 loops with a given setting, then fail at the 6th loop with the same setting on the next run. In summary, the various 3DMark tests are good for checking clock stability, but they are *not precise and consistent*. A good OC will not fail the TimeSpy stress test, but a bad OC might or might not. Run TimeSpy GT2 in a loop to get the most accurate read on clock stability, as far as 3DMark/TimeSpy is concerned.

About your junction temps: the default fan curve is very conservative on all GPUs these days, biased towards being quiet rather than keeping the card cool. Adjust the fan curve and you should see a lot of improvement.


> But here is the thing that is confusing me. Just for fun, I set a minimum frequency of 2350 and a max of 2450 in Wattman, using the same 1120mV in MPT and leaving the slider at 975mV, and wow: the hotspot temp dropped into the low 80s, my TimeSpy benchmark score actually went higher than at 2650MHz, and it's pulling about 30-40 watts less power. In the Tomb Raider benchmark I only lost one FPS at 4K on my OLED TV compared to 2650MHz (I lost 6 FPS at 1440p on my monitor, which I don't care about since I only game on the TV). So I thought, wow, I'll just use it at 2450MHz, since it runs way cooler, the TimeSpy bench got better, and I've only lost one FPS in Tomb Raider at 4K.
> 
> So I wondered if I even need MPT at all now. I noted the max Vcore in HWiNFO64 after a Tomb Raider benchmark run with MPT's maximum GFX voltage at 1120mV, which was 0.954v. Then I deleted the SPPT in MPT, rebooted, ran the Tomb Raider bench again, and the max Vcore was indeed the same in HWiNFO64 (0.954v). So I thought, great, I don't need MPT and I'm happy to leave it there 24/7. But here is the bit that REALLY confuses me!
> 
> Now that I have deleted the SPPT in MPT, the card is set back to its max "standard" Vcore of 1150mV with the slider in Wattman still set at 975mV, and I CANNOT run a successful TimeSpy bench; it crashes within seconds on the first graphics test, which I've found in the past to be the easier of the two to pass!
> 
> Yet if I use MPT again and set the max Vcore back to 1120mV with the slider STILL at 975mV, it passes the TimeSpy bench every time and also every run of the TimeSpy Extreme stress test with a 99.8% pass, whereas with the stock voltage limit (no MPT) it fails every time with the slider still at 975mV. BTW, there are no crashes in any game; it seems only TimeSpy needs MPT set to 1120mV to pass its runs.
> 
> So can anyone give me any insight into why this could be happening? In my mind, what difference does having the max voltage set back to standard make when the card isn't pulling any more voltage in the Tomb Raider benchmark or under load, with or without MPT? Any insight would be helpful, cheers.
> 
> I am on driver 21.4.1, and my system is a brand-new build with a brand-new Corsair RM850x and a fresh install of Windows.


Higher clocks require more watts to work properly. I have seen temperature and power requirements increase drastically above 2600MHz in-game clocks; the closer you are to default frequencies, the cooler the card runs. Instability at high core clocks due to lack of power (you haven't mentioned whether you increased power limits) can lead to lower minimum FPS and lower scores.
Not sure what's happening with MPT and voltages, but I don't see any harm in keeping the MPT settings if they get you good temperatures.


----------



## lawson67

Godhand007 said:


> Lots to unpack here. First, sharing my experience with 3DMark TimeSpy tests: I can pass the TimeSpy stress test with certain clocks but fail when the TimeSpy GT2 test is run in a loop. It might pass 30-40 loops with a given setting, then fail at the 6th loop with the same setting on the next run. In summary, the various 3DMark tests are good for checking clock stability, but they are *not precise and consistent*. A good OC will not fail the TimeSpy stress test, but a bad OC might or might not. Run TimeSpy GT2 in a loop to get the most accurate read on clock stability, as far as 3DMark/TimeSpy is concerned.
> 
> About your junction temps: the default fan curve is very conservative on all GPUs these days, biased towards being quiet rather than keeping the card cool. Adjust the fan curve and you should see a lot of improvement.
> 
> 
> Higher clocks require more watts to work properly. I have seen temperature and power requirements increase drastically above 2600MHz in-game clocks; the closer you are to default frequencies, the cooler the card runs. Instability at high core clocks due to lack of power (you haven't mentioned whether you increased power limits) can lead to lower minimum FPS and lower scores.
> Not sure what's happening with MPT and voltages, but I don't see any harm in keeping the MPT settings if they get you good temperatures.


Thanks for the reply. I don't touch the power limit, as it only sends temps up for me, and I get my best-ever TimeSpy Extreme results at 2450MHz with a 99.8% pass. With my 2650MHz OC it was never that good; it failed once because the result was under 97%, but it never crashed, and the next run straight after was fine again. As for fans, I have them set at about 70%, which isn't too loud for me at all, and I'm happy to leave them there given the much lower temps and the small loss of frames in games at 4K compared to my 2650MHz OC. Setting MPT back to stock voltage and the resulting instability is really confusing, though, and that's what made me post to see if anyone could shed some insight.

Furthermore, I only found out about the SAM feature yesterday; I turned it on in the BIOS and got 3 extra frames in Tomb Raider at 4K and 9 extra FPS at 1440p, so that was a nice surprise.

As for 3DMark, how do you loop a run of the second TimeSpy graphics test? That's what I think you're referring to when you say loop TimeSpy GT2. I don't see an option for that in my 3DMark Advanced edition, or am I looking in the wrong place?


----------



## jonRock1992

lawson67 said:


> Hi guys, looking for some insight. I have a PowerColor RX 6800 XT Red Devil. The card looked like it had a great cooler, so I bought it hoping for extra OC headroom, but I can tell you the heatsink is crap: at stock voltages and clocks it hits about 106C on the hotspot in TimeSpy. So I tried to OC it using MPT, since pulling the voltage slider down does nothing with a 2650MHz overclock; it still uses the full voltage range up to 1150mV without MPT. I set maximum GFX voltage in MPT to 1120mV and then set the slider to 975mV. It passes the TimeSpy Extreme stress test all day long, but on the TimeSpy benchmark, if I run it maybe 6 times in a row, it might throw an error once out of the 6 runs on the second graphics test. My score is around 17000 with the power limit at 0 and temps around 97C; raising the power limit at 2650MHz pushed temps up to around 106C, so I don't bother with any power limit.
> 
> But here is the thing that is confusing me. Just for fun, I set a minimum frequency of 2350 and a max of 2450 in Wattman, using the same 1120mV in MPT and leaving the slider at 975mV, and wow: the hotspot temp dropped into the low 80s, my TimeSpy benchmark score actually went higher than at 2650MHz, and it's pulling about 30-40 watts less power. In the Tomb Raider benchmark I only lost one FPS at 4K on my OLED TV compared to 2650MHz (I lost 6 FPS at 1440p on my monitor, which I don't care about since I only game on the TV). So I thought, wow, I'll just use it at 2450MHz, since it runs way cooler, the TimeSpy bench got better, and I've only lost one FPS in Tomb Raider at 4K.
> 
> So I wondered if I even need MPT at all now. I noted the max Vcore in HWiNFO64 after a Tomb Raider benchmark run with MPT's maximum GFX voltage at 1120mV, which was 0.954v. Then I deleted the SPPT in MPT, rebooted, ran the Tomb Raider bench again, and the max Vcore was indeed the same in HWiNFO64 (0.954v). So I thought, great, I don't need MPT and I'm happy to leave it there 24/7. But here is the bit that REALLY confuses me!
> 
> Now that I have deleted the SPPT in MPT, the card is set back to its max "standard" Vcore of 1150mV with the slider in Wattman still set at 975mV, and I CANNOT run a successful TimeSpy bench; it crashes within seconds on the first graphics test, which I've found in the past to be the easier of the two to pass!
> 
> Yet if I use MPT again and set the max Vcore back to 1120mV with the slider STILL at 975mV, it passes the TimeSpy bench every time and also every run of the TimeSpy Extreme stress test with a 99.8% pass, whereas with the stock voltage limit (no MPT) it fails every time with the slider still at 975mV. BTW, there are no crashes in any game; it seems only TimeSpy needs MPT set to 1120mV to pass its runs.
> 
> So can anyone give me any insight into why this could be happening? In my mind, what difference does having the max voltage set back to standard make when the card isn't pulling any more voltage in the Tomb Raider benchmark or under load, with or without MPT? Any insight would be helpful, cheers.
> 
> I am on driver 21.4.1, and my system is a brand-new build with a brand-new Corsair RM850x and a fresh install of Windows.


The stock Red Devil heatsinks are definitely crap. I bought a waterblock for my 6900 XT Red Devil Ultimate. They look like they should be good in theory, but maybe there's some sort of mounting problem with them.


----------



## Godhand007

lawson67 said:


> Thanks for the reply. I don't touch the power limit, as it only sends temps up for me, and I get my best-ever TimeSpy Extreme results at 2450MHz with a 99.8% pass. With my 2650MHz OC it was never that good; it failed once because the result was under 97%, but it never crashed, and the next run straight after was fine again. As for fans, I have them set at about 70%, which isn't too loud for me at all, and I'm happy to leave them there given the much lower temps and the small loss of frames in games at 4K compared to my 2650MHz OC. Setting MPT back to stock voltage and the resulting instability is really confusing, though, and that's what made me post to see if anyone could shed some insight.


The reason you failed the TimeSpy stress test at 2650MHz: the required power was not being supplied, which leads to clock instability, which leads to unstable FPS, which leads to TimeSpy marking your OC as a fail.


> Furthermore, I only found out about the SAM feature yesterday; I turned it on in the BIOS and got 3 extra frames in Tomb Raider at 4K and 9 extra FPS at 1440p, so that was a nice surprise.
> 
> As for 3DMark, how do you loop a run of the second TimeSpy graphics test? That's what I think you're referring to when you say loop TimeSpy GT2. I don't see an option for that in my 3DMark Advanced edition, or am I looking in the wrong place?


There you go.


----------



## scajjr2

I came into some cash the other day and decided to order an Asus ROG Strix 6900XT LC. It should be here in a couple of days. Looking forward to seeing how it compares to the EVGA 2080 Ti Hybrid it'll be replacing.

sam


----------



## LtMatt

scajjr2 said:


> I came into some cash the other day and decided to order an Asus ROG Strix 6900XT LC. It should be here in a couple of days. Looking forward to seeing how it compares to the EVGA 2080 Ti Hybrid it'll be replacing.
> 
> sam


Did you order the XTX or XTXH version? The latter should have an advertised boost speed of 2525MHz.


----------



## weleh

HyperC said:


> Okay, so finally got it done. Still playing around with settings, nothing crazy; mounted the EVC2 on the backplate. I will fix and clean up the wires later.


Any testing done yet?
Also, is this an XTX or XTXH card?


----------



## airisom2

So, I was having some issues with random shutdowns, probably OCP tripping because of the card. Setting the minimum clock speed 100MHz lower than the max has so far fixed it for me. Clock speeds and voltages aren't jumping all over the place, and I haven't had a shutdown since. Might be worth a try for anyone else having power-spike issues.
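For anyone wanting to try the same workaround, the rule of thumb is simple enough to write down as a helper (the 500MHz floor below is my own arbitrary safety margin, not an AMD-documented limit):

```python
def suggest_min_clock(max_clock_mhz, gap_mhz=100, floor_mhz=500):
    """Pin the minimum clock ~100 MHz below the maximum so the card
    can't swing down to idle clocks mid-load.
    `floor_mhz` is an arbitrary safety margin, not an AMD-documented limit.
    """
    return max(max_clock_mhz - gap_mhz, floor_mhz)

# For a 2500 MHz max clock this suggests a 2400 MHz minimum.
```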


----------



## jonRock1992

This just happened to me today. I was like, "great...now I need a better PSU." Trying to find ways to avoid replacing it.


----------



## J7SC

...I haven't experienced any OCP trips yet, but once I added some 'MPT spice' to the PL, I took out the Corsair 850W and replaced it with an older be quiet! Dark Power Pro 1200W (the CPU is a 16-core AMD) I had lying around.


----------



## airisom2

AMD's power management is pretty aggressive, to say the least. There's no reason to drop a card from 2500MHz+ to double digits and back up within milliseconds, but that's the problem: by the time the sensors pick up on it, it's drawing enough current to trigger OCP. Forcing the minimum clock higher stabilizes the clock-speed and voltage swings. Well, that's my theory.

I was able to reproduce OCP pretty reliably, and after upping the minimum clock speed I haven't had it happen since (knock on wood).


----------



## weleh

I'm on a 750W Seasonic Focus GX Gold and haven't had any OCP issues with my 6800XT, 6900XT or 3080, all of them running high-PL BIOSes.


----------



## ptt1982

Did some testing again during slightly cooler nights here in Tokyo and got a new Port Royal record at 2730MHz with the normal Red Devil under water. I could run it at 2735MHz, but that produced a slightly lower score due to instability. TimeSpy still crashes at anything above 2625MHz with the latest driver.


----------



## weleh

At the moment I have the highest PR score on HWBOT.


[HWBOT: Weleleh's 3DMark Port Royal score of 12456 marks with a Radeon RX 6900 XT, ranked #275 worldwide and #5 in the hardware class.]


hwbot.org


----------



## tk31

Just a quick question to anyone who knows: has anyone been able to not run Radeon Software/Adrenalin, or disable its OC somehow, so that MSI AB works? I've been bashing my head against a brick wall trying to get MSI AB to function properly with my card. Trying to undervolt past the minimum 825mV it allows.


----------



## weleh

You can probably do it if you install the stripped-down version of the driver and use AB to overclock.


----------



## tk31

weleh said:


> You can probably do it if you install the stripped-down version of the driver and use AB to overclock.


Thanks for the suggestion! I've seen reports of it running alongside Adrenalin, but I've had no luck finding any driver-only installation, so I guess I have to live with it.


----------



## weleh

tk31 said:


> Thanks for the suggestion! I've seen reports of it running alongside Adrenalin, but I've had no luck finding any driver-only installation, so I guess I have to live with it.


The new AMD drivers give you the option to install the full, lite or stripped-down version now.


----------



## Godhand007

airisom2 said:


> AMD's power management is pretty aggressive, to say the least. There's no reason to drop a card from 2500MHz+ to double digits and back up within milliseconds, but that's the problem: by the time the sensors pick up on it, it's drawing enough current to trigger OCP. Forcing the minimum clock higher stabilizes the clock-speed and voltage swings. Well, that's my theory.
> 
> I was able to reproduce OCP pretty reliably, and after upping the minimum clock speed I haven't had it happen since (knock on wood).


Interesting! One would assume that good-quality PSUs would have this situation covered already.


----------



## Godhand007

tk31 said:


> Just a quick question to anyone who knows: has anyone been able to not run Radeon Software/Adrenalin, or disable its OC somehow, so that MSI AB works? I've been bashing my head against a brick wall trying to get MSI AB to function properly with my card. Trying to undervolt past the minimum 825mV it allows.


One thing to remember here: you can't set minimum clocks for the RX 6900 XT in MSI AB (at least in the version available until a few days ago).


----------



## airisom2

Godhand007 said:


> Interesting! One would assume that good-quality PSUs would have this situation covered already.


Yeah, I'd think so too, but I guess it isn't as clear-cut as that. I believe the Seasonic Prime series is one of the better ones out there, but here I was with a titanium 1kW unit OCP'ing on me. I have a P2 1200 lying around somewhere, but it should work with a 1kW unit, hence the experimenting.

Also, this could be on a card-to-card basis. Different clock/voltage curves, temperatures, VRM efficiency and transient response, input/output filtering, capacitance, switching frequency, power limits, vBIOS differences, XTX vs XTXH, driver version, etc. Lots of variables that could create a perfect storm on some systems.


----------



## Godhand007

airisom2 said:


> Yeah, I'd think so too, but I guess it isn't as clear-cut as that. I believe the Seasonic Prime series is one of the better ones out there, but here I was with a titanium 1kW unit OCP'ing on me. I have a P2 1200 lying around somewhere, but it should work with a 1kW unit, hence the experimenting.
> 
> Also, this could be on a card-to-card basis. Different clock/voltage curves, temperatures, VRM efficiency and transient response, input/output filtering, capacitance, switching frequency, power limits, vBIOS differences, XTX vs XTXH, driver version, etc. Lots of variables that could create a perfect storm on some systems.


Nah, it's supposed to be clear-cut for high-end branded PSUs. Older PSUs, even higher-wattage ones, might not work properly with newer cards. I've seen many people on various forums say they were running 5+ year-old PSUs with the latest GPUs and CPUs along with various levels of OC; that might be why people are facing issues.

On your second point, that's a more generic PC thing, where any specific component combination can cause system issues. We can't say it's because of a particular GPU variant.


----------



## ptt1982

Godhand007 said:


> One thing to remember here is that you can't set minimum clocks in MSI AB for RX 6900 XT (at least the version available till a few days ago).


Right! I wonder if that's why I get these drops to 23MHz when I have AB and RivaTuner showing stats during gameplay, despite having the minimum clock set high in Adrenalin...?

I had some OCP problems with the latest driver, but with the minimum clock set high everything works better. No major symptoms, but I can hear the OCP switch tripping. I think the drivers are still a WIP, especially given that OC results vary wildly per AIB card.

Running an 850W Seasonic Focus+ Platinum.


----------



## lestatdk

tk31 said:


> Just a quick question to anyone who knows: has anyone been able to not run Radeon Software/Adrenalin, or disable its OC somehow, so that MSI AB works? I've been bashing my head against a brick wall trying to get MSI AB to function properly with my card. Trying to undervolt past the minimum 825mV it allows.


When you install the driver, select either the minimum install or the driver-only install. But remember that some features are not available in AB, like the fast timing setting for memory; there's also no minimum limit on core frequency. I have run with just AB like this and it works fine, but I never figured out whether I could set the memory timing to fast in any way without the full Wattman install.


----------



## ZealotKi11er

ptt1982 said:


> Right! I wonder if that's why I get these drops to 23MHz when I have AB and RivaTuner showing stats during gameplay, despite having the minimum clock set high in Adrenalin...?
> 
> I had some OCP problems with the latest driver, but with the minimum clock set high everything works better. No major symptoms, but I can hear the OCP switch tripping. I think the drivers are still a WIP, especially given that OC results vary wildly per AIB card.
> 
> Running an 850W Seasonic Focus+ Platinum.



Is FPS affected by these drops?


----------



## Godhand007

ptt1982 said:


> Right! I wonder if that's why I get these drops to 23MHz when I have AB and RivaTuner showing stats during gameplay, despite having the minimum clock set high in Adrenalin...?
> 
> I had some OCP problems with the latest driver, but with the minimum clock set high everything works better. No major symptoms, but I can hear the OCP switch tripping. I think the drivers are still a WIP, especially given that OC results vary wildly per AIB card.
> 
> Running an 850W Seasonic Focus+ Platinum.


MPT + Wattman/Radeon Software is the way to go for OCing RDNA2 cards for now.


----------



## ptt1982

ZealotKi11er said:


> Is FPS affected by these drops?


Actually yes. For example, in Days Gone the stutters happen when the clock drops to 23MHz or something like that. Using MPT 350/370, core 2670, mem 2110; temps are very well in check. I think the driver is all over the place. FSR should release today, so perhaps the new driver with it will give better results.


----------



## Godhand007

ptt1982 said:


> Actually yes. For example, in Days Gone the stutters happen when the clock drops to 23MHz or something like that. Using MPT 350/370, core 2670, mem 2110; temps are very well in check. I think the driver is all over the place. FSR should release today, so perhaps the new driver with it will give better results.


Are those clocks stable in stress tests (3DMark GT2, etc.)? I haven't seen such a gap before between clocks that are stable in stress tests and clocks that work in games. For me, clocks are stable at 2700MHz+ in Cyberpunk 2077 (and other games) for hours but error out at 2600MHz in a TimeSpy GT2 loop.


----------



## EastCoast




----------



## blackzaru

EastCoast said:


>


Thank you for this insightful comment.


----------



## EastCoast

blackzaru said:


> Thank you for this insightful comment.


It was my only reaction after reading the first few words of the post above.


----------



## ptt1982

Godhand007 said:


> Are those clocks stable in stress tests (3DMark GT2, etc.)? I haven't seen such a gap before between clocks that are stable in stress tests and clocks that work in games. For me, clocks are stable at 2700MHz+ in Cyberpunk 2077 (and other games) for hours but error out at 2600MHz in a TimeSpy GT2 loop.


I'd have to test a couple more games, but it seems mostly tied to Days Gone, actually. Some games do drop the clocks really seriously even at 2620MHz, which is 100% stable in a 30-minute GT2 loop, and with Days Gone the stutters occur at the same time as the drops. Now I've installed the new drivers that were released a couple of minutes ago and will have to test again!


----------



## ptt1982

New driver 21.6.1 findings on the Red Devil XTX under water:

1) I was able to increase my TS score by 830 points and put +40MHz on the core as well, now at 2660MHz. My previous best was 21K; now I'm past 21.8K. That's a solid gain of nearly 4%.
2) Similarly, Port Royal now clocks +15MHz higher (2745MHz) and achieves an 11418 score.

Impressive stuff from AMD.


----------



## Godhand007

ptt1982 said:


> I'd have to test a couple more games, but it seems mostly tied to Days Gone, actually. Some games do drop the clocks really seriously even at 2620MHz, which is 100% stable in a 30-minute GT2 loop, and with Days Gone the stutters occur at the same time as the drops. Now I've installed the new drivers that were released a couple of minutes ago and will have to test again!


I was mostly interested in stress-test-stable clocks, as some games can run 2700MHz+ without issues. I just tested 2650MHz max clocks (~2600MHz in-game) with the latest driver and passed 20 loops of GT2 (so more than 30 minutes, I think), but I'm still not sure it's stable, as the next run with the same settings could fail due to the inconsistent nature of 3DMark tests.


----------



## Godhand007

One piece of good news with the new drivers: memory clocks now stay at lower frequencies even with a 144Hz refresh rate.


----------



## ptt1982

Godhand007 said:


> I was mostly interested in stress-test-stable clocks, as some games can run 2700MHz+ without issues. I just tested 2650MHz max clocks (~2600MHz in-game) with the latest driver and passed 20 loops of GT2 (so more than 30 minutes, I think), but I'm still not sure it's stable, as the next run with the same settings could fail due to the inconsistent nature of 3DMark tests.


I did see some behavior like that in stress tests, now that I think of it. With TS I had to increase the minimum clock drastically to make it stable. This was unique to the 5.2 driver; earlier drivers didn't exhibit that behavior.


----------



## Godhand007

ptt1982 said:


> I did see some behavior like that in stress tests, now that I think of it. With TS I had to increase the minimum clock drastically to make it stable. This was unique to the 5.2 driver; earlier drivers didn't exhibit that behavior.


Well, crap. I just saw a driver reset at these clocks with the same GT2 settings that ran fine previously. Going back to 2600MHz (~2550MHz in-game). I hate that there is no software to stress-test RDNA2 GPUs consistently.


----------



## ZealotKi11er

Godhand007 said:


> Well, crap. I just saw a driver reset at these clocks with the same GT2 settings that ran fine previously. Going back to 2600MHz (~2550MHz in-game). I hate that there is no software to stress-test RDNA2 GPUs consistently.


If that were the case, it would be an easy job for AMD/Nvidia to have one piece of software do the clock checking.


----------



## weleh

Anyone checked benchmarks with the new driver?

People on reddit reporting higher scores at the same settings.


----------



## lestatdk

weleh said:


> Anyone checked benchmarks with the new driver?
> 
> People on reddit reporting higher scores at the same settings.


I just got a new record TimeSpy score without doing my MPT tweaks. Might not do MPT at all; this is awesome.


----------



## Godhand007

ZealotKi11er said:


> If that were the case, it would be an easy job for AMD/Nvidia to have one piece of software do the clock checking.


I'm not talking about Nvidia or AMD GPUs of the prior generation. It's also understood (if one hasn't been living under a rock) that whatever AMD/Nvidia/Intel and other companies use for internal testing is not something that would be available to us. Nor are those companies going to rely completely on external software for their testing.


----------



## thomasck

Did a full run again with stock 21.6.1:
3900X stock
6900 XT stock, reference model
Taichi X370, 2x8GB @ 3733
Graphics scores were lower in every Fire Strike bench and higher in both Time Spy benches.


----------



## jonRock1992

J7SC said:


> ...I haven't experienced any OCP's yet, but once I added some 'MPT spice' to the PL, I took out the Corsair 850W and replaced it with an older BeQuiet DarkPro 1200W (CPU is AMD 16c) I had laying around.


So far it's only happened to me with Resident Evil Village and Metro Exodus Enhanced. Those are the most graphically demanding games.


----------



## airisom2

Very interesting video from buildzoid


----------



## LtMatt

weleh said:


> Anyone checked benchmarks with the new driver?
> 
> People on reddit reporting higher scores at the same settings.


Bizarrely, I got around 1000 lower graphics points vs 21.4.1 on Fire Strike standard and Extreme, and lost around 150MHz on my overclocks too.

Seems an outlier compared to other feedback. I haven't been able to pinpoint the cause; I might have to try a clean install of Windows, though I only did that a few months ago.


airisom2 said:


> Very interesting video from buildzoid


Can you summarise the video?


----------



## lestatdk

Damn, I couldn't help it; I just wanted to see how much better it is with the new driver + MPT.

This is my first total score above 19K and my first graphics score above 21K. Considering this is on air, I'd say it's pretty good. The graphics score is up around 600 compared to the previous driver. Way to go, AMD:


----------



## airisom2

LtMatt said:


> Can you summarise the video?


At stock clocks, the 6900XT's voltage will drop from peak voltage down to, say, 0.85v at regular intervals; the lower the minimum clock, the lower the voltage droops. The time spent at that voltage is in the microsecond range before it jumps back up to the 3D voltage. So basically, the card is at idle clocks/voltages every few microseconds, even under load, because of the minimum clock setting. Buildzoid didn't mention clock speeds, but if the voltage is drooping that far, and the clocks are tied to the voltage, then your clocks are dropping as well.

Moving the minimum clock slider higher lessens the problem, and when you set the minimum clock 100MHz below the maximum, the issue is effectively gone and the card stays at its 3D voltage.

Moving the frequency slider moves the voltage around, and the voltage slider acts like an offset. Even overclocked to 2734MHz, the oscilloscope only shows 1.18v, and at lower clocks it's only pulling 1.07v. So AMD is reporting the wrong voltage in their software; the real value is more in line with what HWiNFO's sensors report. For XTXH owners, you're not going to hit 1.2v unless you have really good cooling that lets you push the clock/voltage curve up to 1.2v, or you're running an Elmor EVC.

Also, MPT's new beta lets you adjust the static voltage curve offset on these cards now.
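To see why those microsecond dips matter for performance, here's a toy duty-cycle model (illustrative numbers only, not real telemetry):

```python
def average_clock(load_clock_mhz, idle_clock_mhz, idle_fraction):
    """Toy model: if the card spends `idle_fraction` of its time dipped
    to idle clocks, the effective clock is a weighted mix of the two
    states. Raising the minimum clock drives idle_fraction toward zero.
    """
    return load_clock_mhz * (1 - idle_fraction) + idle_clock_mhz * idle_fraction

# Dipping to 500 MHz for just 5% of the time costs 100 MHz of
# effective clock: average_clock(2500, 500, 0.05) == 2400.0
```

It's a crude picture, but it matches the observation that pinning the minimum clock near the maximum recovers the "missing" performance.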


----------



## Godhand007

airisom2 said:


> At stock clocks, the 6900XT's voltage will drop from peak voltage down to, say, 0.85v at regular intervals; the lower the minimum clock, the lower the voltage droops. The time spent at that voltage is in the microsecond range before it jumps back up to the 3D voltage. So basically, the card is at idle clocks/voltages every few microseconds, even under load, because of the minimum clock setting. Buildzoid didn't mention clock speeds, but if the voltage is drooping that far, and the clocks are tied to the voltage, then your clocks are dropping as well.
> 
> Moving the minimum clock slider higher lessens the problem, and when you set the minimum clock 100MHz below the maximum, the issue is effectively gone and the card stays at its 3D voltage.
> 
> Moving the frequency slider moves the voltage around, and the voltage slider acts like an offset. Even overclocked to 2734MHz, the oscilloscope only shows 1.18v, and at lower clocks it's only pulling 1.07v. So AMD is reporting the wrong voltage in their software; the real value is more in line with what HWiNFO's sensors report. For XTXH owners, you're not going to hit 1.2v unless you have really good cooling that lets you push the clock/voltage curve up to 1.2v, or you're running an Elmor EVC.
> 
> Also, MPT's new beta lets you adjust the static voltage curve offset on these cards now.


Thanks for the summary. So most overclockers weren't hitting this issue (due to higher min clocks). A few questions, if you or others can answer:

1. This must have an impact on performance, right?
2. Do HWiNFO sensors report different voltages?
3. _MPT's static voltage curve offset_: has anyone tried playing around with this and gained a higher OC?


----------



## CantingSoup

airisom2 said:


> At stock clocks, the 6900XT's voltage will drop from peak voltage down to say 0.85v at regular intervals. The lower the minimum clock is, the lower the voltage will droop. The time it is at that voltage is in the microsecond range and then it jumps back up to 3d voltage. So basically, the card is at idle clocks/voltages every few microseconds even during load due to the minimum clocks. Buildzoid didn't mention clockspeeds, but if the voltage is drooping down that far, and if the clocks are tied to the voltage, then your clocks are dropping down as well.
> 
> Moving the minimum clock slider higher lessens that problem, and when you put the minimum clocks to be 100mhz less than the maximum, then the issue is effectively gone and the card remains at it's 3d voltage.
> 
> Moving the frequency slider around moves the voltage around, and the voltage slider acts like an offset slider. Even when overclocked to 2734MHz, the oscilloscope only shows 1.18v, and at lower clocks it's only pulling 1.07v. So AMD is reporting the wrong voltage in their software, and the real value is more in line with what HWInfo's sensors report. So for XTXH owners, you're not going to hit 1.2v unless you have some really good cooling that lets you push the clock/voltage curve up to 1.2v, or you're running an elmor EVC.
> 
> Also, MPT's new beta allows you to adjust the static voltage curve offset on these cards now.


Just so I’m understanding this, setting the minimum clock 100MHz below the max should eliminate the voltage drops?


----------



## blackzaru

So, installed drivers 21.6.1, and tested it on my 6900XT (Reference Card with EKWB waterblock).

Results went like this:

+2% on Port Royal
+4% on TimeSpy Extreme
+6% on Timespy
-1.5% to -2% on Firestrike, Firestrike Extreme, and Firestrike Ultra
More detailed results of my Port Royal, TimeSpy Extreme, and TimeSpy runs:

*PORT ROYAL:*
(screenshot)

*TIMESPY EXTREME:*
(screenshot)

*TIMESPY:*
(screenshot)

I hope it proves useful to some of you. Have a nice day!


----------



## airisom2

Godhand007 said:


> Thanks for the summary. So most overclockers were not facing this issue (due to higher min clocks). A few questions, if you or others can answer:
> 
> 1. This must have an impact on performance, right?
> 2. HWInfo sensors report different voltages?
> 3. _MPT's static voltage curve offset_: has anyone tried playing around with this and gained a higher OC?


1. I think so. I did a Superposition run and got an extra 100 points or so, but I'll have to retest. 
2. For me, HWInfo reports lower voltages than GPU-Z, MSI AB, and AMD's logging. The others will show 1.2v while HWInfo reports 1.18v, I believe. I'll have to verify when I get back.
3. I'd play around with it if I knew what to do there. There may be a way to ramp up the voltage curve earlier so it sits at 1.2v, but I'm not sure how. I wish it was a graph view or something. 

But fortunately, these cards can take an elmor evc, and soldering the wires is pretty easy. I'll get around to it one day haha. Not much motivation to do it since there is no block for my card, so I wouldn't gain much due to thermals.


----------



## ZealotKi11er

@*blackzaru*

What are your clocks for 6900 XT? I tried 2700-2150 and only get 21K in TS.


----------



## coelacanth

Running stock other than more aggressive fan curve. Time Spy graphics score up 4.2% using 21.6.1 vs the last driver.


----------



## Godhand007

So I noticed a weird thing with the latest drivers: switching SAM on and off through the Radeon software results in max clocks decreasing by about ~40MHz. Has anyone else noticed this?

Edit: Max clocks are fine after another run of the same game. Weird.


----------



## Godhand007

airisom2 said:


> 1. I think so. I did a Superposition run, and I got an extra 100 points or so, but I'll have to retest.
> 2. For me, HWInfo reports lower voltages than GPU-Z, MSIAB, and AMD's logging. Other will show 1.2v while HWInfo reports 1.18v I believe. I'll have to verify when I get back.
> 3. I'd play around with it if I knew what to do there. There is a possibility of ramping up the voltage curve earlier so it'll be at 1.2v, but I'm not sure what to do there. I wish it was a graph view or something.
> 
> But fortunately, these cards can take an elmor evc, and soldering the wires is pretty easy. I'll get around to it one day haha. Not much motivation to do it since there is no block for my card, so I wouldn't gain much due to thermals.


2. Same here. But how is this possible? Is HWInfo using an offset? Because the information from the hardware sensors should be the same.
3. Yeah, let's wait for someone to provide more info on that. I believe there's another 70-80MHz I can get out of my card with further fine-tuning.


----------



## ZealotKi11er

SAM does seem to give more perf with TimeSpy.


----------



## blackzaru

ZealotKi11er said:


> @*blackzaru*
> 
> What are your clocks for 6900 XT? I tried 2700-2150 and only get 21K in TS.


For TimeSpy:

Min: 2575MHz
Max: 2675MHz
For Port Royal and TimeSpy Extreme:

Min: 2625MHz
Max: 2725MHz


----------



## ZealotKi11er

blackzaru said:


> For TimeSpy:
> 
> Min: 2575MHz
> Max: 2675MHz
> For Port Royal and TimeSpy Extreme:
> 
> Min: 2625MHz
> Max: 2725MHz


Was able to get 22K, but with Min 2700MHz, Max 2800MHz. Maybe it's the platform difference; I am using a 10900K.


----------



## blackzaru

ZealotKi11er said:


> Was able to get 22K but with Min 2700MHz, Max 2800MHz. Maybe its the platform difference. I am using 10900K.


A stable card at those frequencies (especially if it's a reference) is rare. Well, is anything limiting you? Memory OC can affect FPS. I'm running 3800MHz 14-14-14-14-26-42, which is on the "very tight" side of mem OC.


----------



## ZealotKi11er

blackzaru said:


> A stable card in those frequencies (especially if it's a reference) is rare. And well, anything limiting you? Memory OC can affect FPS. I'm running 3800MHz 14-14-14-14-26-42, which is on the "very tight" side of mem OC.


Running 4000 CL19. 

Also lol at this: I scored 26 766 in Time Spy


----------



## thomasck

blackzaru said:


> A stable card in those frequencies (especially if it's a reference) is rare. And well, anything limiting you? Memory OC can affect FPS. I'm running 3800MHz 14-14-14-14-26-42, which is on the "very tight" side of mem OC.


I wonder if "too tight" or certain timings might affect games in the form of stuttering. I've upgraded from a Radeon VII to the 6900 XT and I'm getting stutters every 5 seconds or so in the games I play most (Warzone and Enlisted) until I reboot the PC. The rest of the system and the timings are the same; just the GPU was changed, and it's powered by two separate cables from an RM850x. FPS, GPU power draw, and CPU usage all drop at the same time. The RAM is stable, tested with TM5, Karhu and HCI.
Timings attached for reference.









Sent from my Pixel 2 XL using Tapatalk


----------



## J7SC

thomasck said:


> I wonder if "too tight" or certain timings might affect games in the form of stuttering. I've upgraded from a Radeon VII to the 6900 XT and I am getting some stutters here and there in the games I play most, warzone and enlisted every 5 seconds or so until I boot the pc again. The rest of system and timings are the same, just GPU was changed and is powered by two separate cables from a rm850x. FPS drops, GPU power draw drops, CPU usage drops all at the same time. Ram is stable, tested with TM5, Karhu and HCI.
> Timings attached for reference.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sent from my Pixel 2 XL using Tapatalk


...loosening tFAW from 16 to 20 might help, ditto for tRAS to 31. My settings below work well for both the 6900XT and the 3090 (two machines with AM4/16c)


----------



## blackzaru

ZealotKi11er said:


> Running 4000 CL19.
> 
> Also lol at this: I scored 26 766 in Time Spy


Wth? The timer was off? Or what?


----------



## ZealotKi11er

blackzaru said:


> Wth? The timer was off? Or what?


No idea. Is there a way to report?

Almost seems like a dual gpu score.


----------



## blackzaru

ZealotKi11er said:


> No idea. Is there a way to report?
> 
> Almost seems like a dual gpu score.


Best guess I have is that your HPET was off.

Edit: it's a dual gpu score.


----------



## Enzarch

Godhand007 said:


> 1. This must have an impact on performance, right?
> 2. HWInfo sensors report different voltages?
> 3. _MPT's static voltage curve offset_: has anyone tried playing around with this and gained a higher OC?





airisom2 said:


> 1. I think so. I did a Superposition run, and I got an extra 100 points or so, but I'll have to retest.
> 2. For me, HWInfo reports lower voltages than GPU-Z, MSIAB, and AMD's logging. Other will show 1.2v while HWInfo reports 1.18v I believe. I'll have to verify when I get back.
> 3. I'd play around with it if I knew what to do there. There is a possibility of ramping up the voltage curve earlier so it'll be at 1.2v, but I'm not sure what to do there. I wish it was a graph view or something.
> But fortunately, these cards can take an elmor evc, and soldering the wires is pretty easy. I'll get around to it one day haha. Not much motivation to do it since there is no block for my card, so I wouldn't gain much due to thermals.


Yes, I can confirm futzing with the SVO settings did improve my clocks a bit; I simply set "a" to zero. I don't believe a positive offset will work, but I have not validated this yet.
And yes, HWInfo seems to report the actual voltage to the core, whereas, say, AIDA64 seems to report the VID (BTW, do not use AIDA, as it currently plays havoc with AMD drivers)

Here is an easy way to visualize what the SVO points do, probably not worth messing with 'b' or 'c'
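
If the three SVO points really do act as coefficients of a simple quadratic offset applied across the V/F curve (that is purely my assumption here, based on the a/b/c naming; nothing official), then zeroing 'a' removes the part of the offset that grows fastest toward max clock. A toy model:

```python
def svo_offset(freq_norm, a, b, c):
    """Hypothetical model of the SVO points: a quadratic voltage offset (mV)
    evaluated over the normalized frequency range 0..1 (min..max clock)."""
    return a * freq_norm**2 + b * freq_norm + c

# With a negative 'a', the offset pulls voltage down hardest at max clock;
# zeroing 'a' leaves more voltage available up top.
print(svo_offset(1.0, a=-25, b=0, c=0))  # -25 mV at max clock
print(svo_offset(1.0, a=0, b=0, c=0))    # 0 mV
```

Treat this strictly as a way to picture the knobs, not as MPT's actual math.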


----------



## thomasck

J7SC said:


> ...loosening tFAW from 16 to 20 might help, ditto for tRAS to 31. My setting below works well for both 6900XT and 3090 (two machines with AM4/16c)
> 
> View attachment 2515078


Thanks, I'm gonna try and report back.

Sent from my Pixel 2 XL using Tapatalk


----------



## ptt1982

Anyone here tried if the XTX cards can do 1.2v in MPT with the 22.6.1 drivers?

Without the 500mhz clock speed limitation (bug) that is. I’d love to crank up the voltage on my XTX. Would probably gain another 50-100mhz, 2-4% performance with maybe 5C extra.


----------



## LtMatt

I managed to improve my Timespy score with 21.6.1. 

Here is my previous best vs my now best scores.
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

My graphics score has gone up by 3.3%, but my CPU score has gone down 2.4%. Do I roll back the BIOS? If I do, I can only get around 21,900 on the GPU score even with 21.6.1, so I wonder if the extra performance has anything to do with SAM rather than 21.6.1. That said, SAM is enabled in both BIOS versions. 

Managed to find an old 6800 XT Timespy score I had, back when I was using an earlier version of Windows 10 and before I updated the BIOS, and my CPU score was 4.6% higher than what I can get now. :/
AMD Radeon RX 6800 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com) 

5950X was using PBO and Curve Optimizer in all tests and settings and memory speeds/timings were the same.
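
For anyone doing the same before/after comparisons, the deltas are just relative changes; a quick helper (the scores below are hypothetical, not my actual runs):

```python
def pct_change(old_score, new_score):
    """Percentage change from an old benchmark score to a new one."""
    return (new_score - old_score) / old_score * 100

print(round(pct_change(21900, 22623), 1))  # a +3.3%-style graphics gain
print(round(pct_change(10000, 9760), 1))   # a -2.4%-style CPU regression
```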


----------



## reqq

I'm in shock, guys. I was preparing for the AMD drop at 17:30 CET, and at 16:00 I tried the add-to-cart script to see if it was working... and the card was added to the cart?! AMD apparently decided to drop at a totally random time and I was super lucky  Super stoked right now.


----------



## CantingSoup

reqq said:


> Im in shock guys.. i was preparing for amd drop at 17.30 CET.. and at 16:00 i tried the add to cart script to see if it was working.. and then the card was added to the cart ???? AMD apperently decided to drop totally random and i was super lucky  Super stoked right now.


Will you be putting it under water?


----------



## Godhand007

ptt1982 said:


> Anyone here tried if the XTX cards can do 1.2v in MPT with the 22.6.1 drivers?
> 
> Without the 500mhz clock speed limitation (bug) that is. I’d love to crank up the voltage on my XTX. Would probably gain another 50-100mhz, 2-4% performance with maybe 5C extra.


Not working. Instead of lowering clocks, it lowers voltage now.


----------



## Godhand007

Enzarch said:


> Yes, can confirm futzing with the SVO settings did improve my clocks a bit, I simply set "a" to zero. I don't believe a positive offset will work, but I have not validated this yet.
> And yes, HWInfo seems to report actual voltage to core, whereas, say AIDA64, seems to report VID (BTW do not use AIDA, as it currently plays havoc with AMD drivers)
> 
> Here is an easy way to visualize what the SVO points do, probably not worth messing with 'b' or 'c'
> View attachment 2515092


Can you share your MPT screenshot?


----------



## D1g1talEntr0py

LtMatt said:


> I managed to improve my Timespy score with 21.6.1.
> 
> Here is my previous best vs my now best scores.
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> My graphics score has gone up by 3.3% but my CPU score has gone down 2.4%. Do i roll back the BIOS? If I do i can only get around 21,900 on the GPU score even with 21.6.1 so I do wonder if the extra performance is anything to do with SAM rather than 21.6.1. That said, SAM is enabled in both BIOS versions.
> 
> Managed to find an old 6800 XT Timespy score I had, back when i was using earlier versions of Windows 10 and before i updated BIOS and my CPU score was 4.6% higher than what i can get now. :/
> AMD Radeon RX 6800 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> 5950X was using PBO and Curve Optimizer in all tests and settings and memory speeds/timings were the same.


My CPU scores are always all over the place from what I assume to be PBO just doing PBO things. Most of the CPU benchmarks I run on my 5950X are inconsistent.


----------



## ZealotKi11er

LtMatt said:


> I managed to improve my Timespy score with 21.6.1.
> 
> Here is my previous best vs my now best scores.
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> My graphics score has gone up by 3.3% but my CPU score has gone down 2.4%. Do i roll back the BIOS? If I do i can only get around 21,900 on the GPU score even with 21.6.1 so I do wonder if the extra performance is anything to do with SAM rather than 21.6.1. That said, SAM is enabled in both BIOS versions.
> 
> Managed to find an old 6800 XT Timespy score I had, back when i was using earlier versions of Windows 10 and before i updated BIOS and my CPU score was 4.6% higher than what i can get now. :/
> AMD Radeon RX 6800 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> 5950X was using PBO and Curve Optimizer in all tests and settings and memory speeds/timings were the same.



Power Plan makes a difference.


----------



## Enzarch

Godhand007 said:


> Can you share your MPT screenshot?


Of course, not much to see, just zeroed out 'a' for now


----------



## LtMatt

ZealotKi11er said:


> Power Plan makes a difference.


Was also using the same PP, 1usmus universal each time.


----------



## D1g1talEntr0py

LtMatt said:


> Was also using the same PP, 1usmus universal each time.


I thought the regular "Balanced" power plan was the way to go now. Is there benefit to using the 1usmus power plan?


----------



## LtMatt

D1g1talEntr0py said:


> I thought the regular "Balanced" power plan was the way to go now. Is there benefit to using the 1usmus power plan?


I get slightly higher boost frequencies in lightly threaded apps and games using the 1usmus universal profile.


----------



## D1g1talEntr0py

LtMatt said:


> I get slightly higher boost frequencies in lightly threaded apps and games using the 1usmus universal profile.


Interesting. I'll give it a shot. You using Zen 3 chip?


----------



## LtMatt

D1g1talEntr0py said:


> Interesting. I'll give it a shot. You using Zen 3 chip?


5950X.


----------



## Kir

Kir said:


> Yeah, just not sure if it's worth it with this specific one. I don't know anymore lol
> 
> Yeah, true. My settings that give me 1000points higher score in Time Spy lead about 3-5% FPS increase in the few games I tested. Undervolting though drops the temps so much. In Assassins Creed Origins I can do 1110mV which drops the junction temp from 100°C to about 92°C at 1800rpm. Yeah, contacting them seems like a good idea.


Late update, but AMD Support (Germany) says opening up the card to remount/repaste/install waterblock would definitely void warranty. rip. Guess I'll just UV as much as I can and set the max clock a bit lower. Or sell it lol


----------



## ZealotKi11er

Kir said:


> Late update, but AMD Support (Germany) says opening up the card to remount/repaste/install waterblock would definitely void warranty. rip. Guess I'll just UV as much as I can and set the max clock a bit lower. Or sell it lol


Is that a German policy? It should not be the case in the US.


----------



## thomasck

Kir said:


> Late update, but AMD Support (Germany) says opening up the card to remount/repaste/install waterblock would definitely void warranty. rip. Guess I'll just UV as much as I can and set the max clock a bit lower. Or sell it lol


Funny thing, I was just on the phone with the retailer I bought the 6900XT from, and I asked about warranty vs water block. He said as long as you can put the old cooler back on for a warranty claim, there is no problem. I'm based in the UK.


----------



## Kir

thomasck said:


> Funny thing, I was just on the phone with the retailer I bought the 6900XT and I questioned about warranty vs water block and he said as long as you are able to put back the old cooler to use the warranty, there is no problem. I am based in the uk.


Yeah, I think retailers and custom gpu manufacturers have more lenient policies. If you buy from AMD directly, only the first buyer has any right to warranty.



Tbh, I just reread the warranty information and now I'm thinking it should actually be fine. The response from AMD support was just kind of ambiguous and sounded as though it would not be okay, saying it needs to be in its "original state", which of course it wouldn't be anymore if I touch it at all. But the warranty text only makes modifications relevant if AMD assesses that the damage was caused by them, and I don't see anything in it about "original state".
I guess I'll just have to ask for confirmation again.


----------



## thomasck

If original state means the original thermal compound in place, then that's difficult [emoji28]

Sent from my Pixel 2 XL using Tapatalk


----------



## Godhand007

Enzarch said:


> Of course, not much to see, just zeroed out 'a' for now
> View attachment 2515145


Seems like I am able to get around 50MHz more on clocks because of this (multiple GT2 loops tested). HWInfo is reporting higher voltages as well. Is there a detailed wiki/guide for SVO?


----------



## reqq

CantingSoup said:


> Will you be putting it under water?


No, I won't be putting anything under water, because 8 hours later Digital River declined the purchase... no idea what's going on..


----------



## Kir

Kir said:


> Yeah, I think retailers and custom gpu manufacturers have more lenient policies. If you buy from AMD directly, only the first buyer has any right to warranty.
> 
> 
> 
> Tbh, I just reread the warranty information and now I'm thinking it should actually be fine. The response from the AMD support just was kind of ambiguous and sounded as though it would not be okay. Saying it needs to be in its "original state", which of course it wouldn't be any more if I touch it at all. But the warranty states that that's only relevant if AMD assess that the damage was caused by that. I don't see anything about "original state".
> I guess I'll just have to ask for confirmation again.



Got a reply from a different AMD support person. Much clearer and friendlier response this time. They also confirmed that I can in fact take it apart, replace the thermal paste and even mount a waterblock. "Original state" only means it needs to be sent back properly assembled the way it came (meaning reverting any modifications; having different thermal paste is fine) in case of a warranty claim. And of course the damage must not have come from the modifications.


----------



## ZealotKi11er

Kir said:


> Got a reply form a different AMD support person. Much clearer and friendlier response this time. They also confirmed that I can in fact take it apart, replace thermal paste and even mount a waterblock. Original state only means it needs to be sent back properly assembled the way it came (meaning reverting any modifications, but having different thermal paste is fine) in case of any warranty claims. And of course the damage must not have happened from modifications.


From personal experience with modern GPUs, they have so many safety mechanisms (monitoring VRM, hotspot, edge and memory temperatures, plus current/voltage) that to kill a GPU it would have to be DOA or you'd have to physically break it.


----------



## ptt1982

How does this SVO work? I'm interested in anything safe that can raise the clocks 50MHz. It raises a static voltage, but which voltage, and how does it affect the vcore or SoC?


----------



## ZealotKi11er

ptt1982 said:


> How does this SVO work? I’m interested in anything safe that can raise the clocks 50mhz. It raises static voltage, but which voltage and how does it affect the vcore or soc?


It does nothing. It will not make your card go faster.


----------



## Godhand007

ptt1982 said:


> How does this SVO work? I’m interested in anything safe that can raise the clocks 50mhz. It raises static voltage, but which voltage and how does it affect the vcore or soc?


Yeah waiting for someone to provide some details on this.


----------



## Enzarch

ptt1982 said:


> How does this SVO work? I’m interested in anything safe that can raise the clocks 50mhz. It raises static voltage, but which voltage and how does it affect the vcore or soc?





Godhand007 said:


> Yeah waiting for someone to provide some details on this.


I posted a diagram in post #2076
Theoretically, you should be able to raise 'a' to gain a bit of voltage


----------



## ZealotKi11er

Enzarch said:


> I posted a diagram in post #2076
> Theoretically, you should be able to raise 'a' to gain a bit of voltage


In practice you are hard-limited based on the GPU you have. Can't go over that max vcore of 1.175/1.2.


----------



## J7SC

ZealotKi11er said:


> In practice you are hard limited based on the GPU you have. Cant go over that max vCore 1.175/1.2


...how 'hard' is the hard limit, you think? My priority is the PL anyway, which I got via MPT, but I recall seeing a vid that seemed to suggest Igor (from Igor's Lab > MPT) actually had 'partial' success raising the voltage limit w/ MPT.


----------



## Godhand007

ZealotKi11er said:


> In practice you are hard limited based on the GPU you have. Cant go over that max vCore 1.175/1.2


But that's the thing: according to HWInfo and Buildzoid, we are not reaching max voltage atm.


----------



## J7SC

...I seem to reach max voltage per HWInfo (pic is stock / pre-MPT)


----------



## Godhand007

J7SC said:


> ...I seem to reach max voltage per HWInfo (pic is stock / pre-MPT)
> 
> View attachment 2515326


Not happening for me (screenshot below). Mine is a reference card, BTW.


----------



## spears

Hey guys, I just made this account to report my issue. I am currently waiting to start the RMA process for my XFX Merc319 6900 XT Black.

Specs:
Gigabyte X570 Gaming X 1.1
Crucial 8GB x4
Seasonic 850W Gold 
Ryzen 5 5600X
Merc319 6900 XT

Last night while playing RDR2 at 1440p, my screen briefly flashed white and my card then lost display. I restarted my PC and still had no display; however, my monitor was still recognizing there was a source. Other display sources on my Samsung Odyssey G7 would show the bouncing 'No device' box. When set to the correct source, the screen would just turn off after briefly being black while "Display port 1" appeared in the top left. I think it was POSTing, but I do not know for sure, as I do not have integrated graphics with the Ryzen 5 5600X.

My instability seemed to start after I enabled SAM in the BIOS, after realizing it was disabled by default on the Gigabyte X570 Gaming X 1.1 (F33g firmware). After I did this, RDR2 would crash every 1.5-2 hours with 'Unknown Error: FFFFF'. I had the "Overclock GPU" preset and "Auto OC" preset set in Radeon Software/Wattman. Junction temps would get to the low 90s C, but I was told that was still in the ideal range; over 110C is too hot for that part of the card. The other parts of the card were well below that, at around 80C at the very most. I did not have the sysinfo overlay on last night during the crash. 

After the crash I reseated the card, tried a different display cable, tried different ports, tried an HDMI cable and a different monitor, and finally swapped cards with the old GTX 1080 system I had before building the new 6900XT system last month. I then took the GTX 1080 and placed it into my new system, in the same slot the 6900XT was in, and it works; that is how I am currently writing this. The 6900XT would not display on the old system. 

This is what I believe happened during the failure: DxgKrnl and Kernel-Power errors. I am still waiting to hear back from XFX support. I believe I have exhausted all troubleshooting options, and based on my research so far, many of these cards are not properly regulating their power draw. I think my card's failure had something to do with this and a defective failsafe. I think the card still 'works' but is just not outputting display.

I have only had this card for about 1.5 months, and the only games I played were around 60 hours of Horizon Zero Dawn and maybe 60+ hours of RDR2.


----------



## lestatdk

What's the point of SVO if we're still capped by max voltage? The problem people have seems to be that the max voltage is too low, and SVO will not solve that. Kinda pointless, unless perhaps you'd want higher voltage at some lower frequencies.


----------



## LtMatt

LtMatt said:


> I managed to improve my Timespy score with 21.6.1.
> 
> Here is my previous best vs my now best scores.
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> My graphics score has gone up by 3.3% but my CPU score has gone down 2.4%. Do i roll back the BIOS? If I do i can only get around 21,900 on the GPU score even with 21.6.1 so I do wonder if the extra performance is anything to do with SAM rather than 21.6.1. That said, SAM is enabled in both BIOS versions.
> 
> Managed to find an old 6800 XT Timespy score I had, back when i was using earlier versions of Windows 10 and before i updated BIOS and my CPU score was 4.6% higher than what i can get now. :/
> AMD Radeon RX 6800 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> 5950X was using PBO and Curve Optimizer in all tests and settings and memory speeds/timings were the same.


Got some slightly better scores on Firestrike Standard, Timespy and Timespy Extreme.

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Best I can do on air.


----------



## ZealotKi11er

For some reason, with the new driver I get much lower clock speeds in FS and PR. If anything, they now have to be as low as my TS clocks.


----------



## LtMatt

ZealotKi11er said:


> For some reason the new driver I get much less clock speed with fs and pr. If anything now they have to be as low as ts clocks.


I see lower clock speeds in Firestrike now also, but I can now clock a bit higher in Timespy than i could previously.


----------



## ZealotKi11er

LtMatt said:


> I see lower clock speeds in Firestrike now also, but I can now clock a bit higher in Timespy than i could previously.


What are your TS clocks?


----------



## LtMatt

ZealotKi11er said:


> What are your TS clocks?


Think I was running 2735 or 2740. Previously I could only run around 2700. In FS Extreme I've completed runs at 2825 before, but now I can't go over 2770.


----------



## J7SC

...the new driver initially meant that my MPT mods were gone (makes sense, given the Windows registry location). The new driver also took maybe 20MHz or so off the top on my card, but as stated before, FPS actually increased marginally in TS and PR... prepping the 6900 XT for water this week; not bad timing, given a.) MPT up to 400W now, and b.) the record heat wave


----------



## Godhand007

spears said:


> Hey guys I just made this account to report my issue I am currently waiting to start the RMA process with, for my XFX Merc319 6900 XT Black.
> 
> Specs:
> Gigabyte x570 Gaming X 1.1
> Crucial 8gb x4
> Season 850w Gold
> Ryzen 5 5600x
> Merc319 6900 XT
> 
> Last night while playing RDR2 at 1440p my screen briefly flashed white and my card then lost display. I restarted my PC and still did not have display. However my monitor was still recognizing there was a source. Other display sources on my Samsung Odyseey G7 would show the bouncing 'No device" box. When set to the correct source the screen would just turn off after briefly being black while "Display port 1" appeared on the top left. I think it was posting but I do not know for sure as I do not have integrated graphics with the Ryzen 5 5600X
> 
> My instability seemed to start after I enabled SAM in the BIOS after realizing it was disabled by default on the Gigabyte X570 Gaming X 1.1 (F33g firmware) After I did this RDR2 would crash every 1.5-2 hours with 'Unknown Error: FFFFF' I had the "Overclock GPU" preset and "Auto OC" preset in Radeon Software and Wattman set. Junction temps would get to low 90Cs, but I was told that was still in the ideal range. Over 110C was too hot for that part of the card. The other parts of the card were well below that at around 80C at the very most. I did not have the sysinfo overlay on last night during the crash.
> 
> After the crash I reseated the card, tried a different display cable, tried different ports, tried an HDMI cable, a different monitor, and finally swapped cards with my old GTX1080 system I had before building the new 6900XT system last month. I then took the GTX1080 and placed it into my new system in the same slot the 6900XT was in and the GTX1080 works, and is how I am currently writing this. The 6900XT would not display on the old system.
> 
> This is what I believe the events were that happened during failure. DxgKernal and Kernel-Power I am still waiting to hear back from XFX support. I believe I have exhausted all troubleshooting options and based on my research so far many of these cards are not properly regulating their power draw. I think mine failing had something to with this and a defective failsafe. I think the card still 'works' but is just not outputting display.
> 
> I only had this card for about 1.5 months, and the only games I played were about 60 hours of Horizon Zero Dawn and 60+ hours of RDR2.


Sorry to hear about your issue. If the RMA process is easy for you, go ahead with it.


----------



## Godhand007

lestatdk said:


> What's the point of SVO if we're still capped by max voltage ? Seems the problem people have is that max voltage is too low, SVO will not solve that. Kinda pointless unless you'd want higher voltage at some lower frequencies perhaps


If you go through the posts above, the point that is being discussed is that the max voltage of 1175mv is not being reached (according to HwInfo and Buildzoid). We are not talking about increasing voltage beyond 1175mv.


----------



## ZealotKi11er

Godhand007 said:


> If you go through the posts above, the point that is being discussed is that the max voltage of 1175mv is not being reached (according to HwInfo and Buildzoid). We are not talking about increasing voltage beyond 1175mv.


Yeah, because of vdroop. You will never reach exactly 1.175v. The limiter does not check actual voltage but set voltage.
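As a toy illustration of that set-vs-actual gap (all numbers here are made up for the sketch, assuming a simple linear loadline, not measured 6900 XT values):

```python
# Toy model of vdroop: the limiter enforces the *set* voltage, while sensor
# software reports what is left after the I*R drop across the power path.
# Both the loadline resistance and the currents are hypothetical values.
def reported_voltage(v_set, current_a, loadline_mohm=0.3):
    """Sensor-side voltage after droop: V_reported = V_set - I * R."""
    return v_set - current_a * loadline_mohm / 1000.0

v_set = 1.175  # driver-side voltage cap (V)
for amps in (100, 200, 300):
    print(f"{amps:>3} A -> reported {reported_voltage(v_set, amps):.3f} V")
```

So even with the cap requested, more load current means a lower reading, which is why HWiNFO never shows exactly 1.175v under load.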


----------



## J7SC

ZealotKi11er said:


> Yeah, because of vdroop. You will never reach exactly 1.175v. The limiter does not check actual voltage but set voltage.


I'm also wondering why high-PL MPT seems to exaggerate that compared to stock, at least according to sensor software


----------



## Godhand007

ZealotKi11er said:


> Yeah, because of vdroop. You will never reach exactly 1.175v. The limiter does not check actual voltage but set voltage.


I have seen it reach 1.175v after messing around with SVO and one of the earlier posts also mentions max 1.175v.


----------



## No-one-no1

Godhand007 said:


> I have seen it reach 1.175v after messing around with SVO and one of the earlier posts also mentions max 1.175v.


Still no software-only hack to get voltages over 1.175/1.2 then.
Might have to get around to soldering in the Elmor I2C interface.


----------



## Enzarch

J7SC said:


> I'm also wondering why high-PL MPT seems to exaggerate that compared to stock, at least according to sensor software


This is 100% expected; more current = more vdroop.

Just got my EVC2 hooked up, let's see how far this reference card can go.


----------



## J7SC

Enzarch said:


> This is 100% expected, More current = more Vdroop
> 
> Just got my EVC2 hooked up, lets see how far this reference card can go.


...with the higher current per custom MPT, my delta between GPU and Hotspot is also increasing in relative terms. Unfortunately, it is too hot (record ambient temps here) to even mount the water block for the 6900 XT. I rather sit in front of the air conditioner for now...


----------



## Godhand007

Enzarch said:


> This is 100% expected, More current = more Vdroop
> 
> Just got my EVC2 hooked up, lets see how far this reference card can go.


Hmm, so at stock PL, it would go up to max voltage. Still, messing with SVO allows it to reach max voltage even with higher PL limits.


----------



## HyperC

The new drivers aren't the best: 22,581 so far testing TS.


----------



## LtMatt

HyperC said:


> new drivers aren't the best 22,581 so far testing TS


I improved my graphics score in TS by around 800 points going to 21.6.1 from 21.4.1.


----------



## cfranko

Which air-cooled AIB 6900 XT is the best? I have an ASRock Phantom Gaming 6900 XT but I want to change it.


----------



## LtMatt

cfranko said:


> Which air cooled AIB 6900 XT is the best? I have a ASRock Phantom Gaming 6900 XT but I want to change it.


Define best?

The XFX Speedster Merc I believe to be the best overall air cooled card, certainly in terms of looks, noise and cooling capacity.

Just a shame there's no XTXU version.


----------



## cfranko

LtMatt said:


> Define best?
> 
> The XFX Speedster Merc I believe to be the best overall air cooled card, certainly in terms of looks, noise and cooling capacity.
> 
> Just a shame there's no XTXU version.


XTXU isn’t really important for me. I am looking for a card that actually looks good and has good cooling. My current card has good cooling but looks terrible. I was thinking of getting a Red Devil, but I am not sure if that would be a downgrade in terms of cooling.


----------



## LtMatt

cfranko said:


> XTXU isn’t really important for me. I am looking for a card that actually looks good and has good cooling. My current card has good cooling but looks terrible. I was thinking of getting a Red Devil, but I am not sure if that would be a downgrade in terms of cooling.


My above answer is correct then in your terms of 'best'.


----------



## cfranko

LtMatt said:


> My above answer is correct then in your terms of 'best'.


Yep, I also really want the Merc 319 but it's just so expensive. I think I am going to go with the Red Devil, thanks though.


----------



## LtMatt

cfranko said:


> Yep, I also really want the Merc319 but its just so expensive. I think I am going to go with the Red Devil, thanks though.


Furry muff. What is the price difference in your location? 

In the UK they are the same price pretty much.


----------



## cfranko

LtMatt said:


> Furry muff. What is the price difference in your location?
> 
> In the UK they are the same price pretty much.


The Red Devil goes for 15,000 Turkish lira, which is about 1,700 USD, while the Merc goes for 18,000 Turkish lira, about 2,050 USD. This price difference is probably caused by the Merc having limited stock. I can sell my own GPU for 15,000 lira right now, which makes the Red Devil basically a free upgrade.


----------



## LtMatt

cfranko said:


> The Red Devil goes for 15.000 Turkish lira which is 1700 USD while the Merc goes for 18.000 Turkish lira which is 2050 USD, this price difference is probably caused by the Merc having limited stock. I can sell my own gpu for 15.000 liras right now which is basically a free upgrade to a Red devil.


That's some difference, I can understand your reluctance to spend all that extra money.


----------



## Takla

Godhand007 said:


> Not happening for me (screenshot below). Mine is a reference card BTW.
> 
> View attachment 2515327


More likely that reference cards report voltage after vdroop and aftermarket cards do not (because the aftermarket PCB uses a different controller which isn't supported by HWiNFO, or which simply cannot report vdroop).


----------



## Takla

.


----------



## HyperC

LtMatt said:


> I improved my graphics score in TS by around 800 points going to 21.6.1 from 21.4.1.


I wish I could say the same. Maybe it's my newer mobo BIOS, dunno.


----------



## spears

HyperC said:


> I wish i could say the same , Maybe it's my newer mobo bios dunno


Do you have SAM enabled? Mine was turned off by default on my Gigabyte X570 Gaming X board.
However, when I turned it on it aggravated what must have been the inherent instability of my card.


----------



## LtMatt

HyperC said:


> I wish i could say the same , Maybe it's my newer mobo bios dunno


Could be. I saw lower performance on the latest BIOS for ASUS, so I rolled back to 3003 on my Crosshair.


----------



## jfrob75

Recently updated to the GB Aorus 6900 XT Extreme WB, replacing my PowerColor reference 6900 XT, which was on water as well. Here are my latest and best Time Spy results with the GB GPU. It managed to break the 23,000 graphics score level.

I had set min freq to 2690 MHz and max freq to 2790 MHz for this run. Memory was set to fast timings @ 2138 MHz. MPT power was set to 400 watts.


----------



## jonRock1992

cfranko said:


> XTXU isn’t really important for me. I am looking for a card that actually looks good and has good cooling. My current card has good cooling but looks terrible. I was thinking of getting a Red Devil, but I am not sure if that would be a downgrade in terms of cooling.


I wouldn't get the Red Devil. It's loud and its cooler is kinda meh. I would always thermal throttle while OCing in demanding games. I ended up tearing mine down and it appeared as though everything was making proper contact, but IDK. I put a waterblock on mine, but I haven't been able to test it because my freaking pump came DOA!


----------



## LtMatt

jfrob75 said:


> Recently updated to the GB Aorus 6900 XT Extreme WB, replacing my PowerColor reference 6900 XT, which was on water as well. Here are my latest and best Time Spy results with the GB GPU. It managed to break the 23,000 graphics score level.
> 
> I had set min freq to 2690 MHz and max freq to 2790 MHz for this run. Memory was set to fast timings @ 2138 MHz. MPT power was set to 400 watts.
> 
> View attachment 2515592


Very nice score!


----------



## J7SC

...quick question: I'm interested in flashing a BIOS from the 'H' chip model onto my otherwise identical-PCB non-H model. I don't necessarily expect 1.2v instead of 1.175v, but would there be other pitfalls?


----------



## LtMatt

J7SC said:


> ...quick question: I'm interested in flashing a BIOS from the 'H' chip model onto my otherwise identical-PCB non-H model. I don't necessarily expect 1.2v instead of 1.175v, but would there be other pitfalls?


Yes, it won't work.


----------



## J7SC

LtMatt said:


> Yes, it won't work.


Thanks  , but :-(


----------



## spears

I did some further testing with my XFX 6900XT Black.

At first the card was showing in Device Manager as Microsoft Basic Display with error Code 31. After I DDU'd that machine and reinstalled drivers, the card became recognized by name but displayed error Code 43 and still would not display.

Anyone run into this before?


----------



## cfranko

jonRock1992 said:


> I wouldn't get the Red Devil. It's loud and its cooler is kinda meh. I would always thermal throttle while OCing in demanding games. I ended up tearing mine down and it appeared as though everything was making proper contact, but IDK. I put a waterblock on mine, but I haven't been able to test it because my freaking pump came DOA!


Thanks for the heads up


----------



## cfranko

jfrob75 said:


> Recently updated to the GB Aorus 6900 XT Extreme WB, replacing my PowerColor reference 6900 XT, which was on water as well. Here are my latest and best Time Spy results with the GB GPU. It managed to break the 23,000 graphics score level.
> 
> I had set min freq to 2690 MHz and max freq to 2790 MHz for this run. Memory was set to fast timings @ 2138 MHz. MPT power was set to 400 watts.
> 
> View attachment 2515592


Is it possible to get a graphics score like this on air? I have an ASRock Phantom 6900 XT on air; I set it to 350 watts in MPT plus an additional +15% in Wattman, which ended up as a total power limit of around 410 watts. However, the maximum graphics score I could get was 21,000. Also, in Time Spy I can't get my max frequency above 2630 MHz; even 2635 would crash, so 2630 is the hard limit. How did you do 2790 as max? I know the silicon lottery is a thing, but this seems cherry-picked or something, idk. It is much better than mine in terms of clocks.
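As a rough sanity check of that power-limit math (assuming, as a simplification, that the Wattman percentage slider simply multiplies the MPT limit):

```python
# Sketch of stacking the MPT power limit with the Wattman percentage slider.
# Assumes the +15% multiplies the MPT-set limit directly, which is a
# simplification of how the driver actually combines them.
def total_power_limit(mpt_watts, wattman_percent):
    """Effective power limit in watts after the percentage offset."""
    return mpt_watts * (1 + wattman_percent / 100.0)

print(f"{total_power_limit(350, 15):.1f} W")
```

That lands at roughly 400 W, in the ballpark of the ~410 W figure above; the exact number depends on which internal limits the slider actually scales.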


----------



## D1g1talEntr0py

cfranko said:


> Is it possible to get a graphics score like this on air? I have an ASRock Phantom 6900 XT on air; I set it to 350 watts in MPT plus an additional +15% in Wattman, which ended up as a total power limit of around 410 watts. However, the maximum graphics score I could get was 21,000. Also, in Time Spy I can't get my max frequency above 2630 MHz; even 2635 would crash, so 2630 is the hard limit. How did you do 2790 as max? I know the silicon lottery is a thing, but this seems cherry-picked or something, idk. It is much better than mine in terms of clocks.


I seriously doubt it. It seems like putting these cards on water makes a huge difference with the frequency. I have an Aorus 6900XT Master and the highest stable setting I can achieve is 2675MHz, which results in around a 2625-2635MHz boost and an effective clock of around 2620MHz, depending on the game or benchmark and whether it is CPU- or GPU-bottlenecked.


----------



## Godhand007

spears said:


> I did some further testing with my XFX 6900XT Black.
> 
> At first the card was showing in Device Manager as Microsoft Basic Display with error Code 31. After I DDU'd that machine and reinstalled drivers, the card became recognized by name but displayed error Code 43 and still would not display.
> 
> Anyone ran into this before?


Yeah, something is definitely not right here.


----------



## ZealotKi11er

D1g1talEntr0py said:


> I seriously doubt it. Seems like putting these cards on water makes a huge difference with the frequency. I have an Aorus 6900XT Master and the highest stable freq. I can achieve is 2675MHz which results in around 2625MHz - 2635MHz boost and an effective clock of around 2620MHz depending on the game or benchmark. CPU bottleneck vs. GPU bottleneck.


I have not seen much difference going from 80-100C down to 40-60C.


----------



## jfrob75

cfranko said:


> Is it possible to get a graphics score like this on air? I have an ASRock Phantom 6900 XT on air; I set it to 350 watts in MPT plus an additional +15% in Wattman, which ended up as a total power limit of around 410 watts. However, the maximum graphics score I could get was 21,000. Also, in Time Spy I can't get my max frequency above 2630 MHz; even 2635 would crash, so 2630 is the hard limit. How did you do 2790 as max? I know the silicon lottery is a thing, but this seems cherry-picked or something, idk. It is much better than mine in terms of clocks.


Unfortunately, I doubt it. I believe my card's chip is what is referred to as an XTXU chip, or something like that. Water cooling certainly helps, plus I think I lucked out in the silicon lottery. I might be able to push the clocks higher once cooler temperatures arrive during the winter months, or if I figure out a way to chill the water. On my reference 6900XT, which was also water cooled, I could not get Time Spy to run successfully at these clocks either; the highest I could get was 2750 MHz, and that was sporadic at best.


----------



## D1g1talEntr0py

ZealotKi11er said:


> I have not seen much difference going from 80-100C down to 40-60C.


Interesting. I guess it really is down to the silicon lottery.


----------



## weleh

There's pretty much zero difference between air and water on these cards, since they do not have a V/F curve tied to temperature like most Nvidia GPUs. It only matters if you hit hotspot throttle temps.

My previous 6800XT Nitro+ SE happily pulled 400W on air at 2750 MHz. Zero throttle, and one of the fastest 6800XTs I've seen.

Running on water does let you run a higher PL thanks to the thermal headroom, though, and that's about it.


----------



## 6u4rdi4n

Excellent point. These cards don't really scale with temperature like the Nvidia cards do, but you do get more thermal headroom. Personally I've always used water cooling to get "max performance" while keeping the noise to a minimum. It's as quiet as a library here!


----------



## ZealotKi11er

D1g1talEntr0py said:


> Interesting. I guess it really is down to the silicon lottery.


I might be able to do a bit more for a quick benchmark run, like 15-25MHz, but nothing crazy.


----------



## J7SC

...I disagree, at least to some extent. While the way temps feed the boost/throttle algorithm is different from Nvidia (I use both a 6900XT and a 3090), temps play a significant role. A simple check of HWBot's summary for the 6900 XT yields the following average GPU speeds by cooling type:

air: 2667 MHz
water: 2727 MHz
cascade: 2877 MHz
LN2: 2908 MHz


----------



## weleh

J7SC said:


> ...I disagree, at least to some extent. While the boost vs throttle algorithm input of temps is different from NVidia (I use both a 6900XT and a 3090), temps play a significant role - a simple check at HWBot's summary for the 6900 XT yields the following average GPU speeds by cooling used:
> 
> air 2667
> water 2727
> cascade 2877
> ln2 2908


LN2 and Cascade are both subambient and will obviously positively impact core clocks.

However, water vs air is really irrelevant for these cards unless you're close to thermal throttle temps. The difference you see is more likely due to silicon quality and binning by some AIBs, rather than water actually making any difference.

There's no hard data confirming that these cards actually care about being at 50C or at 80C... 

Just as an example, my Toxic is binned to do at least 2660 MHz on the core via TrixxBoost; being on water has nothing to do with that binning process at all.


----------



## ZealotKi11er

J7SC said:


> ...I disagree, at least to some extent. While the boost vs throttle algorithm input of temps is different from NVidia (I use both a 6900XT and a 3090), temps play a significant role - a simple check at HWBot's summary for the 6900 XT yields the following average GPU speeds by cooling used:
> 
> air 2667
> water 2727
> cascade 2877
> ln2 2908


Air and liquid are very close. Air also has the possibility of thermal throttling at anything over 300W.


----------



## ptt1982

(Edit)

Quick report on the new 21.6.2 drivers: I had to lower my clocks by 50MHz, they seem overall slightly less stable, and they run 3-4C hotter. This is based only on Time Spy runs. So far the best driver I've seen has been 21.6.1, but your mileage may vary.


----------



## BIaze

Currently on 21.5.2, should I stay for now or will upgrading/downgrading net any benefit?


Also, does the 6900XT have notoriously bad DX11 performance, or am I just unlucky? I'm getting horrible performance in BF4 and I cannot run it in Mantle, so I'm stuck on DX11.

DX12 games are fine though


----------



## LtMatt

ptt1982 said:


> (Edit)
> 
> Quick report on the new 21.6.2 drivers = Had to lower my clocks by 50mhz, and they seem overall slightly more unstable, and run 3-4C hotter. This is based only on Timespy runs. So far the best driver I've seen has been 21.6.1, but your mileage may vary.


I was just about to ask if anyone has tested them, cheers.

I will do some Firestrike and Timespy tests shortly on 21.6.1 vs 21.6.2 and provide a second opinion.

21.6.1 has been the best so far for me so will be surprised if 21.6.2 can top it on Timespy at least.


----------



## J7SC

LtMatt said:


> I was just about to ask if anyone has tested them cheers.
> 
> I will do some Firestrike and Timespy tests shortly on 21.6.1 vs 21.6.2 and provide a second opinion.
> 
> 21.6.1 has been the best so far for me so will be surprised if 21.6.2 can top it on Timespy at least.


...same for me here, 21.6.1 was the best so far (out of a total of two, the card is relatively new).

I am looking forward to your results, as we are just coming out of an incredible 'heat wave' in the Pacific Northwest. The only things I'm running are the portable AC and a few w-cooled systems.


----------



## jonRock1992

My new pump/res is coming in the mail today. Hopefully getting this Red Devil Ultimate under water tonight. Already got everything set up; just gotta attach the hoses and fill it up. I'll report back with some bench results.

Update: Well, my package was delayed. Why is it so hard for me to get a water pump? The first one I tried was DOA, and now I tried getting a different one from Amazon and there was an error with the delivery. So annoying.


----------



## LtMatt

J7SC said:


> ...same for me here, 21.6.1 was the best so far (out of a total of two  , card is relatively new).
> 
> I am looking forward to your result as we are just coming out of an incredible 'heat wave' in the Pacific Northwest - the only thing I'm running is the portable AC and a few w-cooled systems


Results are in: margin-of-error stuff. Basically no difference.

Ran the test twice on each driver, using the best result (overall score); 21.6.2 was a few points faster in graphics score, but there's really nothing in it.

6900 XT Stock Core Clock @2379Mhz/2479Mhz / 2124Mhz
21.6.1
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

6900 XT Stock Core Clock @2379Mhz/2479Mhz / 2124Mhz
21.6.2
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

I would recommend reviewing the 21.6.2 release notes and deciding if it is worth it for you to upgrade. For example, it's probably worth it if you intend to play Doom Eternal with ray tracing.


----------



## ptt1982

LtMatt said:


> Results are in: margin-of-error stuff. Basically no difference.
> 
> Ran the test twice on each driver, using the best result (overall score) and 21.6.2 was a few points faster in graphics score, but really nothing in it.
> 
> 6900 XT Stock Core Clock @2379Mhz/2479Mhz / 2124Mhz
> 21.6.1
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> 6900 XT Stock Core Clock @2379Mhz/2479Mhz / 2124Mhz
> 21.6.2
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> I would recommend reviewing the 21.6.2 release notes and deciding if it is worth it for you to upgrade. For example, it's probably worth it if you intend to play Doom Eternal with ray tracing.


Thanks for sharing the results! It's quite interesting that people with certain AIB/reference cards have completely different experiences with drivers. It seems my Red Devil is very sensitive to drivers, given that one simple driver update can shave 50MHz off the core speed. Maybe a technical person can explain this in detail, but I suspect it has something to do with the voltage regulators and power phases (I don't know exactly what they are, but I understand they control power delivery, and the more you have, the more control you have).

Looking at the vdroop reports, it seems AMD still has a long way to go until the 6900xt becomes fully stable and its full potential is unleashed. Case in point: here's Digital Trends' article about a German site which found RDNA 2 cards improving performance through driver updates faster than Ampere: The RX 6800 XT Runs 9% Faster Than At Launch. Here's How | Digital Trends


----------



## lawson67

I have a PowerColor Red Devil RX 6800 XT that, new out of the box, was running at 100C+ on the hotspot in 3DMark TS and in games. I took it apart and applied some diamond paste I had from years ago, and it did nothing for the temps, so I thought it was just a badly designed cooler even though it looks massive. Then I got the balls to strip it down again, armed with some clear nail polish I bought off Amazon and some Thermal Grizzly Conductonaut. I watched about three YouTube videos of people applying LM, then stripped it down and really took my time to do a clean job. I didn't replace any of the original thermal pads, as they looked fine to me. After I put it back together, I took it back to the PC with my heart rate through the roof, hoping I had not buggered it up. I turned on the PSU, hit the power button, and the PowerColor lit up, booted to Windows, and on the desktop HWiNFO64 reported 26C. I then fired up 3DMark and did a full TS Extreme stress test. Normally that would hit about 101-102C, but now the max I got was 83C. To say I am happy is an understatement; I am so pleased I did it and very, very happy with the results!


----------



## jonRock1992

If it needs liquid metal to perform normally, then it's a bad heatsink design. I wasn't happy with my Red Devil Ultimate's stock cooler at all, and I didn't wanna use liquid metal just yet because the card was too expensive and the risk is too high. I used liquid metal on my last card after I had it for a few years, and maybe I'll do the same with this one. I won't be getting a PowerColor card ever again.


----------



## lawson67

jonRock1992 said:


> If it needs liquid metal to perform normally, then it's a bad heatsink design. I wasn't happy with my Red Devil Ultimate's stock cooler at all, and I didn't wanna use liquid metal just yet because the card was too expensive and the risk is too high. I used liquid metal on my last card after I had it for a few years, and maybe I'll do the same with this one. I won't be getting a PowerColor card ever again.


I agree it should not need LM to perform well. However, if you take your time and use some nail polish around the GPU die, you should be fine using LM; lots of people are using it, and we have a nickel-plated heatsink, so it should not have any negative effects. All I can say is I am so pleased I did it; the card now operates as it should have out of the box. One thing I noted after applying the LM is that the whole heatsink is now warm, all the way down towards the end of the card by the red lights. Before the LM, that part of the heatsink was always cool; now the card appears to be using all of the heatsink. Anyhow, I am now very happy with my card at last!


----------



## Bart

I think with most of these cards, a simple repaste with a good standard paste like MX-4 (or Kingpin KPX in my case) will yield similar results. For those who don't want to risk liquid metal but still have the stones to take their GPU apart, I think it's worth it. A lot of stock GPUs aren't pasted well from the factory.

EDIT: that's not to poop on lawson67's results; that took serious cojones, and I respect that. 🙃


----------



## jonRock1992

I just wish manufacturers would use liquid metal by default on these high-end GPUs.


----------



## ptt1982

jonRock1992 said:


> I just wish manufacturers would use liquid metal by default on these high-end GPUs.


Absolutely. If they can use liquid metal in a console such as the PS5 (also an RDNA 2 GPU), I'm sure the manufacturers can figure it out for GPUs. An LM-based AIO, for example, would be great to have.


----------



## BIaze

Is repasting the Strix LC with LM worth it, or will it cause staining later down the line?


----------



## ZealotKi11er

ptt1982 said:


> Absolutely. If they can use liquid metal in a console such as the PS5 (also an RDNA 2 GPU), I'm sure the manufacturers can figure it out for GPUs. An LM-based AIO, for example, would be great to have.


They probably can, but it will add cost.


----------



## weleh

LM on direct die isn't even worth it though.


----------



## Bart

weleh said:


> LM on direct die isn't even worth it though.


This!! Putting LM on stock GPUs would be so stupid. Most of us are gonna put water blocks on these things, so why bother using liquid metal on a garbage air cooler that can't even compare to a proper water block?


----------



## Starkinsaur

weleh said:


> LM on direct die isn't even worth it though.


What's your reason for saying this? Direct-die on a GPU is the most worthwhile application of LM in enthusiast PCs: it's the chip with the highest heat density.



Bart said:


> This!! Putting LM on stock GPUs would be so stupid. Most of us are gonna put water blocks on these things, so why bother using liquid metal on a garbage air cooler that can't even compare to a proper water block?


Perhaps if they used LM from the factory, there would be fewer people replacing stock coolers. Also, you say "us", but we're a small portion of their target market.

I don't think it'd be that stupid. The cards would perform better, so they would be quieter and/or cooler.

However, it would have some disadvantages: a nickel-plated cold plate required, more difficult application, risk of catastrophic failure, reduced user serviceability.

On _high end_ enthusiast products which _are_ targeted at us, it'd be a reasonable thing for a manufacturer to at least offer, imo. Like jonRock1992 said.


----------



## lawson67

Starkinsaur said:


> What's your reason for saying this? Direct-die on a GPU is the most worthwhile application of LM in enthusiast PCs: it's the chip with the highest heat density.
> 
> 
> 
> Perhaps if they used LM from the factory, there would be fewer people replacing stock coolers. Also, you say "us", but we're a small portion of their target market.
> 
> I don't think it'd be that stupid. The cards would perform better, so they would be quieter and/or cooler.
> 
> However, it would have some disadvantages: a nickel-plated cold plate required, more difficult application, risk of catastrophic failure, reduced user serviceability.
> 
> On _high end_ enthusiast products which _are_ targeted at us, it'd be a reasonable thing for a manufacturer to at least offer, imo. Like jonRock1992 said.


Totally agree with this. Dropping 20C by applying liquid metal between the GPU die and the stock cooler was clearly worth the effort for me, and not a stupid thing to do; 20C is a massive drop in temps, and not everyone wants to bang a water block on their GPU. I personally believe that as GPU manufacturers put out cards pulling over 300W, regular TIM is just not conductive enough to transfer that amount of heat into such massive coolers. The heatsink on my card was only hot in the area directly under the die; further down the card it was cool. After applying the LM, the heatsink works much more efficiently due to LM's much higher conductivity, and the heat is now dissipated along the whole heatsink. I also believe LM will eventually become standard for GPU manufacturers as they push more wattage through these cards; with such massive coolers, regular silicone-based TIM is clearly not conductive enough, which is why Sony switched to LM on the PS5. And I don't believe you will be buying a factory water-cooled PS5 any time soon (cost and practicality), so clearly the best option for them, and for me, was LM. The best thermal conductivity I have found for regular paste is 12.5 W/mK, vs LM at 73.0 W/mK.
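For what it's worth, here's a back-of-the-envelope Fourier's-law sketch of what those conductivity numbers mean for the temperature drop across the TIM layer. The power, bond-line thickness, and die area below are assumed round figures for illustration, not measurements:

```python
# Estimate of the temperature drop across a uniform TIM layer using
# Fourier's law: dT = P * t / (k * A). Power, bond-line thickness, and
# die area are assumed round numbers, not measured values.
def tim_delta_t(power_w, thickness_m, k_w_per_mk, area_m2):
    """Temperature drop (C) across a uniform TIM layer under even flux."""
    return power_w * thickness_m / (k_w_per_mk * area_m2)

P = 300.0    # watts through the die (assumed)
t = 50e-6    # 50 micron bond line (assumed)
A = 520e-6   # ~520 mm^2 Navi 21 die, in m^2
for name, k in (("paste, 12.5 W/mK", 12.5), ("LM, 73.0 W/mK", 73.0)):
    print(f"{name}: {tim_delta_t(P, t, k, A):.2f} C across the TIM")
```

Under these uniform-flux assumptions, the conductivity gap alone is worth only a couple of degrees; a much larger real-world drop usually points at a thicker or uneven bond line, poor contact, or the far higher local heat flux at the hotspot.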









PlayStation 5 uses liquid metal — here’s why that’s cool (venturebeat.com)


----------



## weleh

Are you sure you got a 20C drop from going from paste to liquid metal?
Are you sure the temperature drop wasn't from a bad paste job in the first place?

I repasted my GPU and also took 15C off the hotspot temps under heavy load, and it wasn't because of liquid metal; it was because the stock paste job was a disaster and I used a better quality paste too.

LM on a direct die is just asking for trouble, and manufacturers will never go for LM on their GPUs, ever. They will turn to thermal pads before LM, that's for sure.


----------



## weleh

Here a bunch of data from someone on another discord about LM vs Kryonaut at 600W.

All figures are deltas versus water temperature at 600W:

| TIM | Load (600W) | Core Δ vs water | Hotspot Δ vs water |
| --- | --- | --- | --- |
| Kryonaut (a few weeks old) | 5-second instant rise | 11.6C | 23.5C |
| Kryonaut (a few weeks old) | 5-minute sustained | 12.7C | 24.2C |
| Conductonaut, day 1 | 5-second instant rise | 10.3C | 21.5C |
| Conductonaut, day 1 | 5-minute sustained | 11.4C | 22.3C |
| Conductonaut, day 7 | 5-second instant rise | 9.7C | 19.8C |
| Conductonaut, day 7 | 5-minute sustained | 10.2C | 20.4C |
| Conductonaut, day 15 | 5-second instant rise | 9.5C | 19.7C |
| Conductonaut, day 15 | 5-minute sustained | 9.9C | 20.1C |

Although I observed a 0.2-0.3C improvement from last week to this week, this is very possibly within margin of error. Now, if I keep observing this lower number next week and the week after that, it's possible that it's real. So far, though, the improvement over Kryonaut paste is 2.8C on core temp and 4.1C on hotspot temp.


----------



## lawson67

weleh said:


> Are you sure you got a 20C drop from going paste to liquid metal?
> Are you sure the temperature drop wasn't from a bad paste job in the first place?
> 
> I repasted my GPU and also shaved 15C off the hotspot temps under heavy load, and it wasn't because of liquid metal; it was because the stock paste job was a disaster, and I used a better quality paste too.
> 
> LM on a direct die is just asking for trouble, and manufacturers will never go for LM on their GPUs ever... they'll turn to thermal pads before LM, that's for sure.


I am sure the 20C drop was from using LM. I had already torn it down and re-pasted it with Antec Formula 7 Nano Diamond, which resulted in about a 2 degree drop in temps in TimeSpy and in games. Then I took it apart again in exactly the same manner, used the LM, and bang: a 20C drop in temps. So yep, I truly believe it was the LM, and the specs point to that too: the best regular paste (Kryonaut) has a max thermal conductivity of 12.5 W/mK vs 73.0 W/mK for LM. Also, if you take precautions, i.e. coat the components around the die in nail varnish, why should it lead to disaster if you're not a ham-fisted idiot?


----------



## weleh

No idea what that paste is. Probably terrible.

Also, I seriously doubt you managed to shave 20C by going from TIM to liquid metal. Nothing on the Internet points in that direction except your own account.


----------



## lawson67

weleh said:


> No idea what that paste is. Probably terrible.
> 
> Also, I seriously doubt you managed to shave 20C by going from TIM to liquid metal. Nothing on the Internet points in that direction except your own account.


"No idea what that paste is. Probably terrible." = learn how to use Google.
"I seriously doubt you managed to shave 20C from going TIM to LTIM" = believe what you want, no one cares, especially not me. However, I DID manage to shave off 20C by using LM.

"Nothing on the Internet points in that direction except your own account." = see my first Google result below. Furthermore, maybe you should email Thermal Grizzly and let them know that the thermal conductivity difference they quote between their Kryonaut and their Conductonaut (12.5 W/mK vs 73.0 W/mK) is plain wrong, that there's no real difference between using LM and regular paste, and that you believe LM should never be used directly on a die. Let me know their response.







Liquid Metal temperature drop (forums.tomshardware.com):

"I have an i7 4790k that has been overclocked to 4.6 or 4.7 at 1.25v for over a year. Max temps were 75C (Noctua NH-D15) in every test other than Prime95 28.10, where I would hit 95C max. Well, today I finally took the heat spreader off the CPU and used Coollaboratory's Liquid Ultra and..."


----------



## cfranko

I was planning to get a Red Devil 6900 XT, but after reading the last 2-3 pages of this thread, as far as I understand it's not a good model, right?


----------



## lawson67

cfranko said:


> I was planning to get a Red Devil 6900 XT, but after reading the last 2-3 pages of this thread as far as I understood it is not a good model right?


It's a great card with 19 power stages, though currently limited by AMD on max frequency and core voltage, as they all are. You could negate some of that with MorePowerTool, but the cooler on the PowerColor with stock TIM is next to useless and results in temps around 100C without touching the power limit. If you crank the power limit up you'll be hitting about 110C, which is when the card starts throttling, and mine does that even with a 100mV undervolt. So I would say stay away from it unless you're happy to strip it down and go with LM, or stick a water block on it.


----------



## cfranko

lawson67 said:


> It's a great card with 19 power stages, though currently limited by AMD on max frequency and core voltage, as they all are. However, the cooler on the PowerColor with stock TIM is next to useless and results in temps around 100C without touching the power limit; if you crank the power limit up you'll be hitting about 110C, which is when the card starts throttling. So I would say stay away from it unless you're happy to strip it down and go with LM, or stick a water block on it.


I have a 6900 XT Phantom right now and I rarely see 100C on the hotspot at a 400 watt power limit in TimeSpy, so I'm happy with it. However, the card is very ugly, so I'm trying to find another one that looks good and has decent cooling. The Red Devil looks very nice, but I don't have the courage to repaste a GPU, so it would be staying on the stock paste, which is apparently terrible as you and others have said. I don't know what to do.


----------



## weleh

lawson67 said:


> "No idea what that paste is. Probably terrible." = learn how to use Google.
> "I seriously doubt you managed to shave 20C from going TIM to LTIM" = believe what you want, no one cares, especially not me. However, I DID manage to shave off 20C by using LM.
> 
> "Nothing on the Internet points in that direction except your own account." = see my first Google result below. Furthermore, maybe you should email Thermal Grizzly and let them know that the thermal conductivity difference they quote between their Kryonaut and their Conductonaut (12.5 W/mK vs 73.0 W/mK) is plain wrong, that there's no real difference between using LM and regular paste, and that you believe LM should never be used directly on a die. Let me know their response.
> 
> Liquid Metal temperature drop (forums.tomshardware.com):
> 
> "I have an i7 4790k that has been overclocked to 4.6 or 4.7 at 1.25v for over a year. Max temps were 75C (Noctua NH-D15) in every test other than Prime95 28.10, where I would hit 95C max. Well, today I finally took the heat spreader off the CPU and used Coollaboratory's Liquid Ultra and..."


For someone who doesn't care, you seem to care a lot, considering how many edits you've made to that post.

And you seem to be stuck on thermal conductivity numbers rather than real-world results. You're the perfect buyer for most companies.
My opinion remains: 5C at best? Sure. Higher on the hotspot/junction? Possible. A 20C decrease on an air cooler? Most likely not true.

But you can show us your testing methodology. I would be interested.
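For context, here's a rough 1-D conduction sketch of the temperature drop across the TIM layer alone. The die area, bond-line thickness, and power are assumed round numbers for illustration, not measurements from anyone's card:

```python
# Back-of-the-envelope 1-D conduction: temperature drop across the TIM layer.
# All three inputs are assumptions for illustration, not measurements.
DIE_AREA_M2 = 520e-6   # ~520 mm^2 Navi 21 die (assumed)
THICKNESS_M = 50e-6    # ~50 um bond-line thickness (assumed)
POWER_W = 300.0        # heat flowing through the die (assumed)

def tim_delta_t(k_w_per_mk: float) -> float:
    """dT = P * t / (k * A): drop across the interface layer alone, in C."""
    return POWER_W * THICKNESS_M / (k_w_per_mk * DIE_AREA_M2)

for name, k in [("Kryonaut, 12.5 W/mK", 12.5), ("Conductonaut, 73 W/mK", 73.0)]:
    print(f"{name}: {tim_delta_t(k):.1f}C across the layer")
```

With these assumed numbers the paste-to-LM gap across the interface itself is only about 2C, which is why bulk W/mK figures alone can't explain a 20C swing; a drop that big points to the original layer being far thicker or badly applied.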


----------



## jonRock1992

When I used liquid metal on my 1080 Ti, I shaved off around 8C versus the kryonaut I had on there. I was using a 280mm aio to cool it.


----------



## weleh

jonRock1992 said:


> When I used liquid metal on my 1080 Ti, I shaved off around 8C versus the kryonaut I had on there. I was using a 280mm aio to cool it.


Seems on par with what I've gathered:
5 to 10C depending on cooling and die size/thermal density.

20C just seems unrealistic from LM alone. As I said, half of it was probably terrible paste plus a bad mount, and the rest LM.


----------



## lawson67

weleh said:


> For someone who doesn't care you seem to care a lot considering how many edits you've made to the post.
> 
> And you seem to be stuck at thermal conductivity numbers rather than real world scenarios. You're the perfect buyer for most companies.
> My opinion remains, 5C at best? Sure, higher on hotspot junction? Possible. 20C decrease on an air cooler? Most likely not true.
> 
> But you can show us your testing methodology. I would be interested.


I like editing; I think of things later that I want to add, so I edit. Sorry if that upsets you so much, but you'll just have to deal with it, and it really seems to be bothering you more than me, with your constant posts saying that what I'm claiming must surely be lies or simply could not happen! But it really did: I knocked off 20C by adding LM, and as a result I'm as happy as Larry for having the balls to strip it down twice and go with LM. Whether you believe me or not is irrelevant to me. And what testing methodology do you want to see? I've laid out pretty much all I did in previous posts! So I guess you could say my test and methodology was this: I was hitting 100C stock, so I stripped it down and repasted with Antec Formula 7 Nano Diamond, which resulted in about a 2 degree drop. Then I stripped it down again using the exact same method and screwdriver, added LM, and bolted it back together using the exact same method and screwdriver again, and this time I dropped 20C. There you go, that's what happened. Wanna disbelieve? Then fill your boots, sunshine.


----------



## jonRock1992

It probably made a bigger difference because he wasn't using water cooling.


----------



## lestatdk

lawson67 said:


> I like editing; I think of things later that I want to add, so I edit. Sorry if that upsets you so much, but you'll just have to deal with it, and it really seems to be bothering you more than me, with your constant posts saying that what I'm claiming must surely be lies or simply could not happen! But it really did: I knocked off 20C by adding LM, and as a result I'm as happy as Larry for having the balls to strip it down twice and go with LM. Whether you believe me or not is irrelevant to me. And what testing methodology do you want to see? I've laid out pretty much all I did in previous posts! So I guess you could say my test and methodology was this: I was hitting 100C stock, so I stripped it down and repasted with Antec Formula 7 Nano Diamond, which resulted in about a 2 degree drop. Then I stripped it down again using the exact same method and screwdriver, added LM, and bolted it back together using the exact same method and screwdriver again, and this time I dropped 20C. There you go, that's what happened. Wanna disbelieve? Then fill your boots, sunshine.


Do you have a link to the guide you followed? I'm considering doing it as well, but not sure I have the balls lol


----------



## weleh

jonRock1992 said:


> It probably made a bigger difference because he wasn't using water cooling.


Exactly the contrary though.

The better the cooling the better the scenario.
If the cooler is bad (which air coolers are compared to most WC solutions) then the difference should be even smaller.


----------



## LtMatt

cfranko said:


> I was planning to get a Red Devil 6900 XT, but after reading the last 2-3 pages of this thread as far as I understood it is not a good model right?


No, it is a good model; however, a few users (not all) have had issues with hotspot temperature. I'm sure you can optimise it to run cool and quiet at above-stock performance.


----------



## lawson67

lestatdk said:


> Do you have a link to the guide you followed ? I'm considering doing it as well, but not sure I have the balls lol


I'm sure you'll have the balls to do it lol. What I did: first I watched Buildzoid strip down his PowerColor RX 6900 XT (link below). Then I took the card out, removed the 4 screws around the GPU and put them in a cup, then unbolted the rest of the card and put those screws in another cup. I gently pulled it apart, trying not to mess up the original thermal pads, as I believe changing them can really mess things up if you use the wrong thickness or they don't compress as much as the originals. I cleaned the old paste off the die and the cooler. Then I cut a piece of paper the same size as the die, looped a bit of tape under the back of the paper and stuck it on the die, looped another bit of tape on top of the paper, and gently lowered the card back onto the heatsink. The paper was now stuck to the heatsink, so I knew where the die meets it; I drew around the paper with a black marker and removed it. While doing this I had already coated around the die with two coats of clear nail polish and left it 20 minutes to dry. Then I applied the LM to the cooler inside the marker outline, coated the die, and gently lowered the card on top of the cooler, making sure the holes lined up. I put all the screws back in very loose, then concentrated on the 4 screws around the die, tightening them half a turn at a time in a criss-cross pattern until they were all tight, then tightened the rest of the screws, and that was it, job done. I used the same method both times I stripped it down; the only thing I did differently the second time was the paper-and-tape trick to find where the die meets the cooler, since with normal paste you don't need to coat the cooler.


----------



## jonRock1992

I like using liquid electrical tape when dealing with liquid metal. It dries quicker and is much easier to remove than nail polish.


----------



## J7SC

jonRock1992 said:


> I like using liquid electrical tape when dealing with liquid metal. It dries quicker and is much easier to remove than nail polish.


...liquid electrical tape always served me well, especially when I ran out of conformal coating. I used both (and nail polish as well) when I did w-cooled Quad-SLI/Crossfire back in the day. It is worth remembering that LM can dry out after a few years. Also, if you mount your GPU and its air-or-water GPU cooler vertically, additional precautions should be taken with LM as it can 'run out'. I built up a bit of a ridge of MX4 in the trough area around (but not on) the GPU die.


----------



## ptt1982

cfranko said:


> I was planning to get a Red Devil 6900 XT, but after reading the last 2-3 pages of this thread as far as I understood it is not a good model right?


Let me rant about my experience with the Red Devil 6900XT once more, it's therapy for me and a warning for you:

If you want to OC your card "to the max" you need to put your card under water and use MPT, and buying the reference model is the cheapest and easiest way to do it. If you want to keep it on air and overclock it slightly, buy Nitro+, TUF or Gaming X Trio. This is essentially your best plug n play solution. I would not recommend using MPT on air, because the temps can shoot really high on air, and you have to be prepared to put your fans to 80-100%. UV+OC is probably the way to go on air.

If you are in the markets for the XTXH cards avoid the Red Devil Ultimate, because it has mounting problems which lead to high temperatures, or you need to put it under water, like some people here have. The binning doesn't seem to be as good as other models either, but it is cheaper. Don't be fooled by the beefy looking cooler.

If you never want to overclock, which means you are leaving 10-20% performance on the table, depending on silicon lottery, you can buy the Red Devil. Be prepared to heavily undervolt it with software like MPT to keep it cool enough so it doesn't thermally throttle. That's the only way to use Red Devil on air without constantly thinking "how hot is it running now...".

Whatever you do, if you buy the Red Devil, do not open it yourself if it runs hot. Send it back. It is incredibly difficult to get the remount working, and the default thermal pads can break even when you are careful; they essentially cannot be replaced by aftermarket pads, because those are always too hard for the delicate mounting.

The heatsink/cooler mounting pressure is messed up, so my theory is that the longer you use the Red Devil, the higher the temps will creep as the mount loosens up. This is based on around 30 attempts to repaste, using 4 different aftermarket thermal pads. Sometimes I would get it right and see maybe a 10-15C reduction compared to stock temps, but the temps crept back within three weeks as the mounting pressure slowly degraded. Under the same test, the card's junction temp was at 90C in TimeSpy 2-3 days after a successful repaste, and three weeks later it reached 110C and thermal throttling on the same drivers.

I spent a lot of time inspecting the card and testing different ways of mounting it, and concluded that the thermal pads cannot be replaced, and that even if you repaste once successfully, there's a catastrophe waiting later. I did not use washers; they might help, but then again, getting the mounting pressure right is incredibly difficult in the first place, especially if you want to keep your VRAM temps in check. My conclusion is to avoid the card altogether, or know that you need to put it under water. The latter option is expensive, and Red Devils do not have the best bins out there either, and seem very sensitive to drivers in terms of core clocks (a driver change can boost or reduce your core clock capability by 50MHz).

Personally, didn't want to sell my war-torn Red Devil cheap, so I had to put it under water (ok I admit, I loved the idea of a new project), and now it finally works as it should when overclocked and using MPT. However it was a mess of a project, and caused a lot of stress and it sucked my time and money.

The best overall solution (no surprises here) for 6900xt seems to be highend AIO models or buying the cheapest card and putting it under cheap custom loop to keep the temps low and overclock high. Remember, 6900xt clocks do not boost based on GPU temps like Nvidia's cards, so there's no tangible benefit in gaming if your card is running 45C/70C vs 60C/85C (gpu/junction).

I won't compromise personally anymore when it comes to high end GPUs and will always use a waterblock, and buy a binned model. I'm willing to pay double the price to get rid of all the hassle that comes with cheap cards. It's a hobby, it doesn't need to make financial sense. I say, splurge on your GPU and never look back.


----------



## lawson67

ptt1982 said:


> the longer you use Red Devil the higher the temps will creep due to the mount loosening up. This is based on around 30 attempts to repaste, and using 4 different aftermarket thermal pads. Sometimes I would get it right and maybe a reduction of 10-15C compared to stock temps, but the temps crept back in three weeks due to losing the mounting pressure slowly.


I can't see how the card can loosen up over time; it has 8 spring-loaded screws holding it together. Even if you broke one spring, the others would still keep pressure on the card, and I can't believe you broke all 8 springs, which is the only scenario I can think of that could let it come loose over time. I personally believe the mistake you made was using aftermarket pads from the get-go that were not the correct thickness or did not compress as well as the original pads, which is why I did not want to change the original thermal pads (been there, done that). Time will tell and I'll report back in a few weeks, but I am absolutely not concerned, or even a little worried, that it will loosen up over time. It's just not going to happen, with the weight of the cooler hanging below the PCB adding more pressure on the screws, along with the 8 springs those screws use, and they're not weak springs!


----------



## LtMatt

The best air cooled 6900 XT just got better, the XFX 6900 XT Merc 319 XTXH. 
XFX Radeon RX 6900 XT Merc319 BLACK Limited Ed Graphics Card | Ebuyer.com

Not sure I can resist trying one...


----------



## cfranko

ptt1982 said:


> Let me rant about my experience with the Red Devil 6900XT once more, it's therapy for me and a warning for you:
> 
> If you want to OC your card "to the max" you need to put your card under water and use MPT, and buying the reference model is the cheapest and easiest way to do it. If you want to keep it on air and overclock it slightly, buy Nitro+, TUF or Gaming X Trio. This is essentially your best plug n play solution. I would not recommend using MPT on air, because the temps can shoot really high on air, and you have to be prepared to put your fans to 80-100%. UV+OC is probably the way to go on air.
> 
> If you are in the markets for the XTXH cards avoid the Red Devil Ultimate, because it has mounting problems which lead to high temperatures, or you need to put it under water, like some people here have. The binning doesn't seem to be as good as other models either, but it is cheaper. Don't be fooled by the beefy looking cooler.
> 
> If you never want to overclock, which means you are leaving 10-20% performance on the table, depending on silicon lottery, you can buy the Red Devil. Be prepared to heavily undervolt it with software like MPT to keep it cool enough so it doesn't thermally throttle. That's the only way to use Red Devil on air without constantly thinking "how hot is it running now...".
> 
> Whatever you do if you buy the Red Devil, do not open it yourself if it runs hot. Send it back. It is incredibly difficult to get the remount working, and the default thermal pads can break even when you are careful and essentially cannot be replaced by aftermarket pads because they are always too hard for the delicate mounting. The heatsink/cooler mounting pressure is messed up, so my theory is that the longer you use Red Devil the higher the temps will creep due to the mount loosening up. This is based on around 30 attempts to repaste, and using 4 different aftermarket thermal pads. Sometimes I would get it right and maybe a reduction of 10-15C compared to stock temps, but the temps crept back in three weeks due to losing the mounting pressure slowly. Under the same test the card junction temp was at 90C in TimeSpy 2-3 days after successful repasting, and three weeks later it reached 110C and thermal throttling using same drivers. I spent a lot of time inspecting the card, testing different ways of mounting it, and concluded that the thermal pads cannot be replaced, and that even if you repaste once successfully, there's a catastrophe waiting later. I did not use washers, they might help, but then again getting the mounting pressure right is incredibly difficult in the first place, especially if you want to keep your VRAM temps in check. My conclusion is to avoid the card all together or know that you need to put it under water. The latter option is expensive, and Red Devil's do not have the best bins out there either, and seem to be very sensitive to drivers in terms of core clocks (driver change can boost or reduce your core clock capability by 50mhz.)
> 
> Personally, didn't want to sell my war-torn Red Devil cheap, so I had to put it under water (ok I admit, I loved the idea of a new project), and now it finally works as it should when overclocked and using MPT. However it was a mess of a project, and caused a lot of stress and it sucked my time and money.
> 
> The best overall solution (no surprises here) for 6900xt seems to be highend AIO models or buying the cheapest card and putting it under cheap custom loop to keep the temps low and overclock high. Remember, 6900xt clocks do not boost based on GPU temps like Nvidia's cards, so there's no tangible benefit in gaming if your card is running 45C/70C vs 60C/85C (gpu/junction).
> 
> I won't compromise personally anymore when it comes to high end GPUs and will always use a waterblock, and buy a binned model. I'm willing to pay double the price to get rid of all the hassle that comes with cheap cards. It's a hobby, it doesn't need to make financial sense. I say, splurge on your GPU and never look back.


I actually have a 6900 XT Phantom right now with my MPT power limit set to 400 watts, and I generally run the card at 80% fan speed. At 400 watts and 100% fan speed in TimeSpy, the hotspot generally sits at 85-95C and goes up to 103C in extreme scenes, so I guess my GPU has good cooling and I'm happy with it. But the card is extremely ugly, which bothers me. I thought about going with a custom loop, but that would be very difficult to do where I live, so I'm going to stick with air. After what you said about the Red Devil, I'll try to get a Merc 319 or a Nitro+; if I can't get either of them, I'll stick with my current card. Thanks for the heads up.


----------



## J7SC

ptt1982 said:


> Let me rant about my experience with the Red Devil 6900XT once more, it's therapy for me and a warning for you:
> 
> If you want to OC your card "to the max" you need to put your card under water and use MPT, and buying the reference model is the cheapest and easiest way to do it. I (...)
> The best overall solution (no surprises here) for 6900xt seems to be highend AIO models or buying the cheapest card and putting it under cheap custom loop to keep the temps low and overclock high. Remember, *6900xt clocks do not boost based on GPU temps like Nvidia's cards, so there's no tangible benefit* in gaming if your card is running 45C/70C vs 60C/85C (gpu/junction).
> 
> I won't compromise personally anymore when it comes to high end GPUs and will always use a waterblock, and buy a binned model. I'm willing to pay double the price to get rid of all the hassle that comes with cheap cards. It's a hobby, it doesn't need to make financial sense. I say, splurge on your GPU and never look back.


I seem to make this point a lot in this thread... temps do matter for clocks with Big Navi, also per GN's YouTube testing of the 6800 XT (close enough). Most modern CPUs and GPUs have boost algorithms with temperature as a major input. Also, boost and throttling are not the same thing in this context. In any event, there's also a point to be made about the longevity of well-cooled electronic components...





 



LtMatt said:


> The best air cooled 6900 XT just got better, the XFX 6900 XT Merc 319 XTXH.
> XFX Radeon RX 6900 XT Merc319 BLACK Limited Ed Graphics Card | Ebuyer.com
> 
> Not sure I can resist trying one...


...wait till Buildzoid sees this - he probably can't resist trying this one either


----------



## thomasck

Performance uplift with 21.6.2 in TimeSpy. Haven't tried FF yet, but I'll update the post once it's done. Stock card, fan at 75%.










Sent from my Pixel 2 XL using Tapatalk


----------



## kairi_zeroblade

ptt1982 said:


> If you never want to overclock, which means you are leaving 10-20% performance on the table, depending on silicon lottery, you can buy the Red Devil. Be prepared to heavily undervolt it with software like MPT to keep it cool enough so it doesn't thermally throttle. That's the only way to use Red Devil on air without constantly thinking "how hot is it running now...".
> 
> Whatever you do if you buy the Red Devil, do not open it yourself if it runs hot. Send it back. It is incredibly difficult to get the remount working, and the default thermal pads can break even when you are careful and essentially cannot be replaced by aftermarket pads because they are always too hard for the delicate mounting. The heatsink/cooler mounting pressure is messed up, so my theory is that the longer you use Red Devil the higher the temps will creep due to the mount loosening up. This is based on around 30 attempts to repaste, and using 4 different aftermarket thermal pads. Sometimes I would get it right and maybe a reduction of 10-15C compared to stock temps, but the temps crept back in three weeks due to losing the mounting pressure slowly. Under the same test the card junction temp was at 90C in TimeSpy 2-3 days after successful repasting, and three weeks later it reached 110C and thermal throttling using same drivers. I spent a lot of time inspecting the card, testing different ways of mounting it, and concluded that the thermal pads cannot be replaced, and that even if you repaste once successfully, there's a catastrophe waiting later. I did not use washers, they might help, but then again getting the mounting pressure right is incredibly difficult in the first place, especially if you want to keep your VRAM temps in check. My conclusion is to avoid the card all together or know that you need to put it under water. The latter option is expensive, and Red Devil's do not have the best bins out there either, and seem to be very sensitive to drivers in terms of core clocks (driver change can boost or reduce your core clock capability by 50mhz.)


Is this for the 6900 XT only, or the 6800 XT as well? I've been on the Red Devil for 7 months now and have never had thermal issues (GPU core temp or hotspot).


----------



## lawson67

kairi_zeroblade said:


> is this for the 6900XT only or the 6800XT as well..I have been in the Red Devil for 7 months now and I never had thermal issues..(GPU Core temp and Hotspot)


Both; it's the same heatsink on both cards, so you got lucky. Myself and 2 other guys on here had high temps out of the box. I rectified my temperature issues using LM, and just for the record, I stripped my card down twice and found it one of the easiest graphics cards I've ever had to remount. It's literally idiot-proof: all the screw holes slot through the PCB onto stand-offs from the heatsink, as you can see in the picture below, so if the PCB isn't lined up correctly on the cooler it's going to be very obvious, with the card tipping left to right on top of the heatsink where it's balancing on one of the stand-offs. The only way you can mess up the mounting of this card is to use the incorrect thermal pads, strip a few screws, or break the screw springs!

Here's a tip for anyone who wants to strip this card down: make sure the screws on the cold plate are tight to the heatsink. When I first took my card apart they were loose, and I truly believed I had fixed the temperature issue, but once it was reinstalled the temps had only dropped by 2C (though the loose cold plate certainly would not have helped matters). It was only once I took it apart again and added LM that it dropped 20C on the hotspot temp, and then of course I was happy. I shouldn't need to touch the card again for a few years, as the LM won't affect the heatsink since it's nickel plated.


----------



## Bart

Indeed, my stock Asus TUF 6900XT was awful even in stock form. Once I started using MPT to unlock the wattage, my hot spot was hitting 115C even with fans at full blast, AND all 10 case fans in a Dynamic 011 XL on full blast. Thing had plenty of air, but temps were still nuts. Thankfully Alphacool released a block for it, but the stock mount was bloody awful.


----------



## jonRock1992

lawson67 said:


> Both; it's the same heatsink on both cards, so you got lucky. Myself and 2 other guys on here had high temps out of the box. I rectified my temperature issues using LM, and just for the record, I stripped my card down twice and found it one of the easiest graphics cards I've ever had to remount. It's literally idiot-proof: all the screw holes slot through the PCB onto stand-offs from the heatsink, as you can see in the picture below, so if the PCB isn't lined up correctly on the cooler it's going to be very obvious, with the card tipping left to right on top of the heatsink where it's balancing on one of the stand-offs. The only way you can mess up the mounting of this card is to use the incorrect thermal pads, strip a few screws, or break the screw springs!
> 
> Here's a tip for anyone who wants to strip this card down: make sure the screws on the cold plate are tight to the heatsink. When I first took my card apart they were loose, and I truly believed I had fixed the temperature issue, but once it was reinstalled the temps had only dropped by 2C (though the loose cold plate certainly would not have helped matters). It was only once I took it apart again and added LM that it dropped 20C on the hotspot temp, and then of course I was happy. I shouldn't need to touch the card again for a few years, as the LM won't affect the heatsink since it's nickel plated.
> 
> View attachment 2516232


I'm pretty sure my heatsink was missing that really thick pad in the top left of your pic. I remember having two thicker pads, but not three lol. I have a waterblock installed now, though I haven't been able to test it yet because I haven't received my replacement pump/res.


----------



## lawson67

jonRock1992 said:


> I'm pretty sure my heatsink was missing that really thick pad in the top left of your pic. I remember having two thicker pads, but not three lol. I have a waterblock installed now though. I still haven't been able to test it yet though because I haven't received my replacement pump/res yet.


It wouldn't surprise me that you were missing that pad, after seeing my cold plate hanging off. Good luck with your waterblock; that will let you have some fun. I've been able to play a bit myself now that I have my temps under control. Not a bad score for an RX 6800 XT, and only 88c on the hotspot on that run too, very pleased indeed!


----------



## ptt1982

jonRock1992 said:


> I'm pretty sure my heatsink was missing that really thick pad in the top left of your pic. I remember having two thicker pads, but not three lol. I have a waterblock installed now, though I haven't been able to test it yet because I haven't received my replacement pump/res.


Yeah, my heatsink was missing the two upper-right thick thermal pads as well. There's nothing to cool in that spot on the Red Devil 6900 XT PCB; the Red Devil 6800 XT, however, does have some chips there that need cooling.


----------



## ptt1982

J7SC said:


> I seem to make this point a lot in this thread...temps matter re. clocks with BigNavi, also per YT from GN's testing of the 6800XT (...close enough). Most modern CPUs and GPUs have algorithms with temps as a major input. Also, boost and throttling are not the same thing in this context. In any event, there's also a point to be made about longevity re. well-cooled electronic components...


I respect your opinion, and can transparently say that I am not an expert of any kind and have no knowledge of the inner workings of AMD's 6000-series boost technology. From what I have gathered online and from watching others discuss the relationship between clock boost behavior and temperature, the 6000 series does not boost nearly as aggressively at lower temps as Nvidia's 2000 and 3000 series GPUs do. Based on my tests on my Red Devil card (an isolated case that cannot lead to generalization), the effect is so insignificant that it doesn't really matter as long as your temps stay under, say, 70C GPU / 95C junction in worst-case scenarios (these arbitrary numbers represent spikes here and there after a long stress test). This led me to conclude that as long as you put your card under water, you should always see temps in extreme conditions under those values, while the clocks stay pretty much the same as they would at, say, 45C GPU / 70C junction.

While I agree that throttling is a completely different (opposite, one could say) topic, my understanding is that 6900 XT cards have a specific threshold somewhere around 70-75C (depending on the model perhaps?) where throttling starts, which brings clocks down significantly to stop overheating. I noticed on the Red Devil that this behavior is keyed to the GPU temp rather than the junction temp, which is why I saw junction temps as high as 118C without throttling while the GPU temp stayed under the threshold; but once the GPU temp hit 74C it stopped rising, and as a result the junction stayed at a constant 110C or lower. I wasn't brave enough to test this deliberately, nor was that my aim in the first place, but I ended up seeing these numbers as a result of my multiple failed heatsink remounts.
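If it helps, the gating I think I observed boils down to something like this toy Python sketch (purely my own interpretation of my Red Devil's behavior, with the 74C edge threshold as an assumed trigger; AMD's real boost/throttle algorithm is certainly far more complex):

```python
# Toy model of the throttle gate I think I saw: the card appears to key
# off GPU edge temperature (~74C on my Red Devil), not junction temp.
# Both thresholds are my observed/assumed values, not AMD documentation.
EDGE_THROTTLE_C = 74

def throttles(edge_c: float, junction_c: float) -> bool:
    """True if this toy model predicts clock throttling."""
    # Junction alone (even 118C) never triggered it while edge stayed low.
    return edge_c >= EDGE_THROTTLE_C

print(throttles(68, 118))  # False: hot junction, edge under threshold
print(throttles(74, 110))  # True: edge reached the threshold
```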


----------



## ptt1982

Edit: Deleted


----------



## dureiken

Hi, I'm the proud new owner of an AMD 6900 XT, coming from a 2080 Ti under water. Could someone tell me if it's worth putting this card under water? I didn't find any before/after benchmarks.

Thanks


----------



## weleh

dureiken said:


> Hi, I'm the proud new owner of an AMD 6900 XT, coming from a 2080 Ti under water. Could someone tell me if it's worth putting this card under water? I didn't find any before/after benchmarks.
> 
> Thanks


In terms of performance? Hardly.
These cards show almost no air-to-water scaling at all.

In terms of cooling performance and noise? Sure.


----------



## J7SC

dureiken said:


> Hi, I'm the proud new owner of an AMD 6900 XT, coming from a 2080 Ti under water. Could someone tell me if it's worth putting this card under water? *I didn't find any before/after benchmarks.*
> 
> Thanks


Apart from GN's observations in this timestamped vid on temps and MHz, HWBot has benchmark submissions broken down by cooling type here:










The 6900 XT clearly gains going from air to water, but not as much as the 3090. FYI, I run water-cooled 2080 Tis as well as an RTX 3090 (air, now water) and a 6900 XT (air, now water) on different productivity-and-play setups, and I find water-cooling more than worthwhile...

...in the case of the 6900 XT, the extra 60 MHz or so are useful, but the real payoff has to do with noise: my 3x8-pin card has its three fans spinning at 3800 rpm at full tilt. When using MPT to push the PL past the stock 370W, the rise in hotspot temps is also much more dramatic in relative terms than the general GPU temp. So water makes sense given not only the data above but 'noise sanity', especially if you plan to push MPT watts... which really is a measure of heat energy, after all.


----------



## dureiken

J7SC said:


> So water makes sense given not only the data above but 'noise sanity', especially if you plan to push MPT Watts...which really is a measure of heat energy, after all


It's basically the same thing, isn't it? If I use MPT to go over the default watts and the card is unusable because of noise, water cooling will let me reach that performance at a decent noise level, won't it?

Thanks guys


----------



## weleh

No, putting your card under water won't magically make it run higher clocks that were otherwise unstable. Especially not an effective 55 MHz.

If you want a good card, either buy an XTXH, a binned AIB card with a high boost clock out of the box, or pray your reference card is decent.

Water is worth it if you want to bench and push the PL above the baked-in limits. For everyday usage I don't see how any card could have thermal issues unless it's a complete failure of a card.


----------



## J7SC

dureiken said:


> Its finally the same isn'it ? If i Use MPT to go over default Watts and the card is unusable because of noise, WC will allow to reach such performances with decent noise, isn'it ?
> 
> Thanks guys


...here is a quick 'semi-' comparison of Superposition 4K runs: left is stock, right is MPT+ but still on air. I'm in the Pacific NW / BC and we had a massive heat dome haunting us, so ambient temps are not comparable. Still, throwing MPT into the mix (about 25% more power), the relative increase in hotspot temps just sealed the water-cooling deal. In fact, with the unusual heat we have been having (including several heat-related deaths here), I'm glad I water-cooled everything now...


----------



## dureiken

I tried power with 100% fan :










almost the same results as yours.

What is the max TDC limit for the stock AMD 6900 XT in MPT?

thanks


----------



## J7SC

dureiken said:


> I tried power with 100% fan :
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> almost the same results as yours.
> 
> What is the max TDC limit for the stock AMD 6900 XT in MPT?
> 
> thanks


...not sure if MPT has a built-in limit, but you don't want to blow your VRM, GPU etc... it probably also depends on your PCB and power stages. For now I stay below 380W (with PL all-in at +15%) on my card, which has 17 stages for GPU, VRAM and SoC and is 3x8-pin (*technically* a 525W limit incl. PCIe per official spec, though that can be exceeded). The setup with the 6900 XT is 2/3rds work and 1/3rd play, while the one with the 3090 is 2/3rds play and 1/3rd work; I tend to push the latter a bit harder, like a 520W vbios (also measured).


----------



## jonRock1992

Does anyone know if running your GPU at a cooler temp, but at the same PL, will help prevent OCP shutdowns? Say, going from a 110C hotspot down to a 70C hotspot?


----------



## LtMatt

Curious to know: for XTXH owners, what is the stock maximum core frequency listed in Radeon Software > GPU Tuning at default settings?


----------



## smokedawg

These are the settings for a Powercolor 6900XT Liquid Devil Ultimate (Unleashed BIOS). The alternative BIOS has the same settings, but the Power Limit is 303W instead of 332W.


----------



## thomasck

Do you guys think a 3900X could bottleneck the 6900 XT at 1440p? I am starting to suspect that I am being bottlenecked by the CPU. 

Sent from my Pixel 2 XL using Tapatalk


----------



## ZealotKi11er

thomasck said:


> Do you guys think a 3900X could bottleneck the 6900 XT at 1440p? I am starting to suspect that I am being bottlenecked by the CPU.
> 
> Sent from my Pixel 2 XL using Tapatalk


It's possible if it's a DX11 game.


----------



## cfranko

My 6900 XT Phantom used to be stable at a 2500 minimum / 2630 maximum frequency in Time Spy on driver 21.4.1. I updated to 21.6.2, and with the same Wattman settings I used on 21.4.1, Time Spy now crashes. Is this a driver thing, or did my GPU somehow degrade?


----------



## LtMatt

cfranko said:


> My 6900 XT Phantom used to be stable at 2500 minimum frequency and 2630 maximum frequency while doing Time Spy on driver 21.4.1. I updated to 21.6.2 now and with the same wattman settings as I used in 21.4.1, Time Spy crashes now. Is this because of a driver thing or did my gpu degrade somehow?


Performance in Time Spy has increased in 21.6.1 and later, so even though you may have to lower clocks a little, performance should be higher, in graphics score at least.


----------



## thomasck

ZealotKi11er said:


> It's possible if it's a DX11 game.


Yes, I believe so. It's not just DX11 but also DX12: Warzone is DX12, and if I don't increase the render resolution I don't get GPU utilization near 99%.


----------



## jfrob75

LtMatt said:


> Curious to know what the stock maximum core frequency is listed as for XTXH owners as displayed in Radeon Software > GPU Tuning at default settings?


Here are my default settings.


----------



## LtMatt

jfrob75 said:


> Here are my default settings.
> View attachment 2516578


Cheers. So far I have seen clock speeds of 2549, 2569, 2579 and 2604MHz reported by various users here and on OcUK. I am curious to see what the range is; I have yet to see higher than 2604MHz.


----------



## kratosatlante

cfranko said:


> My 6900 XT Phantom used to be stable at 2500 minimum frequency and 2630 maximum frequency while doing Time Spy on driver 21.4.1. I updated to 21.6.2 now and with the same wattman settings as I used in 21.4.1, Time Spy crashes now. Is this because of a driver thing or did my gpu degrade somehow?


It's a driver thing. With some versions my Phantom gains or loses 25-50MHz on the core, and VRAM frequency varies too. 21.6.2 gets a 2-5% improvement over 21.5.1: Superposition 4K optimized gained 400+ points and 1080p extreme 100+, while SOTTR went from 93 to 101 fps at 4K and from 202 to 209 at 1080p.


http://imgur.com/a/5J6e1xX


----------



## EastCoast

Is it really worth it to get the 6900xt over the 6800xt for gaming at 1080/1440p?
COD
BF2042
etc

Side note:
I noticed that MW is more optimized for Radeon than Black Ops Cold War. 
I can easily break 200 FPS in MW, but I can barely break 170 FPS in Cold War. That's a pretty big drop in performance with no real way to fix it.


----------



## lestatdk

EastCoast said:


> Is it really worth it to get the 6900xt over the 6800xt for gaming at 1080/1440p?
> COD
> BF2042
> etc
> 
> Side note:
> I noticed that MW is more optimized for Radeon then Blackops, IE: Cold War.
> I can easily break 200 FPS in MW. But I can barely break 170 FPS in Cold War. Pretty big drop in performance with no real way how to fix it .


No, it's not worth it. I have a 6900 XT, but I was actually trying to buy a 6800 XT at the time; there just weren't any. The 6800 XT is the best value for money.


----------



## amigafan2003

EastCoast said:


> Is it really worth it to get the 6900xt over the 6800xt for gaming at 1080/1440p?


No.

I do have a 6900 XT (GB Extreme Waterforce), but that's only because I couldn't find a 6800 XT + waterblock combo anywhere that came in £300 or more under this 6900 XT.


----------



## J7SC

lestatdk said:


> No, it's not worth it. I have a 6900 but was actually trying to buy a 6800xt at the time. But there just wasn't any. 6800xt is best value for money





amigafan2003 said:


> No.
> 
> I do have a 6900 xt (GB Extreme Waterforce) but that's only because I couldn't find a 6800 xt + waterblock anywhere that was <300gbp than this 6900 xt.


...same here. I was looking for a 6800 XT, but they only had one Big Navi at all, a custom-PCB 6900 XT... the price was good, though ('old' MSRP).


----------



## LtMatt

EDIT - Update

I spoke with XFX Support and they confirmed that the hotspot temperature is indeed supposed to be at 95c for the Limited Edition.


----------



## ZealotKi11er

LtMatt said:


> Warning for anyone considering the XFX 6900 XT Merc Limited Edition (XTXH). ❌


I am sure you can increase the voltage to 1.2v with MPT on the BIOS that has 1.175v.


----------



## Bart

ZealotKi11er said:


> I am sure you can increase the volt to 1.2v with MPT for the bios with 1.175v


Can you now? Last I heard you could not adjust the voltage with MPT, did something change?


----------



## jonRock1992

I finally got my custom loop set up for my 6900 XT Red Devil Ultimate! Just did a run in Timespy with 400W PL, 2600MHz min, 2750MHz max, and 2112MHz Fast Timings mem. I'm using the default voltage. My temps are like 30C cooler with this waterblock. Never gonna hit that 110C anymore lol.
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## ptt1982

jonRock1992 said:


> I finally got my custom loop set up for my 6900 XT Red Devil Ultimate! Just did a run in Timespy with 400W PL, 2600MHz min, 2750MHz max, and 2112MHz Fast Timings mem. I'm using the default voltage. My temps are like 30C cooler with this waterblock. Never gonna hit that 110C anymore lol.
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


Fantastic! What type of radiator setup do you have? How high does your junction temp go in Time Spy GT2 (the highest peak, not the average)?

I'm running quite similar settings to yours, with 50MHz less on the max clock. I presume you are quite happy with the setup now?


----------



## Bart

jonRock1992: I'm daily driving the exact same settings, but with 375W set in MPT, although I'm still using the 15% power limit. You might be able to dial that back a little.


----------



## jonRock1992

ptt1982 said:


> Fantastic! What type of radiator setup do you have? How high do your Timespy GT2 goes in junction temps (the highest peak, not average junction) ?
> 
> Running quite similar settings as you with 50mhz less mhz on max clock. I presume you are quite happy with the set up now?


I saw a peak junction temp of 84C on a Time Spy GT2 loop. I'm assuming it's high because the room I'm in is 76F at the moment and my fans are only running at a constant 1300RPM. I'm just using a single 360mm rad that is 28mm thick, pulling air through it with three Noctua A12x25's. It's virtually silent. I have the power limit and TDC limit set to 348 in MPT and I use +15% in Wattman.


----------



## Thanh Nguyen

If I watercool the card, which model should I get? Is there any advantage over air, or does it just run cooler?


----------



## jonRock1992

My Timespy GPU score went up by like 700 points just by putting a waterblock on. That's without clock adjustment. The better cooling allowed my card to stay at higher clocks when unlocking the power limit.


----------



## LtMatt

ZealotKi11er said:


> I am sure you can increase the volt to 1.2v with MPT for the bios with 1.175v


Interesting, I could give this a try; nothing to lose.

I received a reply from XFX support telling me this is normal, but I don't think the L1 agent really understands what the problem is.


----------



## LtMatt

ZealotKi11er said:


> I am sure you can increase the volt to 1.2v with MPT for the bios with 1.175v





LtMatt said:


> Interesting, I could give this a try nothing to lose.
> 
> I received a reply from XFX support telling me this is normal, but I don't think the L1 agent really understands what the problem is.


This actually worked, good idea. I can now get full performance, and the throttle temp is back at 110c with 1.2v.


----------



## weleh

No, if you change vcore to 1.2 in MPT your card will be stuck running at 500MHz...


----------



## lawson67

I was playing a bit last night with my Red Devil RX 6800 XT. So glad I added LM to bring temps down. I set the clock to 2550-2650MHz and was nearly hitting a 21000 graphics score in TS with the hotspot only at 90c, so there's plenty more room to push a bit more. Maybe it's time to play with MPT power limits.


----------



## jonRock1992

lawson67 said:


> I was playing a bit last night with my Red Devil RX 6800 XT. So glad I added LM to bring temps down. I set the clock to 2550-2650MHz and was nearly hitting a 21000 graphics score in TS with the hotspot only at 90c, so there's plenty more room to push a bit more. Maybe it's time to play with MPT power limits.
> View attachment 2516728


I was debating whether or not to use liquid metal while installing my waterblock. I have an unopened tube of some good liquid metal, but I opted for Kryonaut instead. I don't think using liquid metal with my waterblock would net any performance gains with my GPU; it would probably just run 5C cooler on the hotspot.


----------



## lawson67

jonRock1992 said:


> I was debating whether or not I should use liquid metal while I was installing my waterblock. I have an unopened tube of some good liquid metal. I opted to use Kryonaut instead. I don't think using liquid metal with my waterblock would net any performance gains with my GPU. It would probably just run 5C cooler on the hotspot.


It would be interesting, but it also depends what your waterblock is made of. If it's nickel-plated copper you should be fine. If it's bare copper, which I doubt, there is more electrochemical potential for galvanic corrosion. If it's made of aluminium then it's a no-no, as aluminium is highly soluble in gallium and will form an alloy with the gallium in the LM.


----------



## cfranko

I just repasted my 6900 XT, and now my edge temps are higher but my hotspot temps are lower compared to the stock thermal paste. Is this good or bad?


----------



## LtMatt

cfranko said:


> I just repasted my 6900 xt and now my edge temps are higher however my hotspot temps are lower compared to stock thermal paste. Is this something good or bad?


That's good! Which 6900 XT?


----------



## jonRock1992

lawson67 said:


> Would be interesting but it also depends what your waterblock is made out of If its nickel plated copper you should be fine, if its just copper which i doubt it is there is more electromechanical potential for galvanic corrosion, if its made out of aluminium then its a no no as aluminium is highly soluble in gallium and will form an alloy with the gallium in the LM


It's nickel-plated copper. I've noticed that this GPU's frequency doesn't scale with lower temps, so it would give me absolutely nothing in terms of performance if I'm not temp throttling. I was throttling with the stock air cooler when overclocking, so switching to a full-cover waterblock greatly benefited me. My GPU is performing how it should now, and it's silent! The Red Devil heatsink was so loud.

It was quite the opposite with my 1080 Ti: the lower the temp, the higher I could push my clocks. Using liquid metal on my 1080 Ti netted me an additional bin on the max GPU frequency. I think the only way to gain any significant frequency boost with the 6900 XT is extreme cooling such as LN2.


----------



## cfranko

LtMatt said:


> That's good! Which 6900 XT?


Asrock Phantom Gaming


----------



## lawson67

So I started playing with the power limits in MPT. I set 330 watts on the power limit and 330 TDC, since Igor'sLAB says: "Please do not chase more than 330 watts ASIC through an RX 6800 XT. Exceptions to this are cards with significantly better phase placement, such as the Devil or the Strix." My card is the Red Devil, so he believes I could push more? Anyhow, I set 2550-2650MHz in Wattman and did a TS run, which resulted in a max draw of 384 watts, and with the fans set at 75% I only hit 96c on the hotspot, so I am happy about temps. However, I believe I am 9 watts over the 375W limit that two 8-pin PCIe cables can supply (150W each) plus the 75W the PCIe slot can give, which is a shame. Not too sure if 9 watts over the top matters much; maybe I'll need to back down the max power limit in MPT? (btw - FOOTBALL'S COMING HOME!!)
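For anyone who wants to sanity-check the cable maths above, here's a quick Python sketch using the official spec figures (150W per 8-pin cable, 75W from the slot; as others note later in the thread, real cables can tolerate more than spec):

```python
# Official spec power budget for a card's power inputs:
# each 8-pin PCIe cable is rated 150 W, the PCIe x16 slot supplies 75 W.
PCIE_8PIN_W = 150
PCIE_SLOT_W = 75

def spec_budget(num_8pin_cables: int) -> int:
    """Total board power allowed by spec for a card with N 8-pin inputs."""
    return num_8pin_cables * PCIE_8PIN_W + PCIE_SLOT_W

measured_draw = 384                    # max draw seen in my Time Spy run
budget = spec_budget(2)                # 2x8-pin card -> 375 W by spec
print(budget, measured_draw - budget)  # 375 9
```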


----------



## ptt1982

jonRock1992 said:


> I saw a peak junction temp at 84C on a Timespy GT2 loop. I'm assuming it's high because the room I'm in is 76F at the moment, and my fans are only running at a constant 1300RPM. I'm just using a single 360mm rad that is 28mm thick. I'm pulling air through the rad with three Noctua A12x25's. It's virtually silent. I have power limit and TDC limit set to 348 in MPT and I use +15% in wattman.


Those temps are quite similar to mine. I think the highest peak I got was around 76C when gaming, but I use Vsync because I'm gaming on a 4K60 TV. I've got 240x30 and 360x30 rads, but also a 5600X in the loop. Our apartment is quite hot as well, so I think I can shave 10C off the temps during winter. 

Good stuff!


----------



## jonRock1992

Sweet! In gaming I barely ever see junction break 70C, so that's nice. I've been playing the new Doom Eternal ray tracing update on Ultra Nightmare with ray tracing at 1440p at 100+ FPS, and the GPU is as cool as can be; junction is in the 60s most of the time. And that's with the specs I posted recently for that Time Spy run. Funnily enough, that GPU score was 9th in the world for my setup (5800X + 6900 XT, single GPU). I think I can go higher though, because I didn't try to max out my clocks.


----------



## chispy

Hello fellow owners, I just want to share my experience with RX 6900 XT graphics cards. (My whole purpose is to find the highest-overclocking card for my hobby of extreme overclocking at HWBot.)

I started with a reference 6900 XT; I did not like it because I could not clock high on it, so I sold it. Then I bought an ASRock Phantom Gaming 6900 XT, and the card was great, no complaints, and overclocked like a champ, but I sold that one because I wanted an XTX chip. I got a Red Devil Ultimate 6900 XTX, and that card only shines under a full-cover water block; the stock cooler is not that good. Finally I grabbed an ASRock 6900 XTX OC Formula, and it is a different beast in the overclocking department. In my testing it is the top overclocking card of the RX 6900 XT series, as it is made for one thing only: _very high overclocks_. Tested on a water chiller, it had no problem hitting 3000MHz+ with 2160~2180MHz memory at fast timings. I will test it on liquid nitrogen very soon.

I have found that these cards shine and run best under water cooling, as the clocks are always very stable and you get the headroom needed for high MPT PL settings with no problem. Even on the stock BIOS, going full water cooling makes the cards clock and boost higher on the core, and because the memory runs cooler on H2O it will overclock a bit higher too. Air cooling all depends on the heatsink/fan that came with your card; some cards have very bad stock coolers while others have great ones. The best thing you can do to get the most out of your 6900 XT is to install a full-cover water block. Trust me: I have tested 4 different 6900 XT cards on their stock coolers and then on water cooling, and the same finding applied to all 4 of my GPUs, meaning water cooling makes these cards really shine.

As for thermal paste application, what works for me is Thermal Grizzly Kryonaut (the best) or any other high-thermal-conductivity paste. I tested many pastes and different application methods. Making an X pattern from corner to corner on the die with a very thick layer of thermal paste, spread evenly with a spatula, worked best in my months-long testing for the best temperatures. You want a thick layer of thermal paste, as the mounting pressure will squish out the excess and it will end up perfect each time (I tested and re-tested this method against a pea-sized dot, a thin spread layer and others).

Also, a trick that works great is to install plastic washers under the screws around the die area to give you more mounting pressure. You want a lot of mounting pressure on the die, so tighten those screws as tight as you can get them, and then some more. I was asked in a PM about the best way to apply thermal paste and tricks to keep this beast cool, so I thought I would share my experiences with the RX 6900 XT. I hope this helps.

Kind Regards: Angelo


----------



## J7SC

@chispy ^^great write-up! Quick question re. the washers for the water-block: metal or plastic (or does it matter)? I would think that plastic washers might flatten / compress after a while


----------



## jonRock1992

J7SC said:


> @chispy ^^great write-up ! Quick question re. the washers for the water-block:: metal or plastic (or does it matter?). I would think that plastic washers might flatten / compress after a while


My waterblock came with plastic washers for this. I was also wondering if they would flatten and make a noticeable difference. I would assume that at some point they stop compressing any further.


----------



## chispy

J7SC said:


> @chispy ^^great write-up ! Quick question re. the washers for the water-block:: metal or plastic (or does it matter?). I would think that plastic washers might flatten / compress after a while


Or you can do what I did: use one metal washer and one plastic washer. The metal washer goes on top of the plastic washer; make sure the plastic one is the one touching the PCB, not the metal one.

Kind Regards: Chispy


----------



## Bart

Chispy: what settings do you change in MPT? So far I've _only_ raised the power limit to 375W and touched nothing else. My Asus TUF 6900XT seems to top out at 2750MHz under those conditions, so I'm wondering if there are more gains to be had by tweaking other MPT settings?


----------



## weleh

My Nitro+ SE 6800XT pulled more power than my Toxic 6900XT.
My 6900XT doesn't pull more than 380W no matter what I do.

This is my 6800XT


----------



## lawson67

weleh said:


> My Nitro+ SE 6800XT pulled more power than my Toxic 6900XT.
> My 6900XT doesn't pull more than 380W no matter what I do.
> 
> This is my 6800XT
> View attachment 2516847


Wow, that's strange. Surely that's down to the power stages and controllers the Toxic uses? The Red Devil RX 6800 XT and the Red Devil RX 6900 XT share the same board and power stages: Infineon TDA21472s rated at 70 amps each, 19 in total and 14 for the VDDCI, so it's total overkill for the card, especially the 6800 XT; it can handle way more power than you can throw at it. The only difference between the Devil RX 6800 XT and RX 6900 XT boards is that the 6800 lacks the third 8-pin connector, though the holes are there on the board. I highly suspect you could just solder a third 8-pin connector on and it would pull the extra juice. Anyhow, did pulling over 400 watts through your 6800 not worry you, given the two 8-pin connectors are rated at 300 watts total plus the 75 watts the PCIe slot can give, maxing out at 375 watts?
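Here's the rough back-of-envelope maths on why those power stages are overkill, as a quick Python sketch (the 70A per-stage rating is the spec figure quoted above; the 1.15V load voltage and the 10-stage core count are assumed illustration figures, not from a teardown):

```python
# Back-of-envelope VRM headroom: stage count x per-stage current rating
# x load voltage = theoretical deliverable power. The 70 A rating comes
# from the power-stage spec; the 1.15 V load voltage and the 10-stage
# core count are assumed figures purely for illustration.
def vrm_power_capacity(stages: int, amps_per_stage: float, volts: float) -> float:
    return stages * amps_per_stage * volts

# Even 10 core stages at 70 A each dwarf a ~400 W board power target:
print(vrm_power_capacity(10, 70, 1.15))  # roughly 805 W
```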


----------



## weleh

lawson67 said:


> Wow, that's strange. Surely that's down to the power stages and controllers the Toxic uses? The Red Devil RX 6800 XT and the Red Devil RX 6900 XT share the same board and power stages: Infineon TDA21472s rated at 70 amps each, 19 in total and 14 for the VDDCI, so it's total overkill for the card, especially the 6800 XT; it can handle way more power than you can throw at it. The only difference between the Devil RX 6800 XT and RX 6900 XT boards is that the 6800 lacks the third 8-pin connector, though the holes are there on the board. I highly suspect you could just solder a third 8-pin connector on and it would pull the extra juice. Anyhow, did pulling over 400 watts through your 6800 not worry you, given the two 8-pin connectors are rated at 300 watts total plus the 75 watts the PCIe slot can give, maxing out at 375 watts?


I suspect that the 6900 XT has more vdroop under load than the 6800 XT, hence the lower power usage. The 6800 XT can probably maintain close to 1.15V during load, while the 6900 XT will drop to somewhere around 1.125V.

The PCB design itself is probably similar, with a slight advantage to the Toxic.

Pulling 400W on 2x8-pin wouldn't worry me one bit, not only because I'm using a good PSU with good cables, but also because this was just for benchmarks and not prolonged usage. Another point is that the cables are officially rated at 150W, but you can pull close to double that and still be safe. It depends on ambient temps too, due to the possibility of the cables getting hot. 

Don't forget there are some 400W+ cards out there on 2 PCIe connectors.
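The vdroop point is just P = V x I: with the same TDC (current) limit, a card that droops harder dissipates less power. A quick sketch (the 330A TDC figure is hypothetical, purely for illustration):

```python
# P = V * I: with the TDC (current) limit fixed, the voltage the card
# actually holds under load sets the core power it can pull.
TDC_AMPS = 330  # hypothetical GFX TDC limit, purely for illustration

def core_power(load_volts: float, amps: float = TDC_AMPS) -> float:
    return load_volts * amps

p_6800xt = core_power(1.150)  # holds close to the set voltage under load
p_6900xt = core_power(1.125)  # droops harder under the same load
print(p_6800xt - p_6900xt)    # ~8 W less from vdroop alone
```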


----------



## lawson67

Yep, I was thinking that just last night; the R9 295X2 pulled up to 500 watts, I believe, with two 8-pin connectors.


----------



## chispy

Bart said:


> Chispy: what settings do you change in MPT? So far I've _only_ altered the power limit to 375W, and touched nothing else. My Asus TUF 6900XT seems to top out at 2750mhz under those conditions, so I'm wondering if there's more gains to be had there by tweaking other MPT settings?


You could try a bit more power limit, ~400-450W, and play with the TDC limit GFX A (around 400) and TDC limit SoC A (around 100+). I have not used MPT for a very long time, since I'm using Elmor's EVC2 to control all voltages and power limits. Every card likes a different PL; you have to test for yourself and find which value gets you the best output. I hope this helps.

Kind Regards: Chispy


----------



## kratosatlante

chispy said:


> You could try a bit more power limit ~400-450w + play with the tdc limit GFX A ( around 400 ) + tdc limit SoC A ( around 100+ ). I have not use MPT for a very long time since i'm using Elmor's evc2 to control all voltages and power limits. Every card like different PL you have to test for yourself and find wish value gets you the best output. I hope this helps.
> 
> Kind Regards: Chispy


Thanks for the tips. Can I do this on an RX 6900 XT Phantom on air? SoC 100+? I opened the card and repasted with MasterGel (11 W/mK I think); the hotspot went down, and core and VRAM temps a little too. I saw that the thermal pads 
barely touch the memory chips; they're close, but it is not enough. Which thermal pads do you recommend, and will it improve if I use Thermal Grizzly Extreme (14 W/mK)? On some drivers the card can do 2750, but the vdroop is harsh, and 1.110 crashes the driver; it generally holds 1.14-1.15.
















After changing the paste I was able to raise the core to 2715.


















I ran everything with this MPT config.


----------



## LtMatt

Does running a higher TDC limit on the SoC actually provide any performance benefits?


----------



## Pedropc

Hello everyone, is it still impossible to flash the BIOS on the RX 6900 XT? 

Thanks.


----------



## CantingSoup

Pedropc said:


> Hello everyone, is still unable to flash the bios of the RX6900 XT ???
> 
> Thanks.


You still can’t flash the bios.


----------



## jonRock1992

LtMatt said:


> Does running a higher TDC limit on the SoC actually provide any performance benefits?


I was reading in another forum that it helps with memory OC. I ended up increasing PL, TDC, and SoC by 15%.
Got a new personal best Time Spy GPU score: I scored 20,287 in Time Spy


----------



## lowrider_05

kratosatlante said:


> Thanks for the tips. Can I do this on an RX 6900 XT Phantom on air? SoC at 100+ A? I opened the card and repasted with MasterGel (11 W/mK, I think); the hotspot went down, and core and VRAM temps a little. I saw the thermal pads barely touch the memory chips; I adjusted them closer, but it is not enough. Which thermal pads do you recommend, and will it improve if I use Thermal Grizzly Extreme (14 W/mK)? On some drivers the card can do 2750, but the vdroop is hard: 1.110 V crashes the driver, so it generally holds 1.14-1.15 V.
> View attachment 2516942
> View attachment 2516943
> 
> 
> After changing the paste I was able to raise the core to 2715.
> 
> View attachment 2516944
> 
> 
> View attachment 2516945
> 
> Everything runs with this MPT config.


I just let the card consume as much power as it wants, but it rarely goes above 400 watts anyway.


----------



## ptt1982

lowrider_05 said:


> I just let the Card consume as much Power as it wants, but it rarely goes above 400 Watts anyways.


Hahaha! When I saw your MPT settings I almost spurted my Okinawa Sango IPA on my tv. That's certainly one way to look at it!


----------



## jonRock1992

lowrider_05 said:


> I just let the Card consume as much Power as it wants, but it rarely goes above 400 Watts anyways.
> View attachment 2517144


Is that dangerous? Lol. Whatever you do don't open up Furmark with those settings lol.


----------



## ZealotKi11er

jonRock1992 said:


> Is that dangerous? Lol. Whatever you do don't open up Furmark with those settings lol.


Nothing will happen. If FurMark does pull that much load, it will probably shut down the system.


----------



## lowrider_05

jonRock1992 said:


> Is that dangerous? Lol. Whatever you do don't open up Furmark with those settings lol.





ZealotKi11er said:


> nothing will happen. If furmark does pull that much load it will probably shut down the system.


Well, it is not dangerous if you have a card and VRM that can handle the load.
I have an ASRock OCF on a universal waterblock, the rest is cooled with copper heatsinks and two fans, and my 1300 W PSU can handle that just fine.

As you can see in the screens, FurMark is not that bad, it's "only" around 450 watts; the max power usage I can generate is with the 3DMark RT feature test, at 500+ watts.

Just so you don't get the wrong idea: if you are on the stock air cooler, DO NOT TRY THIS, it cannot handle the heat!


----------



## ZealotKi11er

lowrider_05 said:


> Well, it is not dangerous if you got a card and VRM that can handle the load
> I have a Asrock OCF on a Universal Waterblock and the rest is cooled with Copperheatsinks and 2 Fans and my 1300 Watt PSU can handle that just fine.
> 
> As you can see on the Screens, Furmark is not that bad its "only" around 450 Watts but the max Powerusage i can generate is with the 3D Mark RT Feature test with 500+ Watts.
> 
> just so you don´t get the wrong idea: If you are on the Stock Aircooler, DO NOT TRY THIS, they can not handle the heat!
> 
> View attachment 2517219
> View attachment 2517220


If anything your setup is more dangerous. An air cooled card will hit temp limits at 350w+


----------



## lowrider_05

ZealotKi11er said:


> If anything your setup is more dangerous. An air cooled card will hit temp limits at 350w+


Of course I mean that if you operate an air-cooled card with MPT limits set to 999 watts, then it's more dangerous. With stock settings there is no danger, because the card is operating at factory parameters. An air-cooled card with almost no power limit will most likely hit the 118C shutdown temp, and that is obviously more dangerous.


----------



## LtMatt

ZealotKi11er said:


> If anything your setup is more dangerous. An air cooled card will hit temp limits at 350w+


Yep, you need a great air cooler to survive above 350W.

My Merc Ultra can just about handle 400W at maximum fan speed while gaming (2700Mhz core clock in game) without throttling. Good for recording videos and benchmarks, but not practical for everyday silent gaming.


----------



## lawson67

LtMatt said:


> Yep, need a great air cooler to survive above 350W.
> 
> My Merc Ultra can just about handle 400W at maximum fan speed while gaming (2700Mhz core clock in game) without throttling. Good for recording videos and benchmarks, but not practical for every day silent gaming.


Since I put LM on my PowerColor Red Devil RX 6800 XT I can hit well over 350W on air and still be nowhere near shutdown temps. I am hitting up to 400W with a junction temp still in the 90s; in the example below, 384W at only 96C on the hotspot, and I have my fans capped at 75%. That screenshot was taken after about an hour of gaming in Metro Exodus Enhanced, and the max speed the fans hit was 2,200 RPM, so nowhere near the 75% cap. The fans idle at 1,000 RPM on the PowerColor.


----------



## lowrider_05

lawson67 said:


> Since i put LM on my Powercolor Red Devil RX 6800 XT i can hit well over 350w on air and still be no where near shut down temps, i am hitting up to 400w with a junction temp still in the 90's, example below 384w only 96c on hotspot and i have my fans set at 75% max, that screen shot was taken after about an hour of gaming on metro exodus enhanced and the max speed the fans hit was 2,200 so was no where near 75%, the fans idle at 1,000rpm on the Powercolor
> 
> 
> View attachment 2517285


Yeah well, LM does the trick


----------



## LtMatt

lawson67 said:


> Since i put LM on my Powercolor Red Devil RX 6800 XT i can hit well over 350w on air and still be no where near shut down temps, i am hitting up to 400w with a junction temp still in the 90's, example below 384w only 96c on hotspot and i have my fans set at 75% max, that screen shot was taken after about an hour of gaming on metro exodus enhanced and the max speed the fans hit was 2,200 so was no where near 75%, the fans idle at 1,000rpm on the Powercolor
> 
> 
> View attachment 2517285


Nice, your core clock is 100Mhz lower though, looking at your metrics.

I don't want to risk taking mine apart, as I tried it with a B-grade Merc (now sold) and saw worse results using Thermal Grizzly than the stock paste.

Perhaps LM makes a big difference if you get a good mount. There have been others here that tried LM with the PowerColor and saw worse results due to mounting.

Should we do some comparisons? 20 runs of the Timespy stress test, then share our results at a set voltage, clock speed and power limit.


----------



## lawson67

LtMatt said:


> Nice, your core clock is 100Mhz lower though looking at your metrics.
> 
> I don't want to risk taking mine apart as I tried it with a B grade Merc (now sold) and saw worse results using Thermal Grizzly than the stock solution paste.
> 
> Perhaps LM makes a big difference if you get a good mount. There have been others here that tried LM with the Powercolor and saw worse results due to mounting.


I can push it to 2700mhz at just over 400w, but I'm not getting better scores, I am just clock stretching. That's the sweet spot for my card, which results in a 21k graphics score in 3DMark TS, which is bloody good for an RX 6800 XT, so I leave it at that.

I also put my CPU score up to 12500 by simply adding an extra two sticks of CL16 3600mhz RAM, so four sticks of 8gb single-rank RAM across the four slots, which I read gives Zen 3 about a 10% performance boost, and it indeed does. If anyone wants some good quality and extremely well priced CL16 RAM, get it from Scan Computers (link below). It's Micron E-die and I can clock it at 4000mhz with a slight voltage boost (1.42v) and an FCLK of 2000mhz; the only problem I am getting, as most are at 2000mhz FCLK, is WHEA errors, so I am keeping it at 1800mhz FCLK with a RAM speed of 3600mhz and hoping this will be addressed with future AGESA patches.


https://www.scan.co.uk/products/16gb-2x8gb-corsair-ddr4-vengeance-rgb-pro-black-pc4-28800-3600-non-ecc-unbuffered-cas-16-19-19-36-xm


----------



## LtMatt

lawson67 said:


> I can push it to 2700mhz at just over 400w but not getting better scores i am just clock stretching, that's the sweet spot for my card which results with a 21k graphic score on 3Dmark TS which is bloody good for a RX6800XT so i leave it at that, i also put my CPU score up to 12500 by simply adding an extra 2 sticks cl16 3600mhz ram so 4 sticks of 8gb single rank ram over the 4 slots which gives zen 3 about a 10% performance boost so i read and it indeed does, and if anyone wants some good quality and extremely well priced CL 16 ram get it from scan computers link below, its micron E-die and i can clock it at 4000mhz with slight voltage boost @1.42v with a FLCK clock of 2000mhz the only problem i am getting as most are with a FLCK clock of 2000mhz is whea errors, i am hoping in time this will be addressed in with AGESA Patches
> 
> 
> https://www.scan.co.uk/products/16gb-2x8gb-corsair-ddr4-vengeance-rgb-pro-black-pc4-28800-3600-non-ecc-unbuffered-cas-16-19-19-36-xm
> 
> 
> 
> View attachment 2517289


That is a good score, my 6900 XT is just 7.3% faster on graphics score.

I thought you had a 6900 XT though, as you were posting in this thread, lol.

The 6800 XT should run a bit cooler as it has fewer CUs. Speaking of the 6800 XT, I have a Midnight Black (MBA) edition, but it is a bit of a lemon and can't even complete Timespy with a 2500Mhz max frequency set in Radeon Software.


----------



## lawson67

LtMatt said:


> That is a good score, my 6900 XT is just 7.3% faster on graphics score.
> 
> I thought you had a 6900 XT though as you were posting in this thread, Lol.
> 
> 6800 XT should run a bit cooler as less CUs. Speaking of 6800 XT i have a Midnight black (MBA) edition but it is a bit of a lemon and can't even complete Timespy with 2500Mhz max frequency set in Radeon Software.


I post in both threads; this one is more active so I hang out here more. As for your RX 6800 XT, have you tried upping the power limit in MPT to, say, 300w? It should easily be able to do 2500mhz, or it really is a lemon. I am running mine at 330w with the power limit at +15%, which feeds it just over 380 watts.


----------



## LtMatt

lawson67 said:


> I post in both threads; this one is more active so I hang out here more. As for your RX 6800 XT, have you tried upping the power limit in MPT to, say, 300w? It should easily be able to do 2500mhz, or it really is a lemon. I am running mine at 330w with the power limit at +15%, which feeds it just over 380 watts.


Yep, nothing can save it. So it's now sitting on the mantlepiece (in front of my pc) for the rest of its life, or until my 6900 XT dies when it may get a chance at redemption.


----------



## lawson67

LtMatt said:


> Nice, your core clock is 100Mhz lower though looking at your metrics.
> 
> I don't want to risk taking mine apart as I tried it with a B grade Merc (now sold) and saw worse results using Thermal Grizzly than the stock solution paste.
> 
> Perhaps LM makes a big difference if you get a good mount. There have been others here that tried LM with the Powercolor and saw worse results due to mounting.
> 
> We should do some comparisons? 20 runs of Timespy stress test and then share our results at a set voltage, clock speed and power limit?


You want to do TS comparisons with your RX 6800 XT vs mine? There's not much point in doing voltage comparisons with your RX 6900 XT vs mine, as they run different voltage curves, and even doing the same on two RX 6800 XTs won't prove much due to the silicon lottery.

What got my RX 6800 XT to run much faster was to set, in MPT, max voltage SoC to 1030mv (this was the most important setting I changed, as it started to boost much higher with better scores after that), then max power limit GPU 330w, max TDC limit 330a, then TDC SoC 70a. My Wattman settings are below with the 15% power limit set, which lets it draw around 383 watts. With the power limit at 15%, leaving 1150mv set in MPT for the core and setting 1020mv for the core in Wattman allows the core to draw whatever voltage it needs, which typically tops out at 1099mv after a long gaming session in HWiNFO64. If I set max SoC voltage in MPT back to 1150mv, the core will always draw 1150mv at 15% PL regardless of what you set in Wattman for core voltage, making the card run much hotter and not boosting anywhere near as far. So for me, lowering max SoC voltage was the magic key to boost my card's frequencies at a lower core voltage, resulting in lower temps and higher boosts.
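For anyone following the numbers above: the Wattman slider scales the MPT base power limit multiplicatively, which is how a 330W MPT limit plus 15% lands at roughly the 380+ watts reported. A minimal sketch of that arithmetic (this is the commonly understood behaviour of the slider, not an AMD-documented API):

```python
# Effective board power limit = MPT "Power Limit GPU" scaled by the
# Wattman power-limit slider. Numbers match the setup above: 330 W base, +15%.
def effective_power_limit(mpt_watts, slider_percent):
    # integer-friendly form avoids float noise for whole-percent sliders
    return mpt_watts * (100 + slider_percent) / 100

print(effective_power_limit(330, 15))  # -> 379.5, i.e. just over 380 W
```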


----------



## LtMatt

lawson67 said:


> You want to do TS comparisons with your RX 6800 XT vs mine?, not much point in doing voltage comparisons with your RX 6900 XT vs mine as they run different voltages curves, even doing the same on two RX 6800 XT wont prove much due to silicon lottery, but what got my RX 6800 XT to run much faster was to set in MPT (max voltage soc 1030mv, this was the most important setting i changed as it started to boost much higher with better scores after that) then max power limit GPU 330W, max TDC limit 330A then TDC SOC 70A and here are my wattman settings below with 15% power limit set which lets it draw around 383 watts
> 
> View attachment 2517290


Nah, can't be bothered to even plug it in.


----------



## J7SC

A question on MPT, specifically the 'Power and Voltage' tab and, within it, the Power Limit GPU (in watts) and GFX TDC Limit (in amps). While those measures are related, they are not the same.

For now, I keep the percent difference from stock the same, i.e. when increasing the PL (W), I increase GFX TDC (A) by a similar percentage. Looking at various vBIOSes for the 6900s in stock form, the relationship between the two parameters does vary depending on the card.
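If it helps to see that rule of thumb concretely, here is a tiny sketch: scale the TDC limit (amps) by the same percentage as the power limit (watts), so the stock W/A relationship is preserved. The stock values below are placeholders, not from any specific card's vBIOS:

```python
# Raise PL (watts) and GFX TDC (amps) by the same percentage,
# preserving the stock watts-per-amp ratio of the vBIOS.
def scale_limits(stock_pl_w, stock_tdc_a, percent):
    return (stock_pl_w * (100 + percent) / 100,
            stock_tdc_a * (100 + percent) / 100)

pl, tdc = scale_limits(stock_pl_w=255, stock_tdc_a=300, percent=20)
print(pl, tdc)  # -> 306.0 360.0, same +20% on both
```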


----------



## lawson67

J7SC said:


> A question on MPT, specifically the 'Power and Voltage' tab and, within it, the Power Limit GPU (in watts) and GFX TDC Limit (in amps). While those measures are related, they are not the same.
> 
> For now, I keep the percent difference from stock the same, i.e. when increasing the PL (W), I increase GFX TDC (A) by a similar percentage. Looking at various vBIOSes for the 6900s in stock form, the relationship between the two parameters does vary depending on the card.


Yes, keeping them at about the same percentage is fine. You can also raise the TDC GFX higher than the GPU PL if you want, as Igor's Lab has done in the link below.


AMD’s Radeon RX 6800 stable with continuous 2.55 GHz and RX 6800 XT overclocked up to 2.5 GHz - Thanks to MorePowerTool and board partner BIOS | Page 2 | igor'sLAB
www.igorslab.de


----------



## cfranko

Is it normal for my 6900 XT Phantom to sit at a 92-95C hotspot at 330 watts, even after a repaste? I repasted it with MX-4.


----------



## LtMatt

cfranko said:


> Is it normal for my 6900 XT Phantom to sit at 92-95 hotspot at 330 watts even after a repaste? I Repasted it with MX-4.


What fan speed (RPM) and noise?

But yes, that sounds normal. Did you record the temps and delta between edge/junction before your repaste?


----------



## cfranko

LtMatt said:


> What fan speed (RPM) and noise?


2200 RPM, not too loud if you ask me.



LtMatt said:


> But yes, that sounds normal. Did you record the temps and delta between edge/junction before your repaste?


I was so anxious about the repaste that I wanted to quickly do it and be finished, so there was no time for benchmarking before repasting lol. But I do remember that my junction temp was 105-107 at 350 watts and edge was 77 before the repaste.


----------



## LtMatt

cfranko said:


> 2200 RPM, not too loud if you ask me.
> 
> 
> I was so anxious about the repaste that I wanted to quickly do it and be finished so no time for benchmarking before repasting lol, but I do remember that my junction temp was 105-107 at 350 watts and edge was 77 before the repaste


Seems like a good job then. Is yours an XTXH chip?


----------



## cfranko

LtMatt said:


> Seems like a good job then. Is yours an XTXH chip?


No, mine is not an XTXH; the maximum voltage I can give is 1175 mV, not 1200 mV. The clocks are bad as well: in Time Spy anything above 2600 MHz crashes.

This was how I applied the paste, if anyone wants to take a look.


----------



## marcoschaap

lowrider_05 said:


> Well, it is not dangerous if you got a card and VRM that can handle the load
> I have a Asrock OCF on a Universal Waterblock and the rest is cooled with Copperheatsinks and 2 Fans and my 1300 Watt PSU can handle that just fine.
> 
> As you can see on the Screens, Furmark is not that bad its "only" around 450 Watts but the max Powerusage i can generate is with the 3D Mark RT Feature test with 500+ Watts.
> 
> just so you don´t get the wrong idea: If you are on the Stock Aircooler, DO NOT TRY THIS, they can not handle the heat!
> 
> View attachment 2517219
> View attachment 2517220


Can I ask which waterblock you use? I'd love to get my hands on an ASRock 6900XT OC Formula, but I'm not aware of a compatible waterblock.


----------



## LtMatt

Anyone here using a Sapphire Toxic with the AIO? Either XTX or XTXH?
I'd like to know how the temps are under various power limits and fan speeds, and what the delta is between edge and junction temp under heavy load.


----------



## cfranko

LtMatt said:


> Anyone here using a Sapphire Toxic with the AIO? Either xtx or xtxh?
> Would like to know how the temps are under various power limits and fan speeds and what the delta is between edge and junction temp under heavy load?


@weleh is using the 6900 xt toxic with its included aio


----------



## ptt1982

New drivers released, and I am happy to say the core clocks are back up at least +50mhz (based on one cold run), and most importantly the score is 600 points higher. My personal best in Timespy was 21900 with the Red Devil 6900xt (non-Ultimate) at 2650mhz, and now the score went to 22580. This is RTX 3090 OC level; the driver adds another ~3% gain in performance. Sadly, the junction temps also spiked 10C, but are still within spec. These settings are MPT 350W/375TDP, and I saw spikes to 405W in GT2 in TS based on MSI Afterburner.


----------



## LtMatt

ptt1982 said:


> New drivers released, and am happy to say the core clocks are back at least up +50mhz (based on one cold run), and most importantly the score is 600 points higher. My personal best in Timespy was 21900 with Red Devil 6900xt (non-ultimate) at 2650mhz, and now the score went to 22580. This is RTX 3090 OC level. The drivers adds another 3% gain in performance. Sadly, the junction temps also spiked 10C, but are still within the spec. These settings are MPT 350W/375TDP, and I saw spikes to 425W in GT2 in TS.
> 
> View attachment 2517563


Very nice, I will try 21.7.1 out tomorrow. I had a similar graphics score on air on 21.6.1 with 400W set in MPT.


----------



## LtMatt

Wow, another 3.6% performance increase in 21.7.1 in Timespy. Seriously impressive!! The gains in Timespy from 21.3.2 - 21.7.1 have got to be close to 10%.

My test conditions were kept identical each time so should be accurate comparisons on the same system and settings.

I think I'll be challenging 3090's on water if I overclock my core clock on this Nvidia friendly benchmark.

6900 XT Stock Core Clock @2379Mhz/2479Mhz / 2124Mhz
21.6.1
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

6900 XT Stock Core Clock @2379Mhz/2479Mhz / 2124Mhz
21.6.2
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

6900 XT Stock Core Clock @2379Mhz/2479Mhz / 2124Mhz
21.7.1
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
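As a sanity check on the "close to 10%" figure: per-driver gains compound multiplicatively, so a few releases in the +3% range do land near +10% overall. A quick sketch — only the +3.6% figure comes from this post; the other two percentages are illustrative stand-ins for earlier drivers:

```python
# Compound a list of per-release percentage gains into one overall gain.
def cumulative_gain(per_release_gains_percent):
    total = 1.0
    for g in per_release_gains_percent:
        total *= 1 + g / 100  # each driver multiplies the previous score
    return (total - 1) * 100

# e.g. three releases at roughly +3% each:
print(round(cumulative_gain([3.0, 3.1, 3.6]), 1))  # -> 10.0
```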


----------



## weleh

Thanks all for the testing.

Gonna try it myself.

21.6.1 increased my scores a bit, but at the moment I'm at 26-27C ambient in my room, so testing is limited.


----------



## LtMatt

weleh said:


> Thanks all for the testing.
> 
> Gonna try it myself.
> 
> 21.6.1 increased my scores a bit but atm, I'm on 26-27C ambients in my room so testing is limited


Can you let me know how your Toxic is? Getting the urge to get the XTXH version again. 

What is the delta between edge/junction temperature at 400W+ load say under multiple Timespy runs?
What's your best Timespy score overclocked?
What's your general experience and opinion of the card?


----------



## ptt1982

LtMatt said:


> Can you let me know how your Toxic is? Getting the urge to get the XTXH version again.
> 
> What is the delta between edge/junction temperature at 400W+ load say under multiple Timespy runs?
> What's your best Timespy score overclocked?
> What's your general experience and opinion of the card?


To give you a reference point, I've noticed that in my very basic custom loop the Red Devil junction temp goes past 25C delta once the wattage goes above 370W, and peaks at around 35C delta when it hits the maximum of 405W (possibly it hits higher, but the spikes are so fast the software cannot catch it. TDP allows up to 432W.) This might be specific to the Red Devil and the way its power is regulated.

Personally, I would not want to run the card past 400W constantly given that the edge/junction delta starts to get exponentially higher with every 5W-10W you add past 390W range. Just based on my intuition, feels like 6900xt was not designed to run past 400W, given the junction temp spiraling out of control. I think it does enjoy the extra 80W breathing room I've given it, though.

That being said, an XTXH might run entirely differently with the 1.2v and having a better silicon quality.
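One way to watch for the behaviour described above is to log board power plus edge and junction temps, compute the junction-minus-edge delta per sample, and flag anything past 25C. A toy sketch with invented sample numbers (not from a real log):

```python
# HWiNFO-style samples: (board power W, edge temp C, junction temp C).
# Invented values, for illustration only.
samples = [
    (350, 55, 76),
    (370, 56, 82),
    (405, 58, 93),
]

# Flag samples where the junction-edge delta exceeds 25 C.
flagged = [(w, tj - te) for w, te, tj in samples if tj - te > 25]
print(flagged)  # -> [(370, 26), (405, 35)]
```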


----------



## LtMatt

ptt1982 said:


> To give you a reference point, I've noticed that in my very basic custom loop the Red Devil junction temp goes past 25C delta once the wattage goes above 370W, and peaks at around 35C delta when it hits the maximum of 405W (possibly it hits higher, but the spikes are so fast the software cannot catch it. TDP allows up to 432W.) This might be specific to the Red Devil and the way its power is regulated.
> 
> Personally, I would not want to run the card past 400W constantly given that the edge/junction delta starts to get exponentially higher with every 5W-10W you add past 390W range. Just based on my intuition, feels like 6900xt was not designed to run past 400W, given the junction temp spiraling out of control. I think it does enjoy the extra 80W breathing room I've given it, though.
> 
> That being said, an XTXH might run entirely differently with the 1.2v and having a better silicon quality.


I was not impressed with the XTXH in air, it murdered the beastly Merc cooler. Just wondering how much better an AIO will fare and the Sapphire seems the best of the bunch.

The max junction temp of 95c might not be an issue with an AIO either. Sadly there are no reviews of the XTXH version so no comparisons or user feedback available for that model.


----------



## weleh

Ok,

I just ran Time Spy with the latest 21.7.1 drivers.
I got 1000 more graphics score than my previous run... Mind-blowing...

www.3dmark.com

This is at 2600-2700 (2680 Mhz average), 400W.

About the Toxic: I have the XTX chip, and it's pretty well binned out of the box (at least 2660 MHz via Sapphire Trixx). I can game at 2750 MHz effective, however for Time Spy I have to drop it down to the 2700 range on the latest drivers.

Temps wise, I'm at very high ambients atm (27ºC inside my room) and this is what it looked like during this run. For everyday usage Core usually sits at 50ºC and Junction at like 60-65ºC. Memory, VRM and other crap cooled by the air part of the card remains at 50ºC no matter what.

I did replace the fans though, using NF A12x25's instead of the stock ones.


----------



## LtMatt

weleh said:


> Ok,
> 
> I just ran Time Spy with the latest 2.7.1 drivers.
> I got 1000 Graphic Score more than my previous run... Mind blowing...
> 
> www.3dmark.com
> 
> View attachment 2517595
> 
> This is at 2600-2700 (2680 Mhz average), 400W.
> 
> About the Toxic, I have the XTX chip, it's pretty good binned out of the box due to Sapphire Trixx (2660 Mhz at least). I can game at 2750 Mhz effective however for Time Spy I have to drop it down to 2700 range on latest drivers.
> 
> Temps wise, I'm at very high ambients atm (27ºC inside my room) and this is what it looked like during this run. For everyday usage Core usually sits at 50ºC and Junction at like 60-65ºC. Memory, VRM and other crap cooled by the air part of the card remains at 50ºC no matter what.
> 
> I did replace the fans though, using NF A12x25's instead of the stock ones.
> 
> View attachment 2517594


Thanks, great feedback. Was that with 100% fan speed? Do the stock fans move more air and have better pressure?

25c Delta decent with 400W.


----------



## cfranko

ptt1982 said:


> New drivers released, and am happy to say the core clocks are back at least up +50mhz (based on one cold run), and most importantly the score is 600 points higher. My personal best in Timespy was 21900 with Red Devil 6900xt (non-ultimate) at 2650mhz, and now the score went to 22580. This is RTX 3090 OC level. The drivers adds another 3% gain in performance. Sadly, the junction temps also spiked 10C, but are still within the spec. These settings are MPT 350W/375TDP, and I saw spikes to 405W in GT2 in TS based on MSI Afterburner.
> 
> View attachment 2517563


If you are on air, what was your junction temp when the power spiked to 405 watts? There is a specific section of Time Spy where my GPU power jumps up to 410 watts and the junction temp jumps to 110 degrees Celsius. Is 110 high for 410 watts? I want to compare this with other people because I think it is higher than it should be.


----------



## jimpsar

Great scores!! AMD did it again!  

This is mine on air, XFX Merc 319 Black:

I scored 21 840 in Time Spy
AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
www.3dmark.com


----------



## weleh

XTXH chips will be kings for benchmarks with drivers like these...

Incredible stuff from AMD.


----------



## LtMatt

jimpsar said:


> Great Scores !!Amd did it again !
> 
> This is mine on Air XFX Merc 319 Black
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 840 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 
> View attachment 2517596


Great score, I’ve beaten your graphics score but not by much.


----------



## LtMatt

Just breached 22,000 total points, and 23200 graphics points. GPU at the limit now, managed a run at 2740, 2745 usually crashes but trying it anyway.


----------



## cfranko

LtMatt said:


> Just breached 22,000 total points, and 23200 graphics points. GPU at the limit now, managed a run at 2740, 2745 usually crashes but trying it anyway.


How do you do 2740 in Time Spy? Mine crashes at 2635.


----------



## LtMatt

cfranko said:


> How do you do 2740 at time spy? Mine crashes at 2635


Silicon lottery. I just managed 2745, lost points at 2750. Think I just got my best possible score for air, one more run at 2745 and will post results.


----------



## weleh

Someone with an XTXH card needs to run this with these drivers.


----------



## LtMatt

That's my limit on air. 

5950X PBO
6900 XT Merc Air 2645/2745/2124 FT
21.7.1

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

*SCORE 22 123 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X*
Graphics Score
23 420
CPU Score
16 842


----------



## LtMatt

weleh said:


> Someone with an XTXH card needs to run this with these drivers.


Will do tomorrow, using a Toxic. 

@L!ME you should rerun this with 21.7.1.


----------



## marcoschaap

I tried Alphacool's "send one in, get one for free" service for the ASRock RX 6900XT OC Formula, but sadly the interest is zero... 😔


----------



## lawson67

Wow, I have just managed to get the 2nd-fastest graphics score in the world for a Ryzen 7 5800X with a single RX 6800 XT on those new drivers. I got a graphics score of 21861, which I have just bettered with a score of 21871, all on air with my trusty Red Devil lol. Overall I am 26th on 3DMark.


----------



## amigafan2003

weleh said:


> Someone with an XTXH card needs to run this with these drivers.


Result not found
www.3dmark.com


----------



## lawson67

Those new drivers seem to have broken FreeSync for me; the Tomb Raider benchmark on my 65" OLED is nowhere near as buttery as before.


----------



## jonRock1992

Sounds like the new drivers are a win. I haven't run any benches with them yet, but gaming was noticeably smoother for me. I was playing Doom Eternal last night (maxed out with ray tracing @ 1440p) with the latest driver and the experience was perfect at 2800 MHz on my core clock and 2130 MHz fast timings on my memory. It's been pretty hot in my house recently, like 78F, so I haven't been benching. But I might try a Timespy run sometime soon.


----------



## LtMatt

jonRock1992 said:


> Sounds like the new drivers are a win. I haven't ran any benches with it yet, but gaming was noticeably smoother for me. I was playing doom eternal last night (maxed out with ray tracing @ 1440p) with the latest driver and the experience was perfect at 2800 MHz on my core clock and 2130 MHz fast timings in my memory. It's been pretty hot in my house recently, like 78F, so I haven't been benching. But I might try a Timespy run sometime soon.


Those are nice clocks Jon considering Doom has RT. RT usually nerfs core clock more than anything else I've seen.


----------



## jonRock1992

LtMatt said:


> Those are nice clocks Jon considering Doom has RT. RT usually nerfs core clock more than anything else I've seen.


I'm surprised, but it worked for a good hour straight with no stutters nor crashes. I haven't tried Metro Exodus Enhanced Edition with the new drivers yet, but I got a crash at 2800 MHz in that game with the previous driver. So far Metro is the most graphically demanding game I've come across.


----------



## lestatdk

Around +700 graphics score for me.


----------



## lawson67

jonRock1992 said:


> I'm surprised, but it worked for a good hour straight with no stutters nor crashes. I haven't tried Metro Exodus Enhanced Edition with the new drivers yet, but I got a crash at 2800 MHz in that game with the previous driver. So far Metro is the most graphically demanding game I've come across.


Yep, Metro Exodus Enhanced Edition is the best game I have for testing OC stability.


----------



## jonRock1992

I'm hoping I get the same boost as everyone else with this driver. If so I should be able to break 23k GPU score in Timespy.


----------



## J7SC

...good to see that AMD continues to work on the drivers  ...I hope they can also do a bit more re. ray tracing and Super Resolution as my 6900XT scores about the same as my 2080 Ti in Port Royal


----------



## ZealotKi11er

J7SC said:


> ...good to see that AMD continues to work on the drivers  ...I hope they can also do a bit more re. ray tracing and Super Resolution as my 6900XT scores about the same as my 2080 Ti in Port Royal


How much does 2080 Ti score? I got 11.5K with 6900 XT and 12.5K with 3080.


----------



## weleh

ZealotKi11er said:


> How much does 2080 Ti score? I got 11.5K with 6900 XT and 12.5K with 3080.


My best PR is 12400 ish.


----------



## weleh

Actually the fastest 6900XT on hwbot and 3dmark.

Weleleh`s 3DMark - Port Royal score: 12456 marks with a Radeon RX 6900 XT
hwbot.org


----------



## J7SC

ZealotKi11er said:


> How much does 2080 Ti score? I got 11.5K with 6900 XT and 12.5K with 3080.


single 2080 Ti: ~11,000 PR
dual 2080 Ti: ~21,200 PR
single 3090: ~15,300 PR


----------



## jonRock1992

Holy crap guys the new driver is insane. I scored 21 010 in Time Spy
23821 Timespy GPU score with my Red Devil XTX-H.


----------



## cfranko

LtMatt said:


> That's my limit on air.
> 
> 5950X PBO
> 6900 XT Merc Air 2645/2745/2124 FT
> 21.7.1
> 
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> *SCORE 22 123 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X*
> Graphics Score
> 23 420
> CPU Score
> 16 842












You sure your card is on air?
I don't understand how people have such cool cards while mine is blazing hot.



jonRock1992 said:


> Holy crap guys the new driver is insane. I scored 21 010 in Time Spy
> 23821 Timespy GPU score with my Red Devil XTX-H.


Insane score you got there. Now I really want to try my 6900 XT Phantom on the new drivers when I get home.


----------



## jonRock1992

cfranko said:


> View attachment 2517598
> 
> 
> You sure you card is on air ?
> I don’t understand how people have such cool cards while mine is blazing hot.
> 
> 
> Insane score you got there, Now I really want to try my 6900 xt phantom on the new drivers when I get home.


Thanks. Two things have changed since my last run: I updated to Windows 11 and I'm using the latest driver. But my personal best before was 22774; my score went up by over 1K lol. Also, if his card is actually on air that would be insane. There would be no benefit in going to water cooling for him, besides a quieter system lol.


----------



## LtMatt

cfranko said:


> View attachment 2517598
> 
> 
> You sure you card is on air ?
> I don’t understand how people have such cool cards while mine is blazing hot.
> 
> 
> Insane score you got there, Now I really want to try my 6900 xt phantom on the new drivers when I get home.


Lol.
Yes I am on air using the Merc cooler. I have air con in my room so had the ambient down at 18c which helped.


----------



## jonRock1992

LtMatt said:


> Lol.
> Yes I am on air using the Merc cooler. I have air con in my room so had the ambient down at 18c which helped.


Damn. That Merc is one good GPU. It's cooling your GPU just as well as my custom 360mm loop. Then again my ambient temp was at 76F for that Timespy run I just did.


----------



## cfranko

LtMatt said:


> Lol.
> Yes I am on air using the Merc cooler. I have air con in my room so had the ambient down at 18c which helped.


My Asrock Phantom 6900 XT hits 110C hotspot at 410 watts.
It may be because it's 40 degrees outside. I am going to do a custom loop because I can't raise the power limit freely, which bothers me. Wish I had a Merc; apparently it is the best air-cooled card.


----------



## LtMatt

jonRock1992 said:


> Damn. That Merc is one good GPU. It's cooling your GPU just as well as my custom 360mm loop. Then again my ambient temp was at 76F for that Timespy run I just did.


Amazing what air con on the wall on full blast pointing towards my chassis can do tbh.


----------



## cfranko

LtMatt said:


> Amazing what air con on the wall on full blast pointing towards my chassis can do tbh.


The 6900 XT Merc 319 Limited Edition is on sale right now where I live and I am trying very hard not to get one; it's an XTXH afaik. But I would have to sell my current Phantom after getting the Merc, and I am afraid that I won't be able to sell the Phantom. Not worth the risk, but damn, it's already a very good card; can't imagine how good the XTXH version is.


----------



## weleh

Is anyone in here with EVC on their card?

I want to know how worth it it is...


----------



## LtMatt

cfranko said:


> The 6900 XT Merc 319 Limited Edition is on sale right now where I live and I am trying very hard not to get one, its a XTXH afaik. But I would have to sell my current Phantom after getting the merc and I am afraid that I won’t be able to sell the Phantom. Not worth the risk but damn it’s already very good card, can’t image how good the XTXH version is.


Don't get the XTXH, it's useless on air. I had two of the XTXH Mercs and both were slower in Timespy than my XTX Merc due to a) a 10C lower max hotspot temp and b) a higher power draw which the cooler could not manage.
The XTXH needs water or an AIO.


----------



## cfranko

LtMatt said:


> a lower max hotspot temp by 10c


What do you mean by this? Does it start throttling at a lower temp?


----------



## LtMatt

cfranko said:


> What do you mean by this? Does it start throttling at a lower temp?


Yep 95c vs 110c on the XTX.


----------



## lowrider_05

marcoschaap said:


> Can I ask which waterblock do you use? I'd love to get my hands on a Asrock 6900XT OC Formula, but I'm unaware of compatible waterblock.


I bought this one: aliexpress.com/item/32223454070.html and
these for the mem cooling: aliexpress.com/item/4000670979331.html

but just save yourself the trouble and get a pre-blocked one like the Liquid Devil.

On another note, I was able to flash the XTXH LC AMD BIOS on my card and now my mem speed can go up to 2370MHz!!!
Which put me in the #1 spot of TSE for my hardware config, and that with a low CPU score:


----------



## HeLeX63

My TS run. 6900XT Red Devil (non-XTXH). 2650MHz max, 2550MHz min.

Is this a good/bad/average score ?

What constitutes a "LEGENDARY" score ???


----------



## weleh

lowrider_05 said:


> I bought this one: aliexpress.com/item/32223454070.html and
> those for the Mem cooling: aliexpress.com/item/4000670979331.html
> 
> but just Save you the trouble and get an Preblocked one like the Liquid Devil.
> 
> on another note, i was able to flash the XTXH LC AMD Bios on my card and now my mem speed can go up to 2370 Mhz!!!
> wich set me on the NR1 Spot of TSE in my Hardware config and that with a low CPU Score:
> 
> View attachment 2517686


You sure memory that high is helping?

This is my best run of TSE with a non XTXH card.
11093 Graphic Score. 

What card do you have?


----------



## Henry Owens

Everyone should watercool it if you can!


----------



## lawson67

HeLeX63 said:


> My TS run. 6900XT Red Devil (Non XTH). 2650MHz max, 2,550MHz min.
> 
> Is this a good/bad/average score ?
> 
> What constitutes a "LEGENDARY" score ???
> 
> View attachment 2517738


That's a good score, but you should be able to do better; I am getting about the same graphics score as you with my Red Devil RX 6800 XT on air. Don't worry too much about the CPU score, as that can change massively from a Ryzen 7 5800X to a Ryzen 9 5900X or Ryzen 9 5950X, yet in games there's no difference in performance; the only difference is that the other CPUs have more cores, which gives you a better CPU score and overall score in 3DMark and other benchmarks. If you're interested in just how fast your graphics card is going, concentrate on the graphics score: the higher that goes, the faster your graphics card is running!


----------



## HeLeX63




----------



## HeLeX63

lawson67 said:


> That's a good score but you should be able to do better, i am getting about the same graphic score as you with my Red Devil RX 6800 XT on air, Don't worry to much about the CPU score as that can change massively from an Ryzen 7 5800x to a Ryzen 9 5900x and a Ryzen 9 5950x yet in games there's no difference in performance, the only difference being the other CPU's have more cores which gives you a better CPU score and overall score in 3Dmark and other benchmarks, if your interested in just how fast your graphic card is going concentrate on the graphic score, the higher that goes the faster your graphic card is running!


My settings are above. Am I doing anything wrong ? I can't go above 2650Mhz on core max on TS Extreme.


----------



## Henry Owens

Is everyone using MPT?


----------



## lawson67

HeLeX63 said:


> My settings are above. Am I doing anything wrong ? I can't go above 2650Mhz on core max on TS Extreme.


Some GPUs just can't go much above 2650MHz; it's the silicon lottery. Here are my settings on my RX 6800 XT. I would try lowering your max SOC voltage to something similar to mine; that allowed my card to run slightly cooler with less Vcore voltage, letting me boost higher.


----------



## lawson67

Henry Owens said:


> Is everyone using MPT?


Without MPT I would not be able to push my card so hard, so I guess most others on here are using MPT too, as it gives you access to voltages and power limits that you normally wouldn't be able to access.


----------



## Henry Owens

lawson67 said:


> Without MPT i would not be able to push my card so hard, so i guess most others on here are using MPT as it gives you access to voltages and power limits that normally you wouldn't be able to access.


What limits for a reference would you recommend if heat is no issue?


----------



## lawson67

Henry Owens said:


> What limits for a reference would you recommend if heat is no issue?


That all depends on your card's power phases: how many there are, the controllers they use, and what they can handle. The Red Devil RX 6800 XT and Red Devil RX 6900 XT share the same board and power stages, Infineon TDA21472s rated at 70 amps each. There are 19 in total: 14+2 for the Vcore, 2 for the Vmem and one for the VDDCI, so the board can handle way more power than I can throw at it. Each card uses a different configuration of power phases, so knowing this is important, but I would imagine most can take 400W. Again, I don't know your card or its power phases, so be careful. For reference, 330W in MPT with a 15% power limit in Wattman allows my card to pull 384W.
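For anyone wanting to sanity-check figures like these, here's a rough sketch of the arithmetic. The per-stage rating and phase counts are the ones quoted above; the ~1.1V core voltage is purely an illustrative assumption:

```python
# Rough VRM headroom arithmetic for the Red Devil boards described above.
# Per-stage rating (70 A) and phase counts (14+2 Vcore, 2 Vmem, 1 VDDCI)
# come from the post; the core voltage is an illustrative assumption.

STAGE_CURRENT_A = 70  # rated current of one Infineon power stage

phases = {"vcore": 16, "vmem": 2, "vddci": 1}  # 19 stages total

vcore_capacity_a = phases["vcore"] * STAGE_CURRENT_A
print(f"Vcore rail rated for {vcore_capacity_a} A on paper")  # 1120 A

# At an assumed ~1.1 V core voltage that is a huge theoretical ceiling,
# which is why ~400 W draws are nowhere near the VRM's limit:
print(f"theoretical Vcore power: ~{vcore_capacity_a * 1.1:.0f} W")

# Wattman's power-limit slider is a percentage on top of the MPT value:
mpt_limit_w = 330
wattman_pct = 15
effective_w = mpt_limit_w * (1 + wattman_pct / 100)
print(f"effective GPU power limit: {effective_w:.1f} W")  # 379.5 W
# (The 384 W observed above is presumably total card draw, which reads
# a bit higher than the GPU-only limit MPT controls.)
```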


----------



## Henry Owens

Got this score with 2670MHz core, 1095mV, 400W/375A (MPT), 2150MHz VRAM. Reference card, watercooled. Max temp about 65C hotspot.


----------



## LtMatt

That cpu score is great for a 12 core.


----------



## Henry Owens

LtMatt said:


> That cpu score is great for a 12 core.


Thanks, it's a pretty decent CPU I think. It won't do 2000 FCLK like my 5800X that I returned before this one though.


----------



## jonRock1992

Henry Owens said:


> thanks its a pretty decent CPU I think. It wont do 2000FLK like my 5800x that I returned before this one though


I bet that 5800X you had would be better at gaming if it was fully tuned with PBO, Curve Optimizer, max stable FCLK, and optimized memory timings in 1:1 mode. But I'm assuming you needed 12 cores for a reason lol. I have my 5800X running with FCLK @ 2067MHz, RAM @ 4133MHz CL16 with optimized timings, PBO enabled with 5050MHz boost clock, Curve Optimizer @ -10, and since I have a Dark Hero I'm using Dynamic OC Switcher as well. It's pairing very nicely with the 6900 XT.
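For context on the 1:1 mode mentioned here: DDR4's advertised speed is in MT/s, which is double the actual memory clock, and "coupled" 1:1 mode means FCLK matches that memory clock. A tiny sketch using the numbers from the post:

```python
# 1:1 ("coupled") mode check for Zen 3: FCLK should equal MEMCLK,
# and MEMCLK is half the DDR4 MT/s figure (double data rate).

def is_coupled(ram_mts: int, fclk_mhz: int, tol_mhz: float = 1.0) -> bool:
    memclk = ram_mts / 2
    return abs(memclk - fclk_mhz) <= tol_mhz

# The configuration quoted above: RAM at 4133 MT/s with FCLK at 2067 MHz
print(is_coupled(4133, 2067))  # True (2066.5 vs 2067, within rounding)

# For contrast, 4000 MT/s RAM with FCLK left at 1900 MHz would run
# decoupled (2:1 UCLK mode) and take a latency hit:
print(is_coupled(4000, 1900))  # False
```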


----------



## lawson67

jonRock1992 said:


> I bet that 5800X you had would be better at gaming if it was fully tuned with PBO, Curve Optimizer, max stable FCLK, and optimized memory timings in 1:1 mode. But I'm assuming you needed 12 cores for a reason lol. I have my 5800X running with FCLK @ 2067MHz, RAM @ 4133MHz CL16 with optimized timings, PBO enabled with 5050MHz boost clock, Curve Optimizer @ -10, and since I have a Dark Hero I'm using Dynamic OC Switcher as well. It's pairing very nicely with the 6900 XT.


You can get a stable FCLK of 2000MHz with no WHEA errors? I can't without corrected hardware CPU errors. The PC runs fine and never crashes, but a 2000MHz FCLK is normally unstable for most people unless they ignore the WHEA errors. I tried all voltages to get it stable without corrected hardware errors but could not achieve it.


----------



## jonRock1992

lawson67 said:


> You can get a stable FCLK of 2000MHz with no WHEA errors? I can't without corrected hardware CPU errors. The PC runs fine and never crashes, but a 2000MHz FCLK is normally unstable for most people


I got a really good bin. I was the first one to get one at my local micro center on launch day lol. I always wondered if they gave the first person a golden bin or something lol. Probably just coincidence.


----------



## lawson67

jonRock1992 said:


> I got a really good bin. I was the first one to get one at my local micro center on launch day lol. I always wondered if they gave the first person a golden bin or something lol. Probably just coincidence.


I have a good bin in the respect that I can run PBO at -25 on all cores bar two, which I have at -20, but a fully stable FCLK of 2000MHz I can't achieve. I do have the 5th fastest graphics score in Timespy with an RX 6800 XT and a Ryzen 7 5800X:
3DMark.com search


----------



## LtMatt

@weleh 
I've been playing with the Toxic XTXH; is it possible to lower fan speeds below 38%? It's a bit too loud for my liking. I guess this is why you changed your fans over.

I have to say the Toxic is a very well-engineered piece of kit, bar the noisy fans.


----------



## Henry Owens

lawson67 said:


> I have a good bin in respect i can run PBO at -25 on all cores bar two i have at -20 but a fully stable FLCK clock of 2000mhz i cant achieve, but i do have the 5th fastest graphic score in Timespy with a RX 6800 XT and a Ryzen 7 5800x
> 3DMark.com search


When I try 2000 FCLK it just reboots while doing OCCT. I also now have 4 sticks, 32GB of memory. I might slightly regret switching from my 5800X, because when I did I was still pretty new to Ryzen; it was probably faster for gaming lol. I'm pretty sure my 5800X could do over 2000 FCLK, and it boosted to 5.1 I think. Before trying the FCLK I was running DDR4 4400 stable and above. I switched to the 5900X just to go all out on my build, and I noticed some benchmarks showing the 5900X slightly ahead of the 5800X. This one is stable at -30 all-core and +200MHz, but it still rarely reaches the 5GHz level when it should be getting to 5.1-something max.
My last Timespy was top 89? It's no top 5 but I guess not bad.


----------



## lawson67

Got me a new best score with my RX 6800 XT and moved myself up to 3rd place on 3DMark Time Spy. I think I can go faster too.
3DMark.com search


----------



## jonRock1992

Nice! I got the fastest Timespy GPU score with my config. 5800X and 6900XT (single GPU). I'm going to try to break 24K tomorrow.


----------



## lawson67

jonRock1992 said:


> Nice! I got the fastest Timespy GPU score with my config. 5800X and 6900XT (single GPU). I'm going to try to break 24K tomorrow.


Fantastic well done mate


----------



## LtMatt

Here is the best I've been able to do with an XTXH Toxic Extreme on 21.7.1.

This is overclocked to the max with a 500W power limit + TDC and SOC TDC at 70W.

Actual GPU power didn't go too much over 425W though from what I saw.

5950X @ 4.8Ghz
6900 XT @ 2800/2110Mhz
21.7.1

Timespy
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Timespy Extreme
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Firestrike Extreme
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Firestrike Ultra
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

That's moved me nicely up the Hall of Fame rankings.

I can clock around 60-70 Mhz higher than my 6900 XT Merc. I think it shows how good my Merc sample is tbh.


----------



## Henry Owens

LtMatt said:


> Here is the best I've been able to do with an XTXH Toxic Extreme on 21.7.1.
> 
> This is overclocked to the max with a 500W power limit + TDC and SOC TDC at 70W.
> 
> Actual GPU power didn't go too much over 425W though from what I saw.
> 
> 5950X @ 4.8Ghz
> 6900 XT @ 2800/2110Mhz
> 21.7.1
> 
> Timespy
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Timespy Extreme
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Firestrike Extreme
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Firestrike Ultra
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> That's moved me nicely up the Hall of Fame rankings.
> 
> I can clock around 60-70 Mhz higher than my 6900 XT Merc. I think it shows how good my Merc sample is tbh.


Hello does changing the SoC in MPT do anything? If so what should I change it to?


----------



## LtMatt

Henry Owens said:


> Hello does changing the SoC in MPT do anything? If so what should I change it to?


I'm not sure it really does much, but see no harm in raising it anyway. Someone on here, might have been Jon, said he saw slightly better scores with it raised. So that was good enough for me.


----------



## Henry Owens

Henry Owens said:


> Hello does changing the SoC in MPT do anything? If so what should I change it to?


What is your gpu voltage set at?


----------



## LtMatt

Henry Owens said:


> What is your gpu voltage set at?


1.2v.


----------



## weleh

How much could you clock on RAM without losing performance on the Toxic XTXH?

Also, you posted FSU twice.


----------



## weleh

I'm still #1 5800X/6900XT on TS thanks to CPU score


----------



## CantingSoup

New version of MPT.








New beta versions of MorePowerTools (MPT) and Red BIOS Editor - BIOS Unlock for RDNA and all frequencies for RDNA2 cards! | igor'sLAB (www.igorslab.de)


----------



## LtMatt

weleh said:


> How much could you clock on RAM without losing performance on the Toxic XTXH?
> 
> Also, you posted FSU twice.


About 2162-2165Mhz.

However I had to run 2110Mhz in synthetics to max out my core clock at 2805Mhz. I can run 2162Mhz in games and see performance increases though.

Here is my 24/7 gaming clock (attached) it is disgustingly fast in games.

Locked 2750Mhz+. 38% max fan speed. Edge around 60, hotspot around 80c with spikes to 85c. 24-25c ambient in my hot man cave. 

Corrected post above to update Firestrike Extreme results.


----------



## Henry Owens

CantingSoup said:


> New version of MPT.
> 
> 
> 
> 
> 
> 
> 
> 
> New beta versions of MorePowerTools (MPT) and Red BIOS Editor - BIOS Unlock for RDNA and all frequencies for RDNA2 cards! | igor'sLAB (www.igorslab.de)


Wow, with this new version, will I be able to increase voltage above 1.175?


----------



## weleh

LtMatt said:


> About 2162-2165Mhz.
> 
> Corrected post above to update Firestrike Extreme results.


Nice, just make sure you're not losing performance.
What VRAM clockspeed does Toxic boost apply? On non XTXH cards it does 2100 Mhz on memory, does it do higher on your card?


----------



## CantingSoup

Henry Owens said:


> Wow, with this new version, will I be able to increase voltage above 1.175?


Only one way to find out 😎


----------



## LtMatt

weleh said:


> Nice, just make sure you're not losing performance.
> What VRAM clockspeed does Toxic boost apply? On non XTXH cards it does 2100 Mhz on memory, does it do higher on your card?


Edited my post above again with more information.

2100Mhz, same as yours.

Yes, will double check but I saw a small gain going from 2124Mhz to 2162Mhz in Timespy in graphics score.

My XTX Merc would start to lose it around 2130MHz, so this sample is a bit better, an extra 30MHz or so.


----------



## Takla

thomasck said:


> Do you guys think a 3900X could bottleneck the 6900 XT at 1440P? I am starting to suspect that I am being bottlenecked by the CPU.


Depends on your target FPS. 120Hz? No. 144Hz? Maybe. 240Hz/Unlimited? Yes.
And just because you don't see 99% GPU usage doesn't mean your CPU is the bottleneck.
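That reasoning (a GPU pegged near 100% while missing the FPS target points at the GPU, while low GPU usage only *suggests* a CPU/engine/frame-cap limit) can be framed as a rough triage rule. The thresholds below are illustrative assumptions, not measurements:

```python
# Toy bottleneck triage based on the reasoning above: it deliberately
# refuses to blame the CPU outright, since low GPU usage can also come
# from engine limits or frame caps.

def likely_bottleneck(gpu_util_pct: float, fps: float, target_fps: float) -> str:
    if fps >= target_fps:
        return "none (target met)"
    if gpu_util_pct >= 95:
        return "gpu"
    return "cpu/engine (investigate)"

print(likely_bottleneck(99, 110, 144))  # gpu
print(likely_bottleneck(70, 90, 144))   # cpu/engine (investigate)
print(likely_bottleneck(60, 150, 144))  # none (target met)
```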


----------



## Henry Owens

Takla said:


> Depends on your target FPS. 120Hz? No. 144Hz? Maybe. 240Hz/Unlimited? Yes.
> And just because you don't see 99% GPU doesn't mean your cpu is the bottleneck.


You might get 20 more fps. If you sell your 3900X it might be a nice pop-in upgrade. Or you could wait for AM5.


----------



## Henry Owens

CantingSoup said:


> Only one way to find out 😎


I feel like it probably won't. So if it doesn't work I just reinstall drivers and it will fix it?


----------



## weleh

deleted


----------



## Takla

Henry Owens said:


> You might get 20 more fps. If you sell your 3900 it might be a nice pop in upgrade. Or you could wait for am4+


Why are you telling me? Learn how to read.


----------



## CantingSoup

Henry Owens said:


> I feel like it probably won't. So if it doesn't work I just reinstall drivers and it will fix it?


Just delete the SPPT and restart.


----------



## Henry Owens

Takla said:


> Why you're telling me? Learn how to read.


Couldn't find his quote, he will read it, relax.


----------



## thomasck

Takla said:


> Depends on your target FPS. 120Hz? No. 144Hz? Maybe. 240Hz/Unlimited? Yes.
> And just because you don't see 99% GPU doesn't mean your cpu is the bottleneck.


Yeah, let's see. I am thinking of upgrading to a 5XXX, but so far the games I play are pegged around 144Hz. Warzone has some hiccups every 10 seconds depending on the match, not every match, so I am not sure if it is the server, RAM timings (still testing) or even the CPU. Enlisted is an example of low performance; it was running better with a Radeon VII + 3900X. Astroneer as well, low performance. RE2 is on the list too with low fps. I was just not expecting this at 1440P. Read low performance as lower performance than expected; as a nicely optimised game, Enlisted should be around 144FPS all the time, but I am getting mostly 80-115FPS, which is weird. I don't really want to get a 5600X (which would be the most ideal option because I just game on this rig), but I also don't want to go from a 12-core CPU to a 6-core one, and on the other hand I don't want to spend 600GBP right now on a brand new 5900X.


----------



## Henry Owens

CantingSoup said:


> Only one way to find out 😎


Just tried it, doesn't work.


----------



## weleh

Anyone with a 6900XT check on HWINFO what Voltage is your VRAM at.










Pretty sure it's supposed to say 1.356V or something


----------



## LtMatt

weleh said:


> Anyone with a 6900XT check on HWINFO what Voltage is your VRAM at.
> 
> View attachment 2517907
> 
> 
> Pretty sure it's supposed to say 1.356V or something


At low/idle load it drops to around 1.256V.


----------



## LtMatt

Takla said:


> Why you're telling me? Learn how to read.


Lol. Takla has zero chill.


----------



## weleh

I absolutely cannot replicate my score of yesterday (23.6K Graphic)...
Been trying whole day today. Even managed my personal best CPU score of 14010! on a 5800X.














Managed a 23.4K graphics score and also noticed memory isn't scaling anymore to 2150 but rather scores best consistently at around 2070, on TS at least.
These cards are so random...

Tried a bunch of MPT stuff too. There are XTX cards on Igor's Lab and Luxx.de doing 24K+ points and I would love to know how... OS tweaks? MPT tweaks we don't know of?

Also, if anyone had any doubts about water vs air: if you keep the card cool enough in terms of hotspot, there's zero scaling to be had. I just had a 3h session with a 9000BTU portable AC blowing onto the card at 400W (40C core and 60C hotspot) and the card showed zero scaling in terms of clockspeed or performance.


----------



## jonRock1992

weleh said:


> I absolutely cannot replicate my score of yesterday (23.6K Graphic)...
> Been trying whole day today. Even managed my personal best CPU score of 14010! on a 5800X.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Managed a 23.4K graphics score and also noticed memory isn't scaling anymore to 2150 but rather scores best consistently at around 2070, on TS at least.
> These cards are so random...
> 
> Tried a bunch of MPT stuff too. There are XTX cards on Igor's Lab and Luxx.de doing 24K+ points and I would love to know how... OS tweaks? MPT tweaks we don't know of?
> 
> Also, if anyone had any doubts about water vs air: if you keep the card cool enough in terms of hotspot, there's zero scaling to be had. I just had a 3h session with a 9000BTU portable AC blowing onto the card at 400W (40C core and 60C hotspot) and the card showed zero scaling in terms of clockspeed or performance.


I agree with the scaling. How did you get your cpu score so high though!? I think my 5800X is temp throttling or something. I have my 6900XT 360mm rad fans as intake, so I think the extra hot air being dumped into my case is lowering my CPU performance.

This is my current setup, except I switched my rear exhaust around to bring some cool air to the EK AIO on my 5800X.


----------



## weleh

Is there a bug currently?

How is this possible...









I scored 21 997 in Time Spy
AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)





23.5K Graphic Score at 2500 Mhz average???


----------



## ZealotKi11er

weleh said:


> Is there a bug currently?
> 
> How is this possible...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 997 in Time Spy
> AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> 
> 
> 
> 
> 23.5K Graphic Score at 2500 Mhz average???


That seems very strange. To break 23K I need a 2800MHz set clock.


----------



## weleh

Is there a new undetected LOD hack we don't know about?


----------



## jonRock1992

I was reading on hwbot that raising the min SOC voltage to 950mv can help squeeze a bit more out of mem oc.


----------



## J7SC

jonRock1992 said:


> I was reading on hwbot that raising the min SOC voltage to 950mv can help squeeze a bit more out of mem oc.
> 
> View attachment 2517925


...I find 2450 min clock also works best for me no matter what driver or MPT setting...for max, it depends on MPT PL and temps on my setup


----------



## HeLeX63

New MPT Beta: _"RDNA2: The setting options for the frequencies have been expanded considerably and pretty much everything that could be disclosed has been disclosed."_

Does this mean non-XTXH GPUs can increase their memory past 2150MHz??? It still didn't work for me.


----------



## LtMatt

weleh said:


> Is there a bug currently?
> 
> How is this possible...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 997 in Time Spy
> AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> 
> 
> 
> 
> 23.5K Graphic Score at 2500 Mhz average???





ZealotKi11er said:


> That seems very strange. To break 23K i need 2800MHz set clk.


I suspect it's just a reporting bug.

I've found that the clock speeds and averages are not always 100% accurate on some runs.



jonRock1992 said:


> I was reading on hwbot that raising the min SOC voltage to 950mv can help squeeze a bit more out of mem oc.


Good find Jon. I will have to try bumping the min SOC voltage to 950mV next time I'm doing some benching to see if more can be gained.


----------



## weleh

HeLeX63 said:


> New MPT Beta: _"RDNA2: The setting options for the frequencies have been expanded considerably and pretty much everything that could be disclosed has been disclosed."_
> 
> Does this mean non-XTH GPU's can increase their memory past 2150MHz ??? It still didn't work for me


No, there's nothing new in MPT that makes any difference, unfortunately.
I've tested it all.

Maybe for games that support ReBAR, since you can now mess with the Infinity Fabric clockspeed, but from my testing it does absolutely nothing.


----------



## jfrob75

weleh said:


> Anyone with a 6900XT check on HWINFO what Voltage is your VRAM at.
> 
> View attachment 2517907
> 
> 
> Pretty sure it's supposed to say 1.356V or something


According to my HWINFO the memory (VDDIO) is steady at 1.356V, no load or loaded.


----------



## LtMatt

jfrob75 said:


> According to my HWINFO the memory (VDDIO) is steady at 1.356V, no load or loaded.


Is your memory frequency running at maximum when idle?


----------



## jonRock1992

I just wanted to report back and say that raising min SOC voltage to 950mV actually did help me go from stable 2130MHz fast timings to 2150MHz fast timings.

New PR for Timespy graphics score: AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## LtMatt

jonRock1992 said:


> I just wanted to report back and say that raising min SOC voltage to 950mV actually did help me go from a stable 2130MHz fast timings to a 2150MHz fast timings.
> 
> New PR for Timespy graphics score: AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


It made no difference for me. After extensive testing, 2150MHz gives the best results in Timespy.


----------



## cfranko

jonRock1992 said:


> I just wanted to report back and say that raising min SOC voltage to 950mV actually did help me go from a stable 2130MHz fast timings to a 2150MHz fast timings.
> 
> New PR for Timespy graphics score: AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


Did memory temps change at all?


----------



## J7SC

LtMatt said:


> It made no difference for me. After extensive testing in Timespy 2150 MHz gives the best results.


2150MHz VRAM with fast timings seems to work without issue (i.e. no loss of efficiency) for me as well; in fact I wish I could push the VRAM a bit higher...

...or better yet, AMD adding GDDR6X for the 6900 XT(/H) 😎


----------



## CS9K

Unigine Superposition "4K Optimized" is MUCH more sensitive to memory speed changes than anything I've found in the 3DMark suite.

I do 3 runs at each memory speed to rule out background tasks. It's worth a shot since y'all are getting nowhere with Time Spy.


----------



## J7SC

I use Unigine Superposition 8K to max out VRAM stress... caution re: excessive MPT PL or inadequate cooling though; Superposition really 'makes the lights flicker' and heats things up.


----------



## weleh

Messing with any of the settings in MPT does nothing on my card, except, of course, increasing the power limit and current limits.

These new drivers, however, despite being more efficient, made my previously super stable 2150MHz (which used to give a performance increase) lose performance, and I have to run around 2050-2070MHz to get the best performance.

This is especially noticeable in Superposition, as someone already mentioned. Time Spy is too inconsistent to give proper results in terms of scaling.

Unless there are combinations of settings or hidden stuff we don't know about, I call placebo.

Another thing I would like to know is how the hell the top 2 graphics-score cards on the TS leaderboard are reference AMD cards, one of them at 2800 and the other at 2660MHz. One of them even has the core at 61C, which would suggest, at best, water cooling, so nothing exotic.

Are people using an EVC to post 24K+ graphics scores on non-XTXH cards?


----------



## ZealotKi11er

weleh said:


> Messing with any of the settings on MPT does nothing on my card, except of course, increasing the Power Limit and Current limits.
> 
> These new drivers however, despite being more efficient, made my previously super stable and performance increase 2150 Mhz lose performance and I have to run around 2050-2070 Mhz to get the best performance.
> 
> This is specially noticeable on Superposition as someone already mentioned. Time Spy is too inconsistent to give proper results in terms of scaling.
> 
> Unless there are combination of settings or hidden stuff we don't know about, I call placebo.
> 
> Another thing I would like to know is how the hell the top 2 Graphic Score cards on TS leaderboard are reference AMD cards, one of them at 2800 and the other at 2660 Mhz. One of them even has the core at 61C which would suggest at best, water cooling so nothing exotic.
> 
> Are people using EVC to post 24K+ Graphic Score on non XTXH cards?


Maybe the early XTX could have been XTXH.


----------



## weleh

Nah, not that.

Look at this score...

I scored 22 154 in Time Spy (AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10): www.3dmark.com

How is this even possible?


----------



## jonRock1992

LtMatt said:


> It made no difference for me. After extensive testing in Timespy 2150 MHz gives the best results.


I think it will only help if you're just about stable at 2150 MHz and need a little something to get you there. The guy who originally posted it and I were both only stable at 2130 MHz before the tweak.

Also, enable SAM. It really helps boost Timespy scores.



weleh said:


> Nah, not that.
> 
> Look at this score...
> 
> I scored 22 154 in Time Spy (AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10): www.3dmark.com
> 
> How is this even possible?


Nothing is strange about that score, unless that's a reference card on air lol. My best GPU score was higher than that, but I have XTXH under water. I had to run 2770/2150 with SAM and +35% PL/TDC/SOC to get that high of a score.


----------



## LtMatt

jonRock1992 said:


> I think it will only help if you're just almost stable at 2150Mhz and need something little to get you there. Me and the guy that originally posted it were both only stable at 2130MHz before the tweak.
> 
> Also, enable SAM. It really helps boost Timespy scores.
> 
> 
> 
> Nothing is strange about that score, unless that's a reference card on air lol. My best GPU score was higher than that, but I have XTXH under water. I had to run 2770/2150 with SAM and +35% PL/TDC/SOC to get that high of a score.


What's your best Timespy score Jon on 21.7.1?


----------



## jfrob75

LtMatt said:


> Is your memory frequency running at maximum when idle?


My memory frequency, according to HWINFO, is running at the set frequency when at idle.


----------



## jonRock1992

LtMatt said:


> What's your best Timespy score Jon on 21.7.1?


23953 for the graphics score. I posted a link for it recently.


----------



## jfrob75

jonRock1992 said:


> I think it will only help if you're just almost stable at 2150Mhz and need something little to get you there. Me and the guy that originally posted it were both only stable at 2130MHz before the tweak.
> 
> Also, enable SAM. It really helps boost Timespy scores.
> 
> 
> 
> Nothing is strange about that score, unless that's a reference card on air lol. My best GPU score was higher than that, but I have XTXH under water. I had to run 2770/2150 with SAM and +35% PL/TDC/SOC to get that high of a score.


I think there is something strange about that score. My best graphics score is 23079 (from my 'I scored 21 804 in Time Spy' run), and as you can see from that result my average clock is 2715 MHz, where the one in question has an average clock of 2523 MHz, an almost 200 MHz difference. So, how does one get a higher graphics score with an almost 200 MHz lower average clock? The only other difference I can see between the two TS results is the graphics driver. My result is from the 21.6.1 driver; the other result appears to be on a newer driver.
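
To put that in points-per-MHz terms (scores and average clocks taken from the two runs; just quick back-of-envelope Python):

```python
# (graphics score, average clock MHz) for the two runs being compared
runs = {
    "mine, 21.6.1 driver": (23079, 2715),
    "theirs, newer driver": (23823, 2523),
}
for name, (score, mhz) in runs.items():
    print(f"{name}: {score / mhz:.2f} points per MHz")
```

That prints 8.50 and 9.44, so a ~11% per-MHz efficiency gap is what the newer driver (and any hidden tweaks) would have to account for.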


----------



## jonRock1992

jfrob75 said:


> I think there is something strange about that score. My best graphics score is 23079 (from my 'I scored 21 804 in Time Spy' run), and as you can see from that result my average clock is 2715 MHz, where the one in question has an average clock of 2523 MHz, an almost 200 MHz difference. So, how does one get a higher graphics score with an almost 200 MHz lower average clock? The only other difference I can see between the two TS results is the graphics driver. My result is from the 21.6.1 driver; the other result appears to be on a newer driver.


If I had to guess, TS is not reporting clock frequencies correctly. Also the new driver gave me like +1000 graphics score points with SAM enabled.


----------



## djtbster

New member here, can't wait to play with the OC.


----------



## J7SC

LtMatt said:


> It made no difference for me. After extensive testing in Timespy 2150 MHz gives the best results.


...are you folks hitting '2150 MHz' (VRAM w/fast timings) right on the nose ? I usually get some variation around 2140+ ...fyi, below a quick Superpos 8K run, but with air cooler back on temporarily as I'm updating the w-cooling loop...PL is on +12 instead of +15% given hotspot temp


----------



## jfrob75

jonRock1992 said:


> If I had to guess, TS is not reporting clock frequencies correctly. Also the new driver gave me like +1000 graphics score points with SAM enabled.


Well I'll be damned, time to try the new drivers. lol


----------



## LtMatt

jfrob75 said:


> My memory frequency, according to HWINFO, is running at the set frequency when at idle.


That's why you don't see 1.256v at idle then for memory.

My memory frequency is currently at 192Mhz @ 1.256v.


jonRock1992 said:


> 23953 for the graphics score. I posted a link for it recently.


My bad, nice score.



J7SC said:


> ...are you folks hitting '2150 MHz' (VRAM w/fast timings) right on the nose ? I usually get some variation around 2140+ ...fyi, below a quick Superpos 8K run, but with air cooler back on temporarily as I'm updating the w-cooling loop...PL is on +12 instead of +15% given hotspot temp
> 
> View attachment 2518089


That's the clock set in RS. 

I believe I get somewhere around 2138 MHz as reported by Timespy, so yeah, it's normal.


----------



## jfrob75

So, I just updated to the latest driver and what a difference in TS: I scored 22 189 in Time Spy. My driver settings are somewhat conservative, with 2675 MHz min freq, 2775 MHz max freq, mem @ 2150 with fast timings. An almost 700 point improvement over my past best graphics score.


----------



## LtMatt

jfrob75 said:


> So, I just updated to the latest driver and what a difference in TS: I scored 22 189 in Time Spy. My driver settings are somewhat conservative, with 2675 MHz min freq, 2775 MHz max freq, mem @ 2150 with fast timings. An almost 700 point improvement over my past best graphics score.


Yes it's amazing what an increase we are all seeing. Are you using an XTXH?

Could you share your mobo bios settings? Decent physics score.


----------



## ZealotKi11er

Did some runs with my Reference 6900 XT.

I ran at 2300MHz to keep it consistent (no power/temp throttling)

Actual clock was about 2240-2245MHz. 


| Core | MEM | PWR | Driver | GPU Score | Tweaks |
|---|---|---|---|---|---|
| 2300/500 | 2000 | 255 +15% | 21.3.1 | 18 730 | |
| 2300/2000 | | | | 18 756 | |
| 2300/2200 | | | | 18 705 | |
| 2300/2200 | 2050 | | | 18 668 | |
| 2300/2200 | 2100 | | | 18 842 | |
| 2300/2200 | 2100+FT | | | 18 977 | |
| 2300/2200 | 2150+FT | | | 18 988 | |
| 2300/2200 | 2130+FT | | | 18 977 | |
| 2300/2200 | 2150+FT | | | 19 009 | Radeon Settings |
| 2300/2200 | 2150+FT | | 21.7.1 | 20 340 | Radeon Settings, Windows High Performance |
| 2300/2200 | 2100+FT | | 21.7.1 | 20 311 | |

(Blank cells = same as the row above.)
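
From those numbers, the like-for-like driver change can be read straight off the two 2150+FT Radeon Settings runs (19 009 on 21.3.1 vs 20 340 on 21.7.1); a quick check:

```python
# Scores from the like-for-like runs above: same 2300/2200 core and
# 2150+FT memory, only the driver differs (21.3.1 -> 21.7.1).
old_score, new_score = 19_009, 20_340
uplift = (new_score - old_score) / old_score
print(f"21.7.1 uplift: {uplift:.1%}")  # prints: 21.7.1 uplift: 7.0%
```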


----------



## jfrob75

LtMatt said:


> Yes it's amazing what an increase we are all seeing. Are you using an XTXH?
> 
> Could you share your mobo bios settings? Decent physics score.


I believe my GPU is an XTXH. It is the GB Aorus 6900 XT Extreme WB. Just checked it with HWiNFO and it is a Navi 21 XTXH.
I'll provide the MB BIOS settings in a later post, but essentially I am running PBO with curve optimizer and memory at 3800 MHz 1:1:1.

Here are my Bios settings.


----------



## J7SC

jfrob75 said:


> I believe my GPU is an XTXH. It is the GB Aorus 6900XT Extreme WB. Just checked it with HWINFO and it is a NAV21 XTXH
> I'll provide the MB bios settings in a later post but essentially I am running using PBO with curve optimizer and memory running at 3800MHz 1:1:1.
> 
> Here are my Bios settings.


That's a nice card, one of if not the best 6900XT, IMO. Do you mind saving the vBios via GPUz and uploading it here, renaming the .rom extension as a .txt ? I have a nice collection I'm checking with MPT re. PL ratios (W, A) but I'm missing that one, and the TechPowerUp db doesn't have a copy


----------



## jfrob75

J7SC said:


> That's a nice card, one of if not the best 6900XT, IMO. Do you mind saving the vBios via GPUz and uploading it here, renaming the .rom extension as a .txt ? I have a nice collection I'm checking with MPT re. PL ratios (W, A) but I'm missing that one, and the TechPowerUp db doesn't have a copy


Well, I have tried to attach the .rom file as a .txt file but it does not seem to work.
It is in the TechPowerUp DB. Here is the link: Gigabyte RX 6900 XT VBIOS


----------



## djtbster

Would I be able to flash my 6900tuf to a 6900 strix lc bios?


----------



## J7SC

jfrob75 said:


> Well, I have tried to attach the .rom file as .txt file but it does not seem to work.


Thanks for trying...if you have the time, can you show some MPT screenies of (stock) PL and frequencies for that yummy card, please ? 'It's for science...'


----------



## LtMatt

jfrob75 said:


> I believe my GPU is an XTXH. It is the GB Aorus 6900XT Extreme WB. Just checked it with HWINFO and it is a NAV21 XTXH
> I'll provide the MB bios settings in a later post but essentially I am running using PBO with curve optimizer and memory running at 3800MHz 1:1:1.
> 
> Here are my Bios settings.


Nice cheers. Yes I know a couple of folks using the same GPU as you and it is XTXH. 

What is your default core clock as shown in Radeon Software > GPU Tuning?


----------



## ZealotKi11er

So, having the XTXH LC with 2310 MHz gives no perf increase over 2150 MHz + FT in Time Spy.


----------



## J7SC

jfrob75 said:


> Well, I have tried to attach the .rom file as .txt file but it does not seem to work.
> It is in TechpowerUp DB. Here is the link Gigabyte RX 6900 XT VBIOS


...tx for the TPU link, just downloaded it, pls disregard MPT screenie request


----------



## weleh

jonRock1992 said:


> I think it will only help if you're just almost stable at 2150Mhz and need something little to get you there. Me and the guy that originally posted it were both only stable at 2130MHz before the tweak.
> 
> Also, enable SAM. It really helps boost Timespy scores.
> 
> 
> 
> Nothing is strange about that score, unless that's a reference card on air lol. My best GPU score was higher than that, but I have XTXH under water. I had to run 2770/2150 with SAM and +35% PL/TDC/SOC to get that high of a score.



Nothing strange? You didn't even open the link, did you?

This is a reference card doing a 23823 graphics score at 2523 MHz average clock speed...
Tell me how there's nothing strange...


----------



## weleh

Or this one, reference card on air...


----------



## jonRock1992

Did you read my comment? I said that it would be strange if it was indeed a reference card on air. But in the pic you just posted, it looks like they adjusted the FCLK on that card. If they made the right tweaks, and they are using the latest driver, I could see it. 3DMark doesn't always report the correct clocks. This is especially true if there is a large variation in clock speed during the test.


----------



## ZealotKi11er

weleh said:


> Or this one, reference card on air...
> 
> View attachment 2518127


There are features disabled which boost performance. Still, 23827 is very high for 2685 MHz, even with the latest driver.


----------



## J7SC

jonRock1992 said:


> Did you read my comment? I said that it would be strange if it was indeed a reference card on air. But the pic you just posted. It looks like they adjusted the FCLK on that card. If they made the right tweaks, and they are using the latest driver, I could see it. 3DMark doesn't always report the correct clocks. This is especially true if there is a large variation in clock speed during the test.


...and in addition, he has a decent PL and apparently is _undervolting_ a bit which also helps, subject to having a good chip and decent cooling


----------



## LtMatt

weleh said:


> Or this one, reference card on air...
> 
> View attachment 2518127


I do wonder if tweaking some of those new options will bring some gains. That score is very high for a reference card, an undervolt and that clock speed.


----------



## jonRock1992

J7SC said:


> ...and in addition, he has a decent PL and apparently is _undervolting_ a bit which also helps, subject to having a good chip and decent cooling


Yeah his GPU is highly tuned. He optimized his clocks, PL,TDC, SOC, FCLK, adjusted his min and max SOC voltage, his min GFX voltage was increased...etc. If that was with the latest driver and a cold enough ambient temp, I could see it being legit.


----------



## weleh

Clearly you guys aren't paying attention to the screens.

First of all, he has deep sleep disabled (all the DS check marks), which means nothing on the card will downclock at idle (clock speed, infinity fabric, etc).

Second of all, power limit numbers are meaningless. You can set them all to 1000 and watch your scores stay the same as if you set them at 450, because your card will not hit them.

Third of all, it's on water, so nothing special. Many cards on water here and none do that score at 2650 MHz average.
There's no variation because DS is disabled, so the card will hold itself at least at the min value, which is low.

The undervolt does nothing because the card does not hit 1.175 V at all during high loads. It has too much Vdroop, so the offset does not help, especially if you're on water and have decent temps.

FCLK changes do nothing at all, try them yourself. Infinity fabric is related to SAM, and SAM does almost nothing for Time Spy.

I've tested a million settings, and I also watched HWiNFO during various runs with various settings; the reported clock speed from 3DMark is pretty consistent with what the cards do.


----------



## weleh

Cold temps do nothing. His core is at 60C, so clearly nothing exotic.
I've done runs with a portable AC blowing onto the card, GPU at 10C, and it's exactly the same as with an ambient of 25C.
If you do not hit hotspot throttle, the card doesn't care.


----------



## jonRock1992

Dude. SAM makes a HUGE difference for me in Timespy with the latest driver. It's like an automatic +700 points when I enable it. With a higher FCLK, maybe it could add another 100 points; I haven't tested it yet. Also, for my card specifically, power limit numbers make a difference. If I go too high then I get instability. +35% PL/TDC/SOC is the sweet spot for me; +40% PL/TDC/SOC is less stable at 2770 MHz than +35% for my GPU.


----------



## J7SC

...in my experience and on my setup, undervolting also makes a difference. These modern GPUs adjust almost continuously given their key input parameter readings rather than stay at a locked voltage, and if you have a decent chip, undervolting does help, depending also on your MPT PL etc.

...as to FCLK, that too can make a difference, depending on your setup. For example, when I first got my 3090 Strix OC earlier in the year, I had it running on an older 5960X (Intel 8c/16t HEDT) before moving it to an AMD 3950x and then AMD 5950X; bumping bus speeds from 100 to 103 instantly yielded around 300 points in PR with everything else equal (ie GPU settings) on the 5960X. 

...just because some people outscore others that have a 'higher MHz' doesn't mean they're cheating, they may just know more about overall system efficiency, or have a better chip sample.


----------



## weleh

J7SC said:


> ...in my experience and on my setup, undervolting also makes a difference. These modern GPUs adjust almost continuously given their key input parameter readings rather than stay at a locked voltage, and if you have a decent chip, undervolting does help, depending also on your MPT PL etc.
> 
> ...as to FCLK, that too can make a difference, depending on your setup. For example, when I first got my 3090 Strix OC earlier in the year, I had it running on an older 5960X (Intel 8c/16t HEDT) before moving it to an AMD 3950x and then AMD 5950X; bumping bus speeds from 100 to 103 instantly yielded around 300 points in PR with everything else equal (ie GPU settings) on the 5960X.
> 
> ...just because some people outscore others that have a 'higher MHz' doesn't mean they're cheating, they may just know more about overall system efficiency, or have a better chip sample.


I'm talking GPU score here not overall.

I have the fastest 5800X on Time Spy without exotic cooling (LN2/DICE), so I kinda know what I'm talking about.

I have a 23600 run myself, so it's not like I started overclocking today. I was also #1 on my setup prior to the 7.1 drivers.
Even with the 7.1 drivers, and despite having only managed a 23600 graphics score run, I'm still #4 (technically #2, because the first 3 runs are all from the same guy).

If you don't find it odd that reference cards are doing a 24000 graphics score at 2650 MHz average, or aren't at least curious about it, then I'm sorry to ask, but what are we doing here?

Anyway, I'll keep investigating and testing on my own setup, since I'm in covid isolation so I've got time.


----------



## LtMatt

ZealotKi11er said:


> There are features disabled which boost performance. Still 23827 is very high for 2685MHz even with latest driver.


Which features specifically, and have you tried this?


----------



## jonRock1992

weleh said:


> I'm talking GPU score here not overall.
> 
> I have the fastest 5800X on Time Spy without exotic cooling (LN2/DICE) so I kinda know what I'm talking about.
> 
> I have a 23600 run myself so it's not like I started overclocking today. I was also #1 on my setup previous to 7.1 drivers.
> Even with 7.1 drivers and despite having only managed a 23600 Graphic Score run, I'm still #4 (techinically #2because all the 3 first runs are from the same guy).
> 
> If you don't find it odd that reference cards are doing 24000 Graphic Score at 2650 Mhz average or you're not curious about it at least, then I'm sorry to ask but what are we doing here?
> 
> Anyway, I'll keep investigating and testing on my own setup since I'm doing covid isolation so got time.


It's definitely an outlier, but he has shown a lot of the tweaks he's done. He probably did some other tweaks as well. We'll never know unless someone contacts him and asks.


----------



## weleh

jonRock1992 said:


> It's definitely an outlier, but he has shown a lot of the tweaks he's done. He probably did some other tweaks as well. We'll never know unless someone contacts him and asks.


There are a couple more runs of other reference cards doing the same.
I'm not exactly sure what they are doing, because I have tested all the new features in MPT and nothing does anything special or worth mentioning.

Look at this run:

I scored 23 511 in Time Spy (Intel Core i9-7980XE, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11): www.3dmark.com

Reference card doing 2800 MHz average.
The guy even says "no EVC", which I 99% doubt, unless he binned a bunch of reference cards to get one that could do those clocks stable.

There's certainly something going on with these runs and I'm not sure what it is, maybe some OS tweaks, maybe LOD tweaks, I'm not sure.

There's a few runs on TS with reference cards doing insane scores with pretty low clockspeed.


----------



## J7SC

weleh said:


> (...)
> If you don't find it odd that reference cards are doing 24000 Graphic Score at 2650 Mhz average or you're not curious about it at least, then I'm sorry to ask but what are we doing here?
> (...)


...there's a difference between being curious and jumping to conclusions, as I have learned in my decade-plus of oc'ing, which includes HWBot (below) many moons ago. The poster of the score you referenced seems to have done a decent job of sharing a number of his tweaks. Perhaps there's funny stuff going on, perhaps he's just really good and produced such an outlier...I certainly won't jump to any automatic conclusion with what I have seen so far





----------



## ZealotKi11er

weleh said:


> There are a couple more runs of other reference cards doing the same.
> I'm not exactly sure what they are doing because I have tested all the new features on MPT and nothing does anything special and worth mentioning.
> 
> Look at this run:
> 
> I scored 23 511 in Time Spy (Intel Core i9-7980XE, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11): www.3dmark.com
> 
> View attachment 2518136
> 
> 
> Reference card doing 2800Mhz average.
> The guy even says "no EVC" which I 99% doubt unless he binned a bunch of reference cards to get one that could do those clocks stable.
> 
> There's certainly something going on with these runs and I'm not sure what it is, maybe some OS tweaks, maybe LOD tweaks, I'm not sure.
> 
> There's a few runs on TS with reference cards doing insane scores with pretty low clockspeed.



That is pretty fast for TS, 2850 at 1.175 V, on a card which is not even XTXH.


----------



## weleh

J7SC said:


> ...there's a difference between being curious and jumping to conclusions, as I have learned in my decade plus of oc'ing which includes HWBot (below) many moons ago. The poster of the score you referenced seemed to have done a decent job of sharing the number of tweaks. Perhaps there's funny stuff going on, perhaps he just really good to produce such an outlier...I certainly won't jump to any automatic conclusion with what I have seen so far
> 
> 
> 
> 
> 
> 
> 
> View attachment 2518135


No idea where this jumping to conclusions comes from.
Read my posts. I question the scores, not the integrity of the overclockers, or whatever you're trying to insinuate.


----------



## Haplo181

New here, finally upgraded to a 6900 XT. First run, just playing with settings, got me this score.
Card is a 6900 XT Toxic.
Max power curve, 1.175 V, 2800 MHz max and 2700 min, memory at 2150 with normal timings.

I scored 21 367 in Time Spy (Intel Core i9-10900K, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10): www.3dmark.com


----------



## jonRock1992

Haplo181 said:


> New here, finally upgraded to a 6900xt. First run and just playing with settings got me this score.
> Card is a 6900xt toxic.
> Max power curve, 1175 volts, 2800mhz max and 2700 min, memory at 2150 normal timings.
> 
> I scored 21 367 in Time Spy (Intel Core i9-10900K, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10): www.3dmark.com


Very nice! With MPT you should be able to get around 24k GPU score.


----------



## Haplo181

jonRock1992 said:


> Very nice! With MPT you should be able to get around 24k GPU score.


I will at some point. Still reading about MPT so I don't mess stuff up.


----------



## ptt1982

Need advice from the watercooling experts here. I've got yet again the same problem with my Red Devil 6900 XT: the GPU edge and junction temp delta has gotten out of hand; it's 40C in the worst scenarios, and I've got a relatively cool apartment with the aircon on, maybe 23C ambient or so. Any advice on why this might happen?

*Rig*
-Alphacool Red Devil 6800xt/6900xt waterblock using their default thermal paste and the pads that came with the block. I put on a very thin layer of paste. I installed the block strictly following the guide; however, one of the VRAM thermal pads ended up a bit diagonal, as it somehow didn't fit in the right place
-Using 21.7.1 drivers, 2550mhz min 2650mhz max core, 2110mhz mem fast timings, MPT: 350W/375TDP, +15% PL in Adrenaline. I think the drivers upped the temps, but they might have gotten higher before and I just didn't test them before driver update
-Edge/Junction Delta goes above 30C in TS once power load goes above 360W or so, at 380W it is 35C, and at 405W 40C.
-I've got Corsair XD3 Gen.2 pump at 3500rpm, two radiators 240 (top) and 360 (front) in Lianli Lancool II Performance Mesh, both rads are corsair XR5 30mm. 240 rad has 2x best noctuas at 1650rpm push config and the 360 has push pull 6 arctic p12 at 1720rpm (I learned later that that's too many fans for the front rad, live and learn)
-With the above I see the GPU edge temp max out at 60C, with spikes up to 100C junction in Timespy GT2; the junction hovers above 80C constantly in non-vsynced 4K games, and it used to be 10-15C lower. CPU goes up to 63C in the TS test (5600X with a slight voltage boost; its absolute high spikes are at 1.31v, but the cores mostly stay under 1.27v)
-No warranty on the GPU; I can keep using it, but I don't like temps that high. By no means should it be that high, and the delta is concerning
-Corsair Softtubes, no leaks, pump well under highest points, flow from pump to GPU to 240 rad to cpu waterblock (also corsair) to 360 rad to pump

My hypothesis is that the thermal paste quality is bad, or I applied too little of it, or that one thermal pad is loosening the mount, or I have a clogged loop. Or, alternatively, this is exactly the cooling performance I can expect from the above parts, and the drivers have upped the thermals. I still think the delta is too large, but as it stays under 110C and does not affect performance, I can live with it.

IMPORTANTLY: This is what I did with it when I had the loop installed: (I know this was a mistake) I put lukewarm tap water in it first to flush out any crap from the parts (not distilled and not too hot, I made a mistake here), then I realized my mistake, and used distilled water to flush out the tap water, and after that I used clear coolant 1000ml flushing out the distilled water as well, and finally leaving the loop very much filled (1.5cm space on top of the pump.)

What would you suggest given the above, should I do a full maintenance on all the parts, or just leave as it as is?

I have four weeks left on my (paid) time off before I start working again, so I have time to flush & drain the loop, remount gpu/cpu blocks, and do a full-fledged maintenance. I'm pretty new at this so now that the system is working at least, I'm a bit wary of doing this as I might actually break something (any experience in remounting waterblocks would be appreciated, especially how to deal with thermal pads that came with the block).
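
For reference, a back-of-envelope check on the delta-per-watt from the numbers above (board power isn't all die power, so treat the ratios as relative, not absolute):

```python
# (board watts, edge-to-junction delta degC) from my observations above
points = [(360, 30), (380, 35), (405, 40)]

# To first order, delta/watt should stay roughly constant with load;
# a ratio that climbs points at a mount/contact problem rather than
# pure power scaling.
for watts, delta in points:
    print(f"{watts} W -> {delta / watts * 1000:.0f} mC/W")
```

That works out to roughly 83, 92 and 99 mC/W: rising with load rather than flat, which fits the suspicion that the mount (or pad stack-up) is part of the problem.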


----------



## J7SC

A quick fyi: I noticed that MPT beta '4' was out. It still needs work, but I could oc my VRAM well past 2150. In fact, when I let the auto VRAM overclock in Wattman do it, it came up with 2260 MHz for the VRAM on its own  ...I even 'seemed' to have 1.200v going on my non-H (Wattman accepted the 'lock') but could only run light rendering - crasheritis after...further, when you slide the VRAM above 2150, it will take out the 'max GPU MHz' setting, sadly. Still, some progress by the good folks behind MPT ! Clearly, they're on the right track.











@ptt1982

While you had an 'unusual method' of flushing (while already installed?), I wouldn't panic about that if you subsequently flushed it thoroughly with distilled water and the appropriate coolant.

The Hotspot delta is very high, though it will increase _significantly with the PL_ (ie 380 W) you mentioned anyhow. Re. flow, you might want to invest in a flow meter but even a visual inspection re. flow into the reservoir should tell you about blockage. I've only used D5 pumps for the last 10 years or so (usually two in series) rather than DDCs so I can't really comment directly on that setup as to what is a good flow for it. I have read though that that pump can be a bit restrictive.

Finally, is your GPU mounted vertically? I ask because if you use thin (even highly-rated) thermal paste AND a block that has some stand-off tolerance issues, you might get an uneven mount and subsequently uneven die coverage / thermal paste displacement. I am not sure if that is the case with your setup, but the last year or so has seen a host of quality and tolerance problems with many GPU blocks (including my EKWB / 3090)...here is another OCN link which alludes to that...using washers (w/ plastic on the bottom re. avoidance of conductivity) can help, btw, but make sure not to over-tighten the block and also strive for even mounting pressure

If you still have a few weeks to re-do things, you might want to just give yourself some peace of mind by taking the loop apart, flushing each piece properly (as much as I don't think it is the problem) and also remount the block - perhaps after using some imprint paper and cheap thermal paste first for an imprint to see about die coverage.

I suggest you also try some 'thicker' thermal paste such as Gelid GC Extreme (it is highly rated). Finally, if you are not sure about thermal pad sizes (or your block just has odd tolerances), you might want to try thermal putty (below, 10 W/mK) for the VRAM and power stages, though obviously not on the GPU die itself. Thermal putty is good stuff and it conforms to the available space w/o running out, even in a vertical GPU mount.

But it is worth noting again that a high PL such as 380 W will increase the delta hotspot temp no matter what...


----------



## LtMatt

Someone I know from another forum just got what looks to potentially be a golden XTXH. It's the Gigabyte Aorus Extreme, and it has the highest out-of-the-box boost clock I've ever seen for an XTXH.









The previous highest was one of the XTXH Mercs I had, at 2629 MHz.


----------



## weleh

Haplo181 said:


> New here, finally upgraded to a 6900xt. First run and just playing with settings got me this score.
> Card is a 6900xt toxic.
> Max power curve, 1175 volts, 2800mhz max and 2700 min, memory at 2150 normal timings.
> 
> I scored 21 367 in Time Spy (Intel Core i9-10900K, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10): www.3dmark.com


Nice one!

Is that the Extreme or the normal version?


----------



## ptt1982

Deleted. (can't get the reply function working.)


----------



## Haplo181

weleh said:


> Nice one!
> 
> Is that the Extreme or the normal version?


Extreme


----------



## ptt1982

J7SC said:


> A quick fyi: I noticed that MPT beta '4' was out. It still needs work, but I could oc my VRAM well past 2150. In fact, when I let the auto VRAM overclock in Wattman do it, it came up with 2260 MHz for the VRAM on its own  ...I even 'seemed' to have 1.200v going on my non-H (Wattman accepted the 'lock') but could only run light rendering - crasheritis after...further, when you slide the VRAM above 2150, it will take out the 'max GPU MHz' setting, sadly. Still, some progress by the good folks behind MPT ! Clearly, they're on the right track.
> 
> View attachment 2518184
> 
> 
> 
> @ptt1982
> 
> While you had an 'unusual method' to flush (while already installed ?), I wouldn't panic about that if you subsequently flushed it thoroughly with distilled water and appropriate cooling liquid.
> 
> The Hotspot delta is very high, though it will increase _significantly with the PL_ (ie 380 W) you mentioned anyhow. Re. flow, you might want to invest in a flow meter but even a visual inspection re. flow into the reservoir should tell you about blockage. I've only used D5 pumps for the last 10 years or so (usually two in series) rather than DDCs so I can't really comment directly on that setup as to what is a good flow for it. I have read though that that pump can be a bit restrictive.
> 
> Finally, is your GPU mounted vertically? I ask because if you use thin (even highly-rated) thermal paste AND a block that has some stand-off tolerance issues, you might get an uneven mount and subsequently uneven die coverage / thermal paste displacement. I am not sure if that is the case with your setup, but the last year or so has seen a host of quality and tolerance problems with many GPU blocks (including my EKWB / 3090)...here is another OCN link which alludes to that...using washers (w/ plastic on the bottom re. avoidance of conductivity) can help, btw, but make sure not to over-tighten the block and also strive for even mounting pressure
> 
> If you still have a few weeks to re-do things, you might want to just give yourself some peace of mind by taking the loop apart, flushing each piece properly (as much as I don't think it is the problem) and also remount the block - perhaps after using some imprint paper and cheap thermal paste first for an imprint to see about die coverage.
> 
> I suggest you might also try some 'thicker' thermal paste such as Gelid GC Extreme (it is highly rated). Finally, if you are not sure about thermal pad size (or your block just has odd tolerances), you might want to try thermal putty (below, 10 W/mk) for the VRAM and power stages though obviously not the GPU die itself. Thermal putty is good stuff and it conforms to the available space w/o running out, even in a vertical GPU mount.
> 
> But it is worth noting again that a high PL such as 380 W will increase the delta hotspot temp no matter what...
> 
> View attachment 2518188


Thank you very much for the assistance and insights. 

I've got the GPU in a horizontal position, using the original washers that came with the Alphacool block. 

Something seems to be wrong. It did crash today once, first time ever, black screen, and had to manually reboot using power button. With further game testing, in Hitman 2 at 4K fps unlocked I saw a spike to 105C junction when the GPU Edge temp was at 60C, that's a 45C delta, and that was around 373W power consumption. This is late at night in a high rise apartment building air con targeting ambient temp of 22C. The delta is constantly getting higher whereas the Edge is as it was before at max 60C, so I need to do a remount. Maybe there's a spot the paste didn't reach, or the mount has loosened up. Ambient temp for GPU Edge is 30C and junction 35C, but the junction shoots up extremely aggressively under load.

Meanwhile, CPU temps haven't changed, but I think I will still reinstall the CPU waterblock as well, because I did change the motherboard without draining the loop and it was a bit chaotic and haphazard. I repasted the CPU then, but didn't have too much control over it due to loop being connected and full of water the whole time, and me and my wife in a strange position holding PC parts with both hands. I did a leak test afterwards for a couple of hours, and have used the computer for five weeks after that without problems, except for now. The problems are with the GPU though, and seem unrelated.

I have all that I need to reinstall everything: electric compressed air duster to drain the loop completely, paper towels, tap water that I will turn into distilled water, 1g of Hydronaut, time, and a little bit of patience (the latter one I have the least of all.) I have to use the existing thermal pads that came with the waterblock because I'm afraid I will fall into a rabbit hole again if I'm trying to find the right aftermarket type (that is the reason I put this thing under water, the heatsink is incredibly hard to remount without the original pads, plus the temps were originally very high.)

I also might look into the thermal putty, it looks something that I would need. I want to try and fix this without spending a penny first, though!
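For anyone following along, the delta everyone keeps quoting is just junction minus edge; a minimal Python sketch (the ~30C "time to remount" threshold here is a rule of thumb from this thread, not an official AMD figure):

```python
# Junction ("hotspot") minus edge temperature, as reported by monitoring tools
def hotspot_delta(junction_c: float, edge_c: float) -> float:
    return junction_c - edge_c

# Numbers from the Hitman 2 run above: 105C junction at 60C edge
delta = hotspot_delta(105, 60)
print(delta)  # 45
if delta > 30:  # rough rule-of-thumb threshold used in this thread
    print("delta suggests a remount / repaste is worth trying")
```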


----------



## jonRock1992

ptt1982 said:


> Thank you very much for the assistance and insights.
> 
> I've got the GPU in a horizontal position, using the original washers that came with the Alphacool block.
> 
> Something seems to be wrong. It did crash today once, first time ever, black screen, and had to manually reboot using power button. With further game testing, in Hitman 2 at 4K fps unlocked I saw a spike to 105C junction when the GPU Edge temp was at 60C, that's a 45C delta, and that was around 373W power consumption. This is late at night in a high rise apartment building air con targeting ambient temp of 22C. The delta is constantly getting higher whereas the Edge is as it was before at max 60C, so I need to do a remount. Maybe there's a spot the paste didn't reach, or the mount has loosened up. Ambient temp for GPU Edge is 30C and junction 35C, but the junction shoots up extremely aggressively under load.
> 
> Meanwhile, CPU temps haven't changed, but I think I will still reinstall the CPU waterblock as well, because I did change the motherboard without draining the loop and it was a bit chaotic and haphazard. I repasted the CPU then, but didn't have too much control over it due to loop being connected and full of water the whole time, and me and my wife in a strange position holding PC parts with both hands. I did a leak test afterwards for a couple of hours, and have used the computer for five weeks after that without problems, except for now. The problems are with the GPU though, and seem unrelated.
> 
> I have all that I need to reinstall everything: electric compressed air duster to drain the loop completely, paper towels, tap water that I will turn into distilled water, 1g of Hydronaut, time, and a little bit of patience (the latter one I have the least of all.) I have to use the existing thermal pads that came with the waterblock because I'm afraid I will fall into a rabbit hole again if I'm trying to find the right aftermarket type (that is the reason I put this thing under water, the heatsink is incredibly hard to remount without the original pads, plus the temps were originally very high.)
> 
> I also might look into the thermal putty, it looks something that I would need. I want to try and fix this without spending a penny first, though!


Yeah I'd say putty would be your best bet. Also, are you torquing your screws tight enough?


----------



## jonRock1992

J7SC said:


> A quick fyi: I noticed that MPT beta '4' was out. It still needs work, but I could oc my VRAM well past 2150. In fact, when I let the auto VRAM overclock in Wattman do it, it came up with 2260 MHz for the VRAM on its own  ...I even 'seemed' to have 1.200v going on my non-H (Wattman accepted the 'lock') but could only run light rendering - crasheritis after...further, when you slide the VRAM above 2150, it will take out the 'max GPU MHz' setting, sadly. Still, some progress by the good folks behind MPT ! Clearly, they're on the right track...
> 
> View attachment 2518188


Can you give a link to beta 4? I can't find it.


----------



## J7SC

jonRock1992 said:


> Can you give a link to beta 4? I can't find it.


...here you go (scroll down a bit and you'll see MPT 1.3.7 Beta 4 and its download link)


----------



## lestatdk

J7SC said:


> ...here you go (scroll down a bit and you'll see MPT 1.3.7 Beta 4 and its download link)



What did you change to get the memory to 2260? If I change the maximum memory frequency overdrive limit from 1075 to 1200 and set the memory to run at even 2160, it cripples my card :/


----------



## lestatdk

With mem at 2160 I get a graphics score of 5000. After resetting the SPPT I get 22k...


----------



## J7SC

lestatdk said:


> With mem at 2160 I get a graphics score of 5000  After resetting sppt I get 22k...


...yeah, as mentioned in my earlier post, the moment you go over 2150, it will drop your max GPU down to safemode speed (this after just increasing the VRAM frequency in MPT beta 4)

...I also had 'some' luck with loading the Aorus Xtreme Waterforce bios, then dropping GPUv from 1.2 to 1.175 before activating...After reboot, Wattman opened up all kinds of new / different options. But still way too buggy and crash-prone.

Still, MPT is clearly drilling deeper and getting to hitherto unavailable parameters

EDIT> I haven't got enough time re. work projects right now, but I wonder if uninstalling Wattman and just using MSI AB might work - weekend project perhaps...or just wait for MPT beta 5


----------



## lestatdk

J7SC said:


> ...yeah, as mentioned in my earlier post, the moment you go over 2150, it will drop your max GPU down to safemode speed (this after just increasing the VRAM frequency in MPT beta 4)
> 
> ...I also had 'some' luck with loading the Aorus Xtreme Waterforce bios, then dropping GPUv from 1.2 to 1.175 before activating...After reboot, Wattman opened up all kinds of new / different options. But still way too buggy and crash-prone.
> 
> Still, MPT is clearly drilling deeper and getting to hitherto unavailable parameters
> 
> EDIT> I haven't got enough time re. work projects right now, but I wonder if uninstalling Wattman and just using MSI AB might work - weekend project perhaps...or just wait for MPT beta 5


Was thinking that. I actually have an MSI card as well. Downside to MSI AB is you can't enable the "fast timing" setting for the memory


----------



## J7SC

J7SC said:


> ...yeah, as mentioned in my earlier post, the moment you go over 2150, it will drop your max GPU down to safemode speed (this after just increasing the VRAM frequency in MPT beta 4)
> 
> ...I also had 'some' luck with loading the Aorus Xtreme Waterforce bios, then dropping GPUv from 1.2 to 1.175 before activating...After reboot, Wattman opened up all kinds of new / different options. But still way too buggy and crash-prone.
> 
> Still, MPT is clearly drilling deeper and getting to hitherto unavailable parameters





lestatdk said:


> Was thinking that. I actually have an MSI card as well. Downside to MSI AB is you can't enable the "fast timing" setting for the memory


...given the latest entry fields in MPT B4 and the fact the Radeon bios are loaded into Win registry, there must be a switch somewhere for a registry hack...then again, if I can pick up 100 MHz on the VRAM w/o other efficiency issues, sacrificing fast timings 'may' be worth it, depending on the app...


----------



## lestatdk

J7SC said:


> ...given the latest entry fields in MPT B4 and the fact the Radeon bios are loaded into Win registry, there must be a switch somewhere for a registry hack...then again, if I can pick up 100 MHz on the VRAM w/o other efficiency issues, sacrificing fast timings 'may' be worth it, depending on the app...


Agree. I did see someone stating that it's the driver that detects too-high voltages/frequencies. If that is really the case, then going with MSI AB will not fix the issue of it downclocking to 500 MHz.


----------



## J7SC

ptt1982 said:


> Thank you very much for the assistance and insights.
> 
> I've got the GPU in a horizontal position, using the original washers that came with the Alphacool block.
> 
> Something seems to be wrong. It did crash today once, first time ever, black screen, and had to manually reboot using power button. With further game testing, in Hitman 2 at 4K fps unlocked *I saw a spike to 105C junction *when the GPU Edge temp was at 60C, that's a 45C delta, and that was around 373W power consumption. This is late at night in a high rise apartment building air con targeting ambient temp of 22C. The delta is constantly getting higher whereas the Edge is as it was before at max 60C, so I need to do a remount. Maybe there's a spot the paste didn't reach, or the mount has loosened up. Ambient temp for GPU Edge is 30C and junction 35C, but the junction shoots up extremely aggressively under load.
> (...)


...doesn't surprise me that you got a shut-down...per below, a bios readout (H bios in this case) with temp limits. A remount might definitely be a good idea and I reiterate that you might want to do the paper-imprint test mount first to see about coverage (and if your block has stand-off issues).

...another quick 'free' test is to use a credit card and/or another perfectly level item to check for concavity or convexity of your GPU die and the corresponding area on the block.


----------



## CS9K

ptt1982 said:


> Lots of information about your GPU troubles


Howdy! Let's check a few things first, because SOMEthing is preventing your water block from contacting the GPU's die properly. It could be one of many things, or a combination of all of them

- First off, did you use the thermal pads that came with the water block? If not, why? If not, do you still have them? If you have them, dismount and use them.
Rationale: With the reference card and EK's Quantum Vector blocks, for example, one can NOT use aftermarket thermal pads like Thermal Grizzly Minus Pad 8, because the thermal pads that come with the EK block are VERY compressible, where TG's MP8 pads are not. Using TG's pads on a reference card with any cooler will prevent the die from contacting the block properly.

- Thermal Paste Application. I personally use Gelid GC-Extreme for bare-die applications. Perhaps read up on how to apply it, then give it a try before going with thermal putty.
Rationale: I learned this one the hard way with prior hardware, and thankfully haven't had any trouble with my RX 6900 XT reference w/ EK block, nor has the b/f had trouble with his 3080 FE on water. Thin and/or oil-based emulsion pastes like NT-H1, Kryonaut, etc., don't work well on bare-die applications that get really hot, as the heat cycles will "pump out" the grease over time. The thicker, non-oil-based emulsions of pastes like Hydronaut, GC-Extreme, Ectotherm, et al. are much more resistant to pump-out than their oil-based counterparts. However, you can't have clearance issues as described above when using thicker pastes: you have to use a spreader tool and apply a layer of GC-Extreme over the die, with enough paste that you can't see the die anymore, but _no_ more. An uneven spreading is unforgiving under a water block, as the paste doesn't flow very much at all, so it won't fill in dips, especially one left in the center of the die.

But! If you get the application of GC-Extreme right, you will be blessed with 15C delta-T at 400W card power draw, and (my) max hotspot temp of 80C in TS #2, which it never gets close to in 4k120 gaming loads.

TL;DR: Take a step back. Take stock of how you've mounted your water block, and which thermal pads and thermal paste you've used in that process. Think logically about temperatures: there's either a flow problem or a die-contact problem (more likely, given your delta-T). Figure out why you have that contact issue, take some of the advice above and in other posts to heart, and go from there.

I wish the best of luck to you. Let us know what you find 💗

*edit, February 9, 2022:

Later on in this thread, I posted a video of how I applied Gelid GC-Extreme for some folks, I'll add it in here since people are still finding this post helpful :3


----------



## weleh

Digging today for a couple of hours, I managed to hit a 23800+ Graphics Score on a custom run with GT1+GT2 only, then tried to replicate it on a "leaderboard" run and managed a 23739 Graphics Score.

Anyway been playing with the new beta and with the Strix LC XTXH bios. 

Basically, in case you didn't know, the XTXH bios increases VRAM voltage to 1.406V from the 1.356V of the XTX cards. This might help some of you gain a bit more performance if you're not completely stable at 2150 MHz, which I wasn't (ECC kicking in and performance dropping a bit, at least on the 7.1 driver).

What I did was load up the XTXH bios, input the XTX values under voltage / memory DPM etc, basically match 1:1 all the XTXH specific settings to XTX.

I also disabled deep sleep, increased VSOC min a bit, increased fCLK and fCLK boost to 2000, increased vCLK and dCLK by 20 MHz, and enabled FT2 but left the driver at FT1 (rumour says it still tightens timings, not sure if true).

Also, the card is very finicky with min/max clock speed, and it's not the same for everyone. Some cards like a super high min clock speed, others super low, others in between. I did my benches at 2450-2735. Going too much higher or lower either dropped performance or made the card boost past 2730 MHz during GT2 and crash the benchmark.

So for those of you looking to bench/squeeze more performance, test all these things.
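If you'd rather test combinations systematically than at random, they're easy to enumerate; a rough Python sketch (the ranges are illustrative, and `bench()` is a placeholder you'd replace with an actual Time Spy run):

```python
# Illustrative min/max clock grid; swap bench() for a real benchmark run
def bench(min_mhz: int, max_mhz: int) -> float:
    raise NotImplementedError("run Time Spy here and return the graphics score")

candidates = [(lo, hi)
              for lo in range(2400, 2551, 50)   # min clock sweep (MHz)
              for hi in range(2700, 2801, 50)   # max clock sweep (MHz)
              if hi - lo >= 150]                # keep a sensible gap
print(len(candidates), "combinations to test")
```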


----------



## 6u4rdi4n

The latest drivers look really good so far. Managed a new personal best of 20 308 points in Time Spy, compared to my last best of 19 243. GPU score is 23 662 vs 22 102. Not too bad considering I only have an i9-9900K, no?
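For context, the relative gains from those scores work out like this (plain Python arithmetic, numbers from the post above):

```python
# Time Spy scores before/after the driver update, from the post above
old_total, new_total = 19243, 20308
old_gpu, new_gpu = 22102, 23662

total_gain = (new_total / old_total - 1) * 100
gpu_gain = (new_gpu / old_gpu - 1) * 100
print(f"total: +{total_gain:.1f}%  gpu: +{gpu_gain:.1f}%")  # total: +5.5%  gpu: +7.1%
```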


----------



## LtMatt

I saw someone mention that Superposition 8K was the go to test for checking memory performance gains. Below are my results from Superposition.

*Superposition:*
2100MHz=6774
2112MHz=6764
2124MHz=6797
2150MHz=6847
2162MHz=6859
2170MHz=6832

2770/2162MHz, 21.7.1 (not my highest possible clocks on core)








As a comparison, in my Timespy graphics score testing I start losing performance at 2162MHz, with 2150MHz offering the best gains. So for me at least, Timespy seems a better gauge of memory performance gains.
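For the curious, the Superposition sweep above is easy to tabulate and the peak easy to spot; a quick Python sketch (scores copied from the runs listed):

```python
# Superposition 8K score vs VRAM clock (MHz), from the runs above
scores = {2100: 6774, 2112: 6764, 2124: 6797, 2150: 6847, 2162: 6859, 2170: 6832}

best = max(scores, key=scores.get)  # clock with the highest score
print(best, scores[best])  # 2162 6859 -- past this point the score falls off
```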


----------



## Haplo181

I know not many people have them, but has anyone changed the fans on the 6900xt Toxic radiator? The plugs from the video card do not allow normal fans. Is there any way to control the pump?


----------



## LtMatt

Haplo181 said:


> I know not many people have them, but has anyone changed the fans on the 6900xt Toxic radiator? The plugs from the video card do not allow normal fans. Is there any way to control the pump?


@weleh has changed them. I decided to keep the stock ones as I like the RGB, and they are silent sub-1000 RPM, which is enough to cool 400W+ at max clock speed.


----------



## J7SC

...not sure if you folks have been following the drama re. Amazon's New World beta and its bricking of select 3090s. So far, it seems to concern mostly select EVGA 3090 Ampere, but until the real cause is identified, it pays to be cautious with high-power (400W+) GPUs with that app. While my 500W+ 3090 is an Asus Strix, I'm still paranoid about it, and the same holds w/ my custom-MPT-PL 6900XT. Apparently, it happens in the menu when it can exceed several thousand fps.

...per below, I've been playing around w/ MPT beta 4 a bit to see what max I can get (just for the fun of it) with my 6900XT non-H  To be sure, this is using just a light artifact scanning 3D app, and those values are certainly NOT 3DM TS or Superposition stable (I wish). Also, the GPU and VRAM max speeds were achieved separately (given MPT B4's current issue on GPU speed when pushing VRAM past 2150 on non-H).

...finally, I noticed how hot the backplate can get on my 6900XT even in regular use / stock MPT, especially now as I'm back on air while updating the water-loop. In fact, it gets hotter than the backplate on my 3090 - which is notorious for its 'heater backplate' due to GDDR6X and the double-sided arrangement for the 24 GB on the card. A hot backplate isn't necessarily a bad thing (it means it does transfer heat well) but I'm adding the big heatsink pictured below - it came in a pack of two anyway (Amazon, heatsink for audio amps). I'm using an extra-wide sheet of Thermalright 12.8 W/mK thermal pads to mount this heatsink.


----------



## ptt1982

Hey gang! 

Finally disassembled the custom loop, and of course accidentally spilled water on the mobo, so I have to let the parts dry 48hrs before turning this thing on. I had the power off for 130 minutes before the spill, but I actually had PSU cables attached to the mobo (PSU not connected to the wall), so I wonder if I shorted something. I shall see. At least I took the 6900xt out of it before that, so that's saved. I'm sure the PC is fine anyway. I'm such a f-up with this stuff, but I love it!

Meanwhile, I took photos of the GPU waterblock mount and it’s quite easy to see why the temps were spiking. It was a bad mount, and probably too little thermal paste, unevenly spread.

One thing I noticed on the PCB is that it has pre-attached black washers (see last pic, a bit hard to see but there’s 4x of them) around the X pattern screwholes. As per the instructions, I put on top of that the waterblock washers. There are three screws which do not have those black washers. I wonder, should I only use three washers, because the black attached washers add height? Or perhaps I should try and remove the black pre-installed washers? Advice would be much appreciated!

Another thing you may notice is that there’s one tiny thermal pad on the backside (pic 3) which seemed to get extra pressure compared to other pads, I do wonder if the height is also correct or not. The guide was for 6800xt, although the block is designed for both cards (and has room for three pins, this is common knowledge and their website says the same thing.)

What do you guys think!


----------



## LtMatt

J7SC said:


> ...not sure if you folks have been following the drama re. Amazon's New World beta and its bricking of select 3090s. So far, it seems to concern mostly select EVGA 3090 Ampere, but until the real cause is identified, it pays to be cautious with high-power (400W+) GPUs with that app. While my 500W+ 3090 is an Asus Strix, I'm still paranoid about it, and the same holds w/ my custom-MPT-PL 6900XT. Apparently, it happens in the menu when it can exceed several thousand fps.
> 
> ...per below, I've been playing around w/ MPT beta 4 a bit to see what max I can get (just for the fun of it) with my 6900XT non-H  To be sure, this is using just a light artifact scanning 3D app, and those values are certainly NOT 3DM TS or Superposition stable (I wish). Also, the GPU and VRAM max speeds were achieved separately (given MPT B4's current issue on GPU speed when pushing VRAM past 2150 on non-H).
> 
> ...finally, I noticed how hot the backplate can get on my 6900XT even in regular use / stock MPT, especially now as I'm back on air while updating the water-loop. In fact, it gets hotter than the backplate on my 3090 - which is notorious for its 'heater backplate' due to GDDR6X and the double-sided arrangement for the 24 GB on the card. A hot backplate isn't necessarily a bad thing (it means it does transfer heat well) but I'm adding the big heatsink pictured below - it came in a pack of two anyway (Amazon, heatsink for audio amps). I'm using an extra-wide sheet of Thermalright 12.8 W/mK thermal pads to mount this heatsink.
> 
> View attachment 2518422
> 
> 
> 
> View attachment 2518421


Yes, I saw the drama involving that game. Looks like a few 3090s are not up to the task of the high menu FPS. The same could probably happen to AMD GPUs if peeps are running a silly high power limit tbf. Not a game I am interested in, thankfully.


----------



## weleh

As far as I could research, it's a design flaw with EVGA, shouldn't affect any other card on the market. It's also unconfirmed if people were actually using XOC bioses on their cards when it happened or if stock cards are also affected.

Let's not forget that, earlier after release, 3090s from EVGA were already dying prior to all of this due to a bad voltage-controller design. They later released a new PCB revision with it fixed. Are the cards that are bricking the older revision? Well, we don't know.

Igor from Igor's Lab says it's actually not the voltage-controller area that's affected but the ICX fan controller.

Too many questions, too few answers. EVGA and Nvidia are supposedly looking at it, and so far no cards besides EVGA's FTW3 (or cards with ICX tech) are blowing up.


----------



## weleh

Haplo181 said:


> I know not many people have them, but has anyone changed the fans on the 6900xt Toxic radiator? The plugs from the video card do not allow normal fans. Is there any way to control the pump?


I have changed them to Noctua NF A12x25's. 

The headers Sapphire uses are ARGB so I guess any ARGB fans would fit in there. I have mine plugged onto the motherboard and controlled via BIOS curve so it doesn't bother me because NF A12x25s at 1300 RPM are enough to cool 400W on this card at very low and inaudible noise levels.

About the pump, you cannot control it. I spoke with Sapphire_Edd and he told me the engineers said the pump runs at 100% 24/7, which is expected and the way to go; otherwise the pump would only last a third as long as it does now.


----------



## weleh

New video about the issue... Goddamn...


----------



## LtMatt

weleh said:


> I have changed them to Noctua NF A12x25's.
> 
> The headers Sapphire uses are ARGB so I guess any ARGB fans would fit in there. I have mine plugged onto the motherboard and controlled via BIOS curve so it doesn't bother me because NF A12x25s at 1300 RPM are enough to cool 400W on this card at very low and inaudible noise levels.
> 
> About the pump, you cannot control it. I spoke with Sapphire_Edd and he told me the engineers said the pump runs at 100% 24/7, which is expected and the way to go; otherwise the pump would only last a third as long as it does now.


I hope the pump lasts a long time.

I had to take my Toxic apart as i noticed the Delta between Edge and Junction started going to 30c + when overclocked.

When i opened it up i discovered that the thermal paste was drier than one of Gandhi's flip flops.

I've ordered some Thermal Grizzly and will re-paste it.

For now i just had to try and use the old dry paste and spread it as best as possible, even though it barely spreads, it's that dry.

Nonetheless this dropped 5c off the 30c Delta which has given me hope that a proper re-paste will improve results further.


----------



## weleh

Before applying thermal paste to GPU dies I like to heat the paste a bit by putting it inside cellophane and then into a cup of very hot water.
It makes the paste much more liquid and malleable, making for a perfect application.

Regarding the delta between edge and junction, mine's between 15C and 20C depending on the load, at 22-24C ambient.


----------



## Haplo181

Well, been trying to break a 23k Timespy graphics score and just can't. I see my hotspot going over 74 degrees and the card clocks down. Using MPT I cut the voltage from 1.2 to 1.175 to help with temps, without much change. Thought maybe changing the fans on the radiator would help. RAM at 2150; I can't do tight timings at anything above 2000 on the RAM. Wondering if I need to reapply paste on the core.

Delta is between 25 and 35 degrees from edge to Hotspot.

Best graphic score so far is 22,968.

Just noticed 2 stickers on 2 of the waterblock screws on the backside. If I were to repaste, do I lose the warranty?


----------



## TexasAVI03

LtMatt said:


> Yes I saw the drama involving that game. Looks like a few 3090's are not up to the task of the high menu FPS. Same could probably happen to AMD GPUs if peeps are running a silly high power limit tbf. Not a game I am interested in thankfully.


Very Odd!

In COD BO CW I see 900-1000+ FPS during menu change black screens.

My Red Devil 6900XT (3 plug) has always made a low-pitched garbled sound once under load in game.


----------



## Ipak

ptt1982 said:


> Hey gang!
> 
> Finally disassembled the custom loop, and of course accidentally spilled water on the mobo, so I have to let the parts dry 48hrs before turning this thing on. I had the power off for 130 minutes before the spill, but I actually had PSU cables attached to the mobo (PSU not connected to the wall), so I wonder if I shorted something. I shall see. At least I took the 6900xt out of it before that, so that's saved. I'm sure the PC is fine anyway. I'm such a f-up with this stuff, but I love it!
> 
> Meanwhile, I took photos of the GPU waterblock mount and it’s quite easy to see why the temps were spiking. It was a bad mount, and probably too little thermal paste, unevenly spread.
> 
> One thing I noticed on the PCB is that it has pre-attached black washers (see last pic, a bit hard to see but there’s 4x of them) around the X pattern screwholes. As per the instructions, I put on top of that the waterblock washers. There are three screws which do not have those black washers. I wonder, should I only use three washers, because the black attached washers add height? Or perhaps I should try and remove the black pre-installed washers? Advice would be much appreciated!
> 
> Another thing you may notice is that there’s one tiny thermal pad on the backside (pic 3) which seemed to get extra pressure compared to other pads, I do wonder if the height is also correct or not. The guide was for 6800xt, although the block is designed for both cards (and has room for three pins, this is common knowledge and their website says the same thing.)
> 
> What do you guys think!
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2518415
> View attachment 2518416
> View attachment 2518417
> View attachment 2518418
> 
> View attachment 2518420


Had the same mount problem with EKWB; both the stock paste and Hydronaut squeeze out after a while, causing spiking temps. Now I'm on much thicker Kryonaut and will observe, but so far it's holding fine after a month.


----------



## LtMatt

weleh said:


> Before applying thermal paste to GPU dies I like to heat the paste a bit by putting it inside cellophane and then into a cup of very hot water.
> It makes the paste much more liquid and malleable, making for a perfect application.
> 
> Regarding delta between edge and junction, mine's about 15C and 20C depending on the load at 22-24C ambient.


I just repasted mine and I'm quite happy with the results.

Before repasting, running Timespy at 375-425W I would see a delta of 30c between edge and junction.

After trying to spread the old paste, that reduced the delta to around 25c.

With a fresh application of Kryonaut, the delta is now 19-20c at 375-425W.

Not sure it can get much better than that tbh. Heart-in-mouth time waiting for it to power up a-okay and then checking temps.


----------



## ptt1982

Ipak said:


> Had the same mount problem with EKWB; both the stock paste and Hydronaut squeeze out after a while, causing spiking temps. Now I'm on much thicker Kryonaut and will observe, but so far it's holding fine after a month.


Oh, I almost bought Kryonaut, but as everyone recommended I went for Hydronaut again. I thought I had some, but I used it all up on the CPU. I think I will just add a whole lot more of it, and I have to decide whether to use only three washers or all seven, because the X screwholes already have three pre-installed.

I also have the issue of having spilled 1.5-month-old Corsair coolant from the loop onto the motherboard's VRM and capacitors (maybe two small spoonfuls), but I soaked the spots with 99% IPA and left it under natural sunlight after it evaporated. I should probably take the whole mobo out and hang it upside down, but I gave it the electric air duster treatment and couldn't see any moving water anywhere. Just in case, I'll let it rest 72hrs and hope the water evaporates. I'm sure the mobo is fine because the PC wasn't hooked up, but just in case.

Watercooling is lovely! You get to wait three days and, while at it, can get out into nature, watch movies, and read books!

Meanwhile, I'm going to wait for opinions on the 6900xt waterblock reassembly, washer usage and paste application based on the photos before I put it together. The GPU is waiting on my desk, but I want to hear the smart people's opinion here. I have three days before I start soloing again.

If I was working and this happened, I'd be stressed out of my mind, but since I'm on holiday this is just about switching gears… ha!


----------



## weleh

www.3dmark.com

24,215 Graphics Score, back to the #1 spot for the 5800X/6900XT combo.


----------



## LtMatt

weleh said:


> www.3dmark.com
>
> 24,215 Graphics Score, back to the #1 spot for the 5800X/6900XT combo.


That's a very impressive graphics score for an XTX.

Did you figure out the magical tweaks using MPT 4?


----------



## weleh

New record,

CPU score letting me down, can't get 14000 like my best run.









www.3dmark.com


----------



## lestatdk

weleh said:


> www.3dmark.com
>
> 24.215 Graphic Score, back to #1 spot 5800X/6900XT combo.



Wow, that's a nice score. Wish I could have a similar result, but apparently there's no waterblock planned for my MSI 6900 XT Gaming X.


----------



## weleh

LtMatt said:


> That's a very impressive graphics score for an XTX.
> 
> Did you figure out the magical tweaks using MPT 4?


I did figure out some stuff, but it only led me to a 23700 graphics score.
For these runs I'm using an EVC2SX and adding a Vcore offset.


----------



## LtMatt

weleh said:


> I did figure out some stuff but It only led me to 23700 Graphic Score.
> For these runs I'm using an EVC2SX and I'm adding a Vcore offset.


Nice, do you have to solder that?

Any interesting tweaks you can share with MPT for us folk?


----------



## jfrob75

I finally broke a 24K graphics score in Timespy (overall: I scored 22 640 in Time Spy).
That was achieved with a max frequency setting of 2805 MHz and a min frequency setting of 2700 MHz. The memory frequency was set to 2100 MHz; anything higher and it would not finish. Based on GPU-Z, the max chip power draw was 400 W. The MPT power/current settings are 420, 400, 70. I had RS set to +10% power.
Also, yesterday I applied LM to the GPU and cold plate of the WB. This reduced the delta between GPU temp and hot spot temp to under 20 deg C. I had been seeing deltas of 30 deg C plus. It also appears to have dropped temps overall by 8-10 deg C.


----------



## LtMatt

jfrob75 said:


> I finally broke 24K graphics score in Timespy I scored 22 640 in Time Spy.
> That was achived with max freq. setting of 2805 MHz. and min freq. setting of 2700 MHz. The memory frequency was set to 2100 MHz. anything higher and it would not finish. Based on GPU-Z the max Chip power draw was 400 W. The MPT power/current settings are 420, 400, 70. I had RS set to +10% power.
> Also yesterday I applied LM to the GPU and Cold plate of the WB. This reduced the delta to under 20 deg C between GPU temp and Hot Spot temp. I had been seeing deltas of 30 deg C plus.


Well done, that is exactly the same graphics clocks (2805/2705/2100MHz) I managed with my Toxic, and our (graphics) scores are a ball hair apart.
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Perhaps our GPUs were separated at birth?

Good job on the re-paste. I have not rerun my tests since doing mine, which improved my delta also; however, I don't expect it to make any difference to my score.


----------



## jfrob75

What is also interesting is that when the test fails back to the main window and I check the max frequency captured by GPU-Z, it is above the max frequency set in RS. So this tells me that there is still some kind of issue with the boosting algorithm in the driver. I have seen this behavior in the past but did not think much about it at the time. I'm paying more attention to it after watching a Buildzoid YouTube video where this happened to him as well. You would think it would be simple to ensure the driver does not allow the GPU clock to exceed the max set value.


----------



## jonRock1992

ptt1982 said:


> Oh I almost bought Kryonaut, but as everyone recommended I went for Hydronaut again. I thought I had some, but I used it up for the CPU. I think I will just add a whole lot more of it, and I have to think if I will use only three washers, or all seven, because the X screwholes already have three pre-installed.
> 
> I also have the issue of spilling 1.5 month old corsair coolant from the loop on the motherboard’s Vram and capacitators (maybe two small spoonfuls), but I soaked the spots with 99% IPA and put under natural sun after it evaporared. I should probably take the whole mobo out to hang it upside down, but I gave it the electric air duster treatment and could not at least see any moving water anywhere. Just in case, I’ll let it rest 72hrs and hope the water evaporates. I’m sure the mobo is fine because the PC wasn’t hooked up, but just in case.
> 
> Watercooling is lovely! You get to wait three days and, while at it, can go in the nature, watch movies, and read books!
> 
> Meanwhile, I’m going to wait opinions on the 6900xt waterblock reassembly, washer usage and paste application based on the photos before I put it together. The GPU is waiting on my desk, but I want to hear the smart people’s opinion here. I have three days before I start soloing again.
> 
> If I was working and this happened, I’d be stressed out of my mind, but am on a holiday this is just about switching gears… ha!


With my red devil ultimate I left the black washers as they seemed glued on, and I just used the washers that came with my wb where there wasn't a black one. I wouldn't stack washers on top of the pre-installed ones.


----------



## ZealotKi11er

I can't seem to get over 2810 with my card.


----------



## cfranko

I scored 20 305 in Time Spy


AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 32694 MB, 64-bit Windows 10
www.3dmark.com

Why is my graphics score so low? People are getting 23000-24000. I'm on 21.7.1 drivers. My MPT settings are: minimum SoC voltage 950mV, SoC TDC limit 63A, power limit 375 watts. Minimum frequency is 2450MHz and maximum frequency is 2600MHz; 2630MHz maximum frequency crashes, and I didn't try 2610-2620. Is there a way I could improve this score?


----------



## ZealotKi11er

cfranko said:


> I scored 20 305 in Time Spy
> 
> 
> AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 32694 MB, 64-bit Windows 10
> www.3dmark.com
>
> Why is my graphics score so low? People are getting 23000-24000. Im on 21.7.1 drivers. My mpt settings are minimum soc voltage 950, TDC SoC limit 63, power limit 375 watts. Minimum frequency is 2450 and maximum frequency is 2600. 2630 mhz maximum frequency crashes didnt try 2610-2620. Is there a way I could improve this score?


Those getting 23000-24000 are using 2700MHz+.


----------



## J7SC

FYI for those who like to replace thermal pads on their GPU: Igor's Lab has some neat comparisons of different W/mK pads on GDDR6X VRAM. Results for GDDR6 (i.e. the 6900XT) might be slightly less dramatic, but still...

source


----------



## ZealotKi11er

J7SC said:


> fyi for those who like to replace thermal pads on their GPU, Igor's lab has some neat comparisons re. different W/mK on GDDR6X VRAM. Results for GDDR6 (ie 6900XT) might be slightly less dramatic, but still...
> 
> source


How much are they saving using 3W/mK pads? I probably have to do it for my 3080 TUF, as the memory does run hot.


----------



## CS9K

ptt1982 said:


> Hey gang!











Beautiful GPU! I have circled the areas related to the two issues I believe you were having:

Uneven mounting causes patterns like this, where one end of the card (where the mount gap is too wide) has a lot of wet/smooth paste, while the other end has patterned, dry paste where most of it squished out and the rest "pumped out".
Thin paste moves around a LOT when it's at the mercy of a GPU die capable of chewing through 350W+ just by itself. I personally feel the thinner, oil-based paste exacerbated the problem above.

You +rep'd my post on thermal paste choice; I really appreciate that, all of you! Once you have your mount worked out, it may be worth your time to consider a different paste, too!








[Official] AMD Radeon RX 6900 XT Owner's Club
Someone I know from another forum just got what looks to potentially be a golden XTXH, it was the Gigabyte Aorus Extreme and it has the highest out of the box boost clock I've ever seen for XTXH. The next previous highest was one of the XTXH Mercs I had at 2629Mhz.
www.overclock.net

_edit_ I see you got Hydronaut, excellent!



jonRock1992 said:


> With my red devil ultimate I left the black washers as they seemed glued on, and I just used the washers that came with my wb where there wasn't a black one. I wouldn't stack washers on top of the pre-installed ones.


Also this!


----------



## Haplo181

So, repaste done on my Toxic Extreme. This card is about the easiest ever to expose the GPU core, but my heart was still in my mouth till it booted. Delta between core and hotspot dropped from 35 degrees to 19-21 degrees. Finally broke past a 23k graphics score.


----------



## LtMatt

Haplo181 said:


> So repast done on my toxic extreme. This card is about easiest ever to expose GPU Core, still heart in mouth till it booted. Delta between core and Hotspot dropped from 35 degrees to 19 to 21 degrees. Finally broke and past 23k graphics test.


Yes, it was really easy to do. Sounds like you got near-identical results to mine with my Toxic Extreme.

Was your paste completely dry like mine? Wish I had taken some pics now, wonder how many Toxics are all dried out.


----------



## CS9K

LtMatt said:


> Yes it was really easy to do sounds like you got near identical results to me with my Toxic Extreme.
> 
> Was your paste completely dry like mine? Wish I had taken some pics now, wonder how many Toxics are all dried out.


That's unfortunate to hear; Sapphire may have cheaped out on the thermal compound on their ultimate GPU. But Hydronaut, GC-Extreme, or another great compound will get you there!


----------



## LtMatt

CS9K said:


> That's unforunate to hear that Sapphire may have cheaped out on thermal compound on their ultimate GPU. But, Hydronaut, GC-Extreme, or another great compound will get you there!


To be fair, it appears to be the normal, really thick, clay-like paste most AIBs use. It always seems to dry out really quickly; not sure why.

I actually wish Sapphire had used the Thermal pad found on the MBA and Radeon VII cards. No need for maintenance then.


----------



## Haplo181

It looked dried out, and mine had so much of it. I used some Kryonaut.

I am somewhat impressed with the tool kit that came with the card. That is nice.

Tonight I will try pushing it again. I know my 10900K can do 5.4 all-core for a Time Spy run. Now to figure out what settings my card likes and doesn't. On the run after the repaste I was able to do 2124 on memory with tight timings, and that was without the card clocked as high and with voltage cut to 1.175V. So I'm hoping to get near, or break, a 24k graphics score.


----------



## HeLeX63

My latest score on TS. 6900XT Red Devil w/ Alphacool waterblock.

2650MHz Max, 2550MHz Min.

CPU locked at 4.8GHz all core.

Average GPU frequency is around 2560MHz for entire run.

How does the GPU and CPU score stack with other people ?


----------



## ptt1982

Happy days: as a result of the tips I got, changing the flow direction as the GPU waterblock instructions said, and repasting the CPU, I've got a loop that is much, much better. *The 6900xt edge/junction delta is 15-20C* now in TS GT2 (fresh run, probably moves to around 20-25C over time), and the *max peak I saw was 67C junction, edge was at 48C* (dropped from 105C junction and 65C edge). *CPU temps dropped 8C* as well. I also hit a new TS record due to lower temps allowing more OC headroom (edited this post four times due to going past record scores).

Now this is what I expected from watercooling, incredibly happy! *Thank you all so much for the help!*


----------



## LtMatt

^ Excellent result.


----------



## HeLeX63

LtMatt said:


> ^ Excellent result.


Thanks dude. It's kinda disappointing that I can't do any higher than 2650MHz max on the core. I've seen others hit 2750 or higher...


----------



## ptt1982

HeLeX63 said:


> Thanks dude. Its kinda disappointing that I can't do any higher than 2650MHz max on the core. Seen others hit 2750 or higher...


I've got the same card under water, and get 300 points less than you do with the same clocks. You've got yourself a great 6900XT Red Devil card, sir.


----------



## HeLeX63

ptt1982 said:


> I've got the same card under water, and get 300 points less than you do with the same clocks. You've got yourself a great 6900XT Red Devil card, sir.


Thx. I think that's just variance in the run. Looks to be the same as yours, except my CPU score is much higher (12 cores vs 6 cores).


----------



## ptt1982

HeLeX63 said:


> Thx. I think that's just variance in the run. Looks to be the same as yours, except my CPU score is much higher (12 cores vs 6 cores).


I tried to find info online about how Time Spy graphics scores change based on the CPU with the same GPU. I didn't find any comparisons, but I would presume your 5900X renders a couple of extra frames at 1440p at that high fps vs my 5600X, and that in turn could reflect positively on your graphics score.

Edit: I realized I'm the 4th in the world in Timespy with 6900xt + 5600X combo right now, this is without XTXH and no overclock on CPU. Hmm...


----------



## ZealotKi11er

ptt1982 said:


> I tried to find info online about how the timespy graphic scores change based on CPU, but with same GPU. Didn't find any comparisons, but I would presume your 5900X renders a couple of extra frames at 1440p at that high fps vs my 5600X, and that in turn could reflect positively on your Graphics score.
> 
> Edit: I realized I'm the 4th in the world in Timespy with 6900xt + 5600X combo right now, this is without XTXH and no overclock on CPU. Hmm...


Nobody combos 5600x + 6900 xt.


----------



## ptt1982

ZealotKi11er said:


> Nobody combos 5600x + 6900 xt.


I play at 4K60 so I don't need anything better than 5600X.


----------



## weleh

So I've been messing the past few days with MPT betas and general overclocking using Time Spy as a benchmark on my 6900XT Toxic XTX.
There's some obvious stuff everyone knows, and then there are new things that might or might not do anything for you, but you're free to try.

I'll try to cover everything in this small post for everyone interested in trying to get better benchmarking scores or better gaming performance.
For this, we're going to use More Power Tool and the AMD driver tuning utility.

First of all, I assume everyone knows how to get More Power Tool set up and running but if not, here's a super fast and easy guide I've made that covers the basic install+power limit mod.









MORE POWER TOOLS AS FAST AS POSSIBLE FOR RDNA2


MORE POWER TOOL EZ PZ GUIDE FOR RDNA2 GPUS by Weleleh @overclocking discord. 1. DOWNLOAD MORE POWER TOOL: https://www.igorslab.de/installer/MorePowerTool_Setup.exe 2. DOWNLOAD GPU-Z: https://www.techpowerup.co...




docs.google.com





So, once you have MPT set up, here are the settings I've messed with that might come in handy for those trying to squeeze a bit more out of the cards.

On the first menu, you can uncheck the following to disable deep sleep on the card. This trick was used before the 5.1 drivers hotfixed the stuttering issue some people experienced in CPU-heavy scenarios where the card would go to sleep.

Again, this can hypothetically be helpful during benchmarks because it prevents the card from downclocking during transitions and therefore supposedly helps with frametime and framerate consistency.










Next up we have the Overdrive Limits section. Here you can change Memory Timing Control from 1 to 2. This enables a second timing option in the driver, but it's unstable for everyone. However, rumour has it that enabling this and leaving the card on Fast Timing 1 in the driver actually improves the timings further. I cannot 100% attest to this, but I felt like it does indeed help.










Next up we have the Power and Voltage section. This is where the magic happens and where most of the gains will come from, since these cards are heavily power capped.
Most of the 6900XTs I've seen, including my own, scale up to 400/410W. They can obviously pull much higher (I've had mine pull at least 480W, and at 500W I get an OCP shutdown from my PSU, which is only a 750W unit). However, to pull more than this you either have to have an XTXH die (higher Vcore under load) or you have to manually mess with the Vcore using something like an EVC2SX, as I have.

Some people have also reported higher stability on their VRAM and core from messing with min VSOC and min VCore on this page. I personally haven't noticed anything from this, even though I run min VSOC at 0.9V.

For the limits themselves, I haven't noticed any particular scaling with them on my own card. My opinion is that if you don't hit those limits, you're going to be fine. You should only go higher if your card is throttling. The SOC limit is nowhere near its cap at stock, so you can even leave it alone. The other two can be messed with, because if you're pushing over 2600 MHz you're going to be power throttled during heavy-load scenarios such as a benchmark or high-resolution gaming.










Next up we have the frequency section; this one has more settings exposed. I've personally messed with only 4 of these settings: I increased vCLKmax by 20 MHz and dCLKmax by 20 MHz, and set fCLKmin/max to 2000 MHz and fCLKboostfreq to 2000 MHz too. Due to run-to-run variance I cannot attest whether these settings do anything positive for the scores, but since fCLK is related to SAM, and since SAM supposedly helps with performance in benchmarks and some games, running a higher fCLK, if stable, should be beneficial. Again, feel free to test and report back.










I didn't personally mess with anything else inside MPT. There's a beta 5 released today or so that also adds V/F curve behaviour and the hardcoded Vdroop associated with it, but reports on another forum suggest it does nothing. The cards have a very droopy LLC; that's why you never see 1.175V unless you're idling. During heavy benchmarks it will Vdroop to around 1.1 to 1.12V depending on your clockspeed and binning (better cards will maintain better voltage levels and droop less).
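That droopy LLC behaves like a simple linear load line. As a rough sketch (the load-line resistance below is a made-up illustrative number, not a measured value for any 6900 XT):

```python
# Illustrative load-line (Vdroop) model: V_load = V_set - I * R_LL.
# R_LL here is a hypothetical effective load-line resistance chosen
# for illustration, NOT a measured 6900 XT value.

def vdroop(v_set: float, current_a: float, r_ll_ohm: float) -> float:
    """Estimate the voltage seen at the die under load."""
    return v_set - current_a * r_ll_ohm

# 1.175 V set point, ~300 A die current, 0.25 mOhm effective load line:
print(round(vdroop(1.175, 300.0, 0.00025), 3))
```

With a 1.175V set point and a few hundred amps of die current, even a fraction of a milliohm of effective load line lands you in the 1.10-1.12V region described above, which is also why lower temps (lower resistance, lower leakage current) help a little.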

I want to add something else regarding how these cards scale with temperature. I already know these cards, generally speaking, do not care about being on water or air, as long as your hotspot/core is safe; by safe I mean hotspot below 110C, core below 100C and memory below 100C on XTX cards, and 10-20 or so degrees lower if you're on XTXH.

However, there is something to keep in mind here. These cards have a certain Vdroop related to their LLC, and heat directly affects this Vdroop, just as on AMD CPUs and many other integrated circuits. So while going from air to water might not be too relevant (see how BZ is near the Hall of Fame graphics scores on air cooling), if you're not hard-modding the card's Vcore, then the lower the temp, hypothetically speaking, the better the performance.

So we've got MPT out of the way; now on to the drivers. I won't type much more about drivers because I expect people to know the basics already. However, let me just tell you how broken the boost algorithm is and how it does not respect the limits imposed by Wattman. For this reason, messing with the minimum clockspeed becomes super important. I don't have any magic numbers; however, some cards prefer a 100 MHz difference between max and min, others way more than this, like a 200 to 400 MHz difference.

What I would suggest for those who care is to run HWiNFO during custom Time Spy runs and observe clockspeed behaviour. You can easily determine what's too much and too little for the min clock, because at that point the effective clockspeed will drop.

Anyway I don't have anything else to add. Feel free to add stuff and your own experiences.


----------



## CS9K

ptt1982 said:


> Happy days: as a result of the tips I got + changing flow direction as per the GPU waterblock instructions said + repasting the CPU, I've got a loop that is much much better. *The 6900xt edge/junction delta is 15-20C* now in TS GT2 (fresh run, probably moves to around 20-25C over time), and the *max peak I saw was 67C junction, edge was at 48C *(dropped from 105C junction, and 65C edge)*. CPU temps dropped 8C* as well. I also hit a new TS record due to lower temps allowing more OC headroom (edited this post four times due to going past record scores.)
> 
> Now this is what I expected from watercooling, incredibly happy! *Thank you all so much for the help!*
> 
> View attachment 2518506


Yay! I am _so_ happy for you! Hopefully you can relax a bit and start enjoying your kickass new GPU!

Here is my best TS run so far, for everyone: I scored 20 876 in Time Spy


Reference RX 6900 XT
EK Quantum Vector block
Gelid GC-Extreme paste
EK included pads
420W card power limit as set in MPT; no other MPT changes
16ga PCIe 8-pin power cables (buying the crimp tool for custom, homemade PSU cables was the best $40 I've ever spent)
2715MHz max frequency, 500MHz min frequency (changing min freq never changed anything for me)
2150MHz Memory speed; Fast timings
The settings above are what I daily drive. I don't change settings for the sake of benchmarking, I tune everything in my PC for daily driver push-button stability, and that's it.










I don't have a particularly blessed sample, but I'm not complaining, the card runs great with my LG CX 48" OLED panel.



weleh said:


> On the first menu, you can uncheck the following to disable deep sleep on the card. This trick was used before the 5.1 drivers hotfixed the stuttering issue some people experienced in CPU-heavy scenarios where the card would go to sleep.
> 
> Again, this can hypothetically be helpful during benchmarks because it prevents the card from downclocking during transitions and therefore supposedly helps with frametime and framerate consistency.


Ayyyy, it's good to see the Deep Sleep trick I discovered making the rounds. I originally posted it to Reddit (which is where I imagine it FINALLY caught the attention of the devs), then hardwareluxx dot DE, then Igor's Lab, now here! Neat!

weleh is correct though, as of 21.6.1, the stuttering that the downclocking was causing has been fixed, thank goodness.


----------



## J7SC

...have been doing some 'snooping' of H vs non-H variants, using what are supposedly identical PCBs / VRMs etc by the same manufacturer. The differences in the rom (69 diffs) aren't earth shattering, but it would be nice to find the switches for voltage control (>1.175) and VRAM (>2150)...not holding my breath though; MPT 7, if/when it comes, will probably make that easier...










Also, with MPT, be aware that TDC (Amps) relates to the physical VRM structure (including, but not limited to, the power-stage rating; for example, the 90 Amp stages of the ASRock OC Formula vs the 70 Amp stages of most others).

As to 'how hot', I think it is prudent to look at the max temps shown in the vbios. Water-cooling can and will help with that, but only to a point. As others have already observed, for hotspot you want to make sure you have decent thermal paste (I prefer Gelid or other 'thicker' pastes to avoid issues over time). Liquid metal of course also works, but it is tricky re. protection of other components (conformal or LET coating) and also depends to some extent on mounting orientation.


----------



## weleh

All the cards have +1000A VRMs so it's fine.

I'm more concerned about my own PSU, and that's why I stopped testing today. I was doing a 2800 MHz run at 1.2V Vcore and tripped OCP on my PSU at over 500W reported (which in reality would be much higher). Actually, I had the PL locked to 480W and the card pulled much more, which is probably why it shut off. Had to power cycle the PSU to boot.

So yea, until I upgrade I'm not sure I'll be pushing the limits much more.


----------



## CS9K

J7SC said:


> ...have been doing some 'snooping' of H vs non-H variants, using what are supposedly identical PCB / VRMs etc by the same manufacturer. The differences in rom (69 diff) aren't earth shattering, but it would be nice to find the switches for voltage control ( >1.175) and VRAM (> 2150)...not holding my breath though; MPT 7 if/when will probably make that easier...


Nice work investigating the BIOS files themselves! I wish it were as easy as finding some switches, but there is a lot of work to be done for RDNA2 before we have a flash-able bios mod available.

For context, it was "so easy" to do with the RX 5600 XT because of a few things:

To make the bios flash-able, I believe all RedPanda did was find a way to say "Yep, we're good!" to the built in checks for bios legitimacy. If you can't crack the security, go around it, right?
Once that was done, to make parameters on the RX 5600 XT modifiable, the bios had to be modified in such a way that it told the drivers "Hey, I'm an RX 5700" when the driver set the power limits, but still told the driver "Treat me like an RX 5600 XT" in regards to the _rest_ of the driver package, so that drivers didn't write to the missing 2GB of VRAM, etc.

I sadly don't think it will be _as_ easy for RDNA2 cards in regards to circumventing _both_ the GPU's onboard checks for bios legitimacy (& card model type), AND the driver's checks for the same.

Hellm is still working hard over on Igor's Lab, as am I with whatever I can help him with. We're still a ways off from seeing a bios mod for RDNA2 cards I think, but Hellm is back to work with MorePowerTool, and for that I am thankful!

Once someone convinces Hellm to set up a link for donations to Hellm himself, I'll post a link.

Link goes to the ongoing thread for the BETA versions of MPT and RBE on Igor's Lab


----------



## CS9K

weleh said:


> All the cards have +1000A VRMs so it's fine.
> 
> I'm more concerned about my own PSU and that's why I stopped testing today. Was doing a 2800 Mhz run at 1.2Vcore and tripped OCP on my PSU at over 500W reported (which in reality would be much higher). Actually I had PL locked to 480W and the card pulled much more which is why it probably shut off. Had to power cycle the PSU to boot.
> 
> So yea, until I upgrade I'm not sure I'll be pushing the limits much more.


That is a SPICY test! I love the Seasonic PRIME series, myself; especially the Titanium model. Over-engineering at its finest 💗


----------



## weleh

You can flash an XTX or XTXH vbios. I've done it via Linux and it sort of works.

There's something else on the card preventing it; maybe there are actual hardware changes, since the die stepping is different and supposedly there's an improved clock gen as well.


----------



## J7SC

CS9K said:


> Nice work investigating the BIOS files themselves! I wish it were as easy as finding some switches, but there is a lot of work to be done for RDNA2 before we have a flash-able bios mod available.
> 
> For context, it was "so easy" to do with the RX 5600 XT because of a few things:
> 
> To make the bios flash-able, I believe all RedPanda did was find a way to say "Yep, we're good!" to the built in checks for bios legitimacy. If you can't crack the security, go around it, right?
> Once that was done, to make parameters on the RX 5600 XT modifiable, the bios had to be modified in such a way that it told the drivers "Hey, I'm an RX 5700" when the driver set the power limits, but still told the driver "Treat me like an RX 5600 XT" in regards to the _rest_ of the driver package, so that drivers didn't write to the missing 2GB of VRAM, etc.
> 
> I sadly don't think it will be _as_ easy for RDNA2 cards in regards to circumventing _both_ the GPU's onboard checks for bios legitimacy (& card model type), AND the driver's checks for the same.
> 
> Hellm is still working hard over on Igor's Lab, as am I with whatever I can help him with. We're still a ways off from bios mods for RDNA2 cards, but Hellm is back to work with MorePowerTool, and for that I am thankful!
> 
> Link goes to the ongoing thread for the BETA versions of MPT and RBE on Igor's Lab


...good to hear about the ongoing MPT work! And yeah, it is far more tricky than just throwing a few switches. That said, I chose the Gigabyte 3 pin and corresponding Aorus 3 pin as they apparently use the same PCB, so it limits the number of potential suspects in the room, so to speak.

...I did manage to load the Aorus 3x8 bios on my Gigabyte 3x8, and it did 'show' 1.2V, but there are plenty of other checks and balances which spoil the party.


----------



## cfranko

My air cooled 6900 XT reaches 115 degrees hotspot at 410 watts and shuts down during Time Spy to protect itself. Is this normal?


----------



## LtMatt

cfranko said:


> My air cooled 6900 xt reaches 115 degrees hotspot at 410 watts and shuts down during timespy to protect itself is this normal?


Yep, from memory I think the shutdown temp is 118C for the MBA.

The stock cooler is great for stock operation, quietness and a small overclock. But for extreme overclocks it gets overwhelmed once you go past 325-350W.


----------



## cfranko

LtMatt said:


> Yep, from memory I think the shutdown temp is 118C for the MBA.
> 
> The stock cooler is great for stock operation, quietness and a small overclock. But for extreme overclocks it gets overwhelmed once you go past 325-350W.


I think an air cooled card shouldn't reach 115 degrees at 400 watts though. I don't understand what's wrong; it's not the airflow of the case, I know that for sure.


----------



## lestatdk

cfranko said:


> I think an air cooled card shouldn’t reach 115 degrees on 400 watts though. I don’t understand whats wrong its not the airflow of the case I know that for sure.


It will. I have an MSI 6900XT Gaming X and suffer from the same problem: very high hotspot temps after I use MPT to increase the power limits.
Unfortunately, I seem to have one of the few cards that manufacturers have no plans to make a waterblock for.


----------



## cfranko

lestatdk said:


> It will. I have an MSI 6900XT Gaming X and suffer from the same problem. Very high hotspot temp after I use MPT to increase power limits.
> Unfortunately I seem to have one of the few cards that have manufacturers have no plans to make a waterblock for


My card has a waterblock, but I'd need to get it from AliExpress, which bothers me; also, building a custom loop is really expensive in total. I'm not really sure, but I think I'm going to build a loop. This is the first time I've had a GPU shut down automatically because of temperatures.


----------



## lestatdk

cfranko said:


> My card has a waterblock but I need to get it from aliexpress which bothers me, also building a custom loop is really expensive in total. I am not really sure but I think I am going to build a loop. This is the first time I had a GPU shutdown automatically because of temperatures


Also a first for me. I have found the limit now: if I push the max frequency in Wattman 30 MHz higher, it will shut down. So for daily use I have set it 100 MHz lower just to be safe.


----------



## weleh

Well, it's just a bad air cooler really.

This is my old Nitro+ SE 6800XT doing 410W at 86C hotspot.
This is at 20C ambient.

Sure, the die is the same with fewer CUs, but thermal shutdown on a 6900XT at 400W shouldn't really happen.
Look at BZ's 6900XT: he's pulling way over 500W on an air cooler, nowhere near shutdown hotspot.
Sure, his ambient might be low, like 17-18C or something, but still.


----------



## lestatdk

weleh said:


> Well it's just bad air cooler really.
> 
> This is my old Nitro+SE 6800XT doing 410W at 86C hotspot.
> This is at 20C ambient.
> 
> Sure the die is the same with less CU's but thermal shutdown on a 6900XT at 400W shouldn't really happen.
> Look at BZ's 6900XT, he's pulling way over 500W on an air cooler nowhere near shutdown hotspot.
> Sure his ambient might be low like 17-18C or something but still.


His card is very modified, so not really a fair comparison to a stock card.

I think thermals would be improved with a re-paste, but I don't have the balls to do that (yet)


----------



## LtMatt

I had a thermal shutdown on my (now sold) 6900 XT MBA once I went past 400W in Time Spy.


----------



## lawson67

cfranko said:


> I think an air cooled card shouldn’t reach 115 degrees on 400 watts though. I don’t understand whats wrong its not the airflow of the case I know that for sure.


You can do 400W on air only if you've used LM; normal paste isn't an efficient enough transfer medium to conduct the heat along the whole heatsink at 400W. My card's junction is around 96-98C at 400W using LM with the GPU fans at about 60% (PowerColor Red Devil).


----------



## lawson67

weleh said:


> Well it's just bad air cooler really.
> 
> This is my old Nitro+SE 6800XT doing 410W at 86C hotspot.
> This is at 20C ambient.
> 
> Sure the die is the same with less CU's but thermal shutdown on a 6900XT at 400W shouldn't really happen.
> Look at BZ's 6900XT, he's pulling way over 500W on an air cooler nowhere near shutdown hotspot.
> Sure his ambient might be low like 17-18C or something but still.
> 
> View attachment 2518609


Buildzoid also has a fan sitting directly on the back of the GPU die.


----------



## weleh

lestatdk said:


> His card is very modified, so not really a fair comparison to a stock card.
> 
> I think thermals would be improved with a re-paste, but I don't have the balls to do that (yet)


His card is modified to pull more power.
I think some coolers are just really bad, or the mounting is bad from the factory, or worse, the die/coldplate is uneven (more than usual), which causes poor contact spots.


----------



## cfranko

lestatdk said:


> His card is very modified, so not really a fair comparison to a stock card.
> 
> I think thermals would be improved with a re-paste, but I don't have the balls to do that (yet)


I repasted my GPU and it didn't make a difference at all. I still get thermal shutdown at 400 watts.


----------



## lestatdk

cfranko said:


> I repasted my gpu, didn't make a difference at all. I get thermal shutdown at 400 watts.


Ok, mine shuts down way before 400 as it hits ridiculously high hotspot temps.


----------



## LtMatt

It's normal for the MBA cards, guys, I wouldn't worry too much. Even my Merc 6900 XT would get close to throttle temps under extended 400W load at 100% fan speed.

It's only since moving to the Toxic, with its rad sized for 400W and Thermal Grizzly Kryonaut, that a 400W sustained load with a silent fan profile becomes possible, even with an ambient temp of 25C.


----------



## ZealotKi11er

The XTXH LC with a single 120mm rad does 450W fine in TS.


----------



## CS9K

ZealotKi11er said:


> XTXH LC with single 120mm does 450w fine with TS.


The liquid cooler (and liquid coolers in general) wicks heat away from VERY concentrated sources, like an 80 CU GPU die, MUCH better than heatpipes can.

I'm actually surprised you all made it to 400W on air with a reference RX 6900 XT. The RX 6900 XT is a spicy boi. The reference cooler in particular leaves something to be desired compared to the much fatter cooler variants from PowerColor, ASUS, etc.


----------



## LtMatt

ZealotKi11er said:


> XTXH LC with single 120mm does 450w fine with TS.


Are you using one? 🤔


----------



## LtMatt

CS9K said:


> The liquid cooler (and liquid coolers in general) wick away heat from VERY concentrated sources like an 80CU GPU die, MUCH better than heatpipes can.
> 
> I'm actually surprised you all made it to 400W on air with a reference RX 6900 XT. The RX 6900 XT is a spicy boi. The reference cooler especially could be better compared to much fatter cooler variants from Powercolor, ASUS, etc.


The reference cooler is designed to cool the default power limit +15% at most. It does a decent job of that; the main thing is it's very quiet and looks nice.

If you want more than that, you need a more exotic AIB design IMO.


----------



## J7SC

lawson67 said:


> Buildzoid also has a fan sat directly on the back of the GPU die.


...Buildzoid also runs even his 'daily' on an open-air test bench  - makes a big difference.

More generally, ramping up PL in MPT will send the Hotspot up relatively more than the regular GPU temp


----------



## marcoschaap

cfranko said:


> My card has a waterblock but I need to get it from aliexpress which bothers me, also building a custom loop is really expensive in total. I am not really sure but I think I am going to build a loop. This is the first time I had a GPU shutdown automatically because of temperatures


I ordered my Bykski from AliExpress with no problems at all (shipped to the Netherlands). Just to reassure you: it was in perfect condition and delivered within 14 days.


----------



## ZealotKi11er

LtMatt said:


> Are you using one? 🤔


Yeah. It's the only XTXH I have.


----------



## LtMatt

ZealotKi11er said:


> Yeah. Its the only XTXH I have.


Is it the AMD LC version?


----------



## CS9K

Here's a TL;DR update on the MPT beta so far:

- One can only reduce voltage with the Linear Droop settings
- Of the new clocks listed on the "Frequency" tab, only fclk makes a measurable, albeit small, performance difference
- DCEFclk is not modifiable via SPPT
- SOC clock, I _believe_, is also capped at 1200MHz

Some information that I have uncovered in hours upon hours of digging through the code of AMD's Linux drivers:





News - Neue Beta-Versionen des MorePowerTools (MPT) und Red BIOS Editors – BIOS Unlock für RDNA und alle Frequenzen für RDNA2-Karten! (www.igorslab.de)




With this update too: DCEF means "_Display Controller Engine Fabric"_, which says to me that it too is part of the hardware that drives the display outputs themselves.

Some results from testing Fclk speeds at a set core-clock-speed:





AMD - RED BIOS EDITOR und MorePowerTool - BIOS-Einträge anpassen, optimieren und noch stabiler übertakten | Navi unlimited (www.igorslab.de)


----------



## ZealotKi11er

LtMatt said:


> Is it the AMD LC version?


Yeah.


----------



## ptt1982

1) Good stuff with the MPT updates, really appreciate an active and knowledgeable community like this. Still hoping XTX cards will get that 1.2v and higher Mem OC headroom.
2) Regarding the air-cooled reference cards hitting 400W: that's not safe or possible. Some report 75C edge / 110C junction, which equals thermal throttling, at stock settings. Those are typically with high ambient temps and a bad-airflow case on the default, non-aggressive fan curve. You can maybe get that +15% PL with more aggressive fans to keep the junction under 110C. To go higher while staying safe, you need an AIB card or watercooling.

Quick Q: Would the wise people here consider the below MPT settings safe for 24/7 usage?
360 PL GPU (+15% = 414W)
385 TDC Limit GPU (+15% = 442.75A)
65 TDC Limit SoC

PSU: Seasonic Focus 850W Platinum, 12V Single-rail
CPU: 5600X pulling max package 130W
GPU: 3-pin 6900xt Red Devil XTX
MB: Gigabyte Aorus Ultra X570

With the above settings, after extended stress testing, I've seen Edge 58C Junction 83C at the absolute worst case scenario peak (=414W power draw, ambient temp 30C).
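The +15% arithmetic above can be written out as a quick sketch (Python; the helper name is mine, and the flat +15% Wattman slider is the assumption from my settings, not anything from AMD documentation):

```python
# Back-of-the-envelope check of the MPT numbers above. effective_limit is
# a made-up helper; the +15% slider maximum is the post's assumption.

def effective_limit(mpt_value: float, slider_pct: float = 15.0) -> float:
    """Wattman's power slider can raise the MPT base limit by up to +15%."""
    return mpt_value * (1 + slider_pct / 100)

pl_gpu = effective_limit(360)    # 360 W base -> 414 W at +15%
tdc_gpu = effective_limit(385)   # 385 A base -> 442.75 A at +15%

print(f"PL {pl_gpu:.2f} W, TDC {tdc_gpu:.2f} A")
```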


----------



## CS9K

ptt1982 said:


> 1) Good stuff with the MPT updates, really appreciate an active and knowledgeable community like this. Still hoping XTX cards will get that 1.2v and higher Mem OC headroom.
> 2) Regarding the air cooled reference cards hitting 400W = that's not safe or possible. There are some who report 75C edge / 110C, which equals to thermal throttling, at stock settings. These are typically with high ambient temp and bad airflow case, default non-aggressive fan curve. You can maybe get that +15% PL with more aggressive fans to keep the junction under 110C. To go higher while being safe, you need AIB cards or watercooling.
> 
> Quick Q: Would the wise people here consider below MPT settings be safe for 24/7 usage?
> 360 PL GPU (+15% = 414W)
> 385 TDC Limit GPU (+15% = 442.75)
> 65 TDC Limit SoC
> 
> PSU: Seasonic Focus 850W Platinum, 12V Single-rail
> CPU: 5600X pulling max package 130W
> GPU: 3-pin 6900xt Red Devil XTX
> MB: Gigabyte Aorus Ultra X570
> 
> With the above settings, after extended stress testing, I've seen Edge 58C Junction 83C at the absolute worst case scenario peak (=414W power draw, ambient temp 30C).


Temps, power supply, and power limits look good! The only thing I daily-drive MPT for, is to change the Power Limit GPU to "365", which with +15% works out to 420W (nice). That's it, I leave both TDC values at their stock setting, personally.


----------



## ptt1982

CS9K said:


> Temps, power supply, and power limits look good! The only thing I daily-drive MPT for, is to change the Power Limit GPU to "365", which with +15% works out to 420W (nice). That's it, I leave both TDC values at their stock setting, personally.


Thanks for that. May I ask, why do you leave the GPU TDC values on stock? 

I saw in a random YouTube MPT guide that the GPU TDC should typically be around 20-25 higher than the GPU PL in MPT, but to be 100% honest I'm not sure why on a technical level. I understand that the TDC limits the current (amps) delivered to the GPU, reacts to the PL% hike in Wattman, and can be a bottleneck, but I'm not sure if my TDC values make sense or are safe for long-term use. I hope my real wattage doesn't overshoot 525W and melt the cables!
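The reason the two knobs differ comes down to P = V * I: PL caps watts, TDC caps amps, and the amps implied by a given wattage rise as core voltage droops. A minimal sketch, using plausible RDNA2 load voltages (my assumption, not measurements):

```python
# Why PL (watts) and TDC (amps) are separate limits: P = V * I, so the
# current implied by a power limit depends on core voltage. The voltages
# here are illustrative assumptions, not measured values.

def implied_current(power_w: float, vcore_v: float) -> float:
    """Current the core rail must carry to deliver power_w at vcore_v."""
    return power_w / vcore_v

high_v = implied_current(414, 1.10)  # ~376 A at 1.10 V
low_v = implied_current(414, 1.00)   # 414 A if voltage droops to 1.00 V

# Lower voltage under load means more amps for the same watts, which is
# one reason guides suggest setting TDC with some headroom above PL.
print(round(high_v), round(low_v))
```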


----------



## CS9K

ptt1982 said:


> Thanks for that. May I ask, why do you leave the GPU TDC values on stock?
> 
> I saw on a random youtube MPT guide that the GPU TDC should be typically around +20-25 higher than the GPU PL on MPT, but to be 100% honest I'm not sure why on a technical level. I understand that the TDC provides amps (current) to the GPU and reacts to the PL% hike on Wattman, and can be a bottleneck, but not sure if my TDC values make sense or are safe for long-term use. I hope my real wattage doesn't overshoot over 525W and melt the cables!


In my experience, changing either or both TDC values accomplishes nothing. I've tested modifying them both with my overclock at max clock speed (I run out of voltage; my sample tops out at 2715MHz on the slider on 21.7.1) and with my undervolt. I prefer to leave them at stock so that if, for whatever reason, the card _does_ try to pull some slick ****, the stock limits are in place to catch it.

Since my reference card is only dual-PCIe-8pin, and we had the crimp tool anyway, I made some 16ga cables for my card. I at least feel _slightly_ more comfortable with a 420W card power limit, knowing how high the wattage can go during power draw spikes on the RX 6900 XT.


----------



## J7SC

I'm still dreaming of having BOTH a higher (>2150) VRAM clock and a normal GPU OC... clearly, 2260 MHz VRAM works fine, not least because that is what Wattman suggested as a VRAM OC when unshackled for my card, never mind the factory AMD XTXH LC (below). Alas, 2260 VRAM doesn't do me much good with the GPU stuck at safe-mode speed :-( ... the moment I try to save any VRAM speed over 2150, the GPU speed drops to safe mode; 2150 and below is fine. I still have a few tricks and ideas I want to try next week to see if I can get past this limitation.

All that said, this 6900XT system also has some productivity functions, and I am really pleased with the performance I'm getting right out of the box... then there's the 3090 w/ 520W and 1KW XOC vbios next to it if I really want to go crazy (which I don't). Looking forward to the updated w-loop for the 6900XT; with MPT PL > 370W all in, I find it a must for keeping the hotspot etc. under control.


----------



## lawson67

ptt1982 said:


> Thanks for that. May I ask, why do you leave the GPU TDC values on stock?
> 
> I saw on a random youtube MPT guide that the GPU TDC should be typically around +20-25 higher than the GPU PL on MPT, but to be 100% honest I'm not sure why on a technical level. I understand that the TDC provides amps (current) to the GPU and reacts to the PL% hike on Wattman, and can be a bottleneck, but not sure if my TDC values make sense or are safe for long-term use. I hope my real wattage doesn't overshoot over 525W and melt the cables!


Buildzoid says two 8-pin cables should be fine up to 600w


----------



## J7SC

lawson67 said:


> Buildzoid says two 8-pin cables should be fine up to 600w


...yeah, and so does DerBauer...however it still also comes down to gauge of the wire etc. Overall, I keep TDC as close as possible to spec as it relates to the hard bits on the pcb and as @CS9K also pointed out, you do want to have some safety margin re. power spikes (ie what happened to some EVGA 3090s w/ Amazon New World last week).

...btw, I've seen some YT 'MPT guides' where they kept on confusing Watts with Amps when discussing MPT PL and TDC...


----------



## CS9K

lawson67 said:


> Buildzoid says two 8-pin cables should be fine up to 600w





J7SC said:


> ...yeah, and so does DerBauer...however it still also comes down to gauge of the wire etc. Overall, I keep TDC as close as possible to spec as it relates to the hard bits on the pcb and as @CS9K also pointed out, you do want to have some safety margin re. power spikes (ie what happened to some EVGA 3090s w/ Amazon New World last week).
> 
> ...btw, I've seen some YT 'MPT guides' where they kept on confusing Watts with Amps when discussing MPT PL and TDC...


This. The RX 6900 XT has harder-hitting "hammers" on power draw when the card loads down, even compared to the 3090 (trying to find a link). Sustained current is one thing; constant, or even semi-regular, millisecond-long periods of significantly higher power draw are another.

* Edit * One source among many: Is AMD getting the crown back? Radeon RX 6900 XT 16 GB review with benchmarks and a deeper technical analysis | Page 15 | igor´sLAB

* Edit 2 * To be clear, I'm not saying anyone is wrong; Der8auer and Buildzoid aren't wrong, but there's also nothing wrong with reasonable "comfort zones" with one's gear. Take a moment and go look up some of the calculations on what is safe current draw for 80C 18ga wire when used in an 8-pin PCIe power cable.

Terrifying, eh?

Now go look at what kind of cable your PSU PCIe 8-pin cables are made of
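The arithmetic being hinted at is simple enough to sketch: an 8-pin PCIe connector carries 12 V on three pins, so the 150 W spec works out to roughly 4.2 A per wire, and overdriving the cable scales that linearly. (The 18 AWG ampacity figure in the comment is a typical 80C chassis-wiring value, my assumption rather than a number from any single standard.)

```python
# Per-wire current on an 8-pin PCIe cable: three 12 V pins share the load.

PINS_12V = 3
VOLTS = 12.0

def amps_per_wire(cable_watts: float) -> float:
    """Current per 12 V wire when a single 8-pin carries cable_watts."""
    return cable_watts / (PINS_12V * VOLTS)

spec = amps_per_wire(150)   # ~4.17 A per wire at the 150 W spec
heavy = amps_per_wire(300)  # ~8.33 A per wire if you push 300 W down one cable

# Common 80C chassis-wiring ratings for 18 AWG sit around 9-10 A per wire,
# so a 300 W cable is already close to that ceiling.
print(round(spec, 2), round(heavy, 2))
```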


----------



## lawson67

J7SC said:


> ...yeah, and so does DerBauer...however it still also comes down to gauge of the wire etc. Overall, I keep TDC as close as possible to spec as it relates to the hard bits on the pcb and as @CS9K also pointed out, you do want to have some safety margin re. power spikes (ie what happened to some EVGA 3090s w/ Amazon New World last week).
> 
> ...btw, I've seen some YT 'MPT guides' where they kept on confusing Watts with Amps when discussing MPT PL and TDC...


I very much doubt you could even get any of these cards to pull 600W, but I'm sure that even with two 8-pins you'd be fine up to 500W, and with three 8-pins even higher. Importantly, you'd want to know how many power phases you have and what kind.


----------



## lawson67

Any of you guys that have the PowerColor Red Devil RX 6900 XT Ultimate (I think Jonrock1992 has one), can you check your BIOS and see if the thermal limit is 110C? Most XTXU chips are 95C, I believe, yet the Liquid Devil Ultimate is 110C looking at the BIOS (link below). If the Red Devil RX 6900 XT Ultimate also has a thermal limit of 110C, I might think about getting one and keeping it on air.








Powercolor RX 6900 XT VBIOS - 16 GB GDDR6, 500 MHz GPU, 914 MHz Memory (www.techpowerup.com)


----------



## BIaze

What's the best BIOS so far for the 6900 XT? Does using this modded BIOS help improve performance further (especially in DX11 games), or is it useless?








Amernime Zone AMD Software: Adrenalin / Pro Driver - Release Discovery 22.12.1 WHQL (forums.guru3d.com)


----------



## jfrob75

Question about power draw. 
With my GB Aorus Extreme, using GPU-Z, I am seeing a max GPU Chip Power Draw of 408 W with Time Spy and 388 W with Port Royal. Max freq for TS is set to 2810MHz and Port Royal is set to 2870MHz. My MPT power setting is 420.
What are you guys using to monitor your power draw for your graphics card?


----------



## J7SC

lawson67 said:


> I very much doubt you could even get any of these cards to pull 600w, but i am sure even with two 8-pin you should be fine up to 500w, with three 8-pins even higher, importantly you would want to know the amount and what power phases you have.


...no doubt, though the point I was trying to make relates to the actual cable gauge playing a significant role as well (never mind 'pig tails' etc), rather than just the number of PCIe pins...with mining for example, there had been an explosion (pardon the pun) of questionable power peripherals, per GN's vid below which touches on some of these issues.

Finally, electronic equipment sold in most jurisdictions has to stick to certain specs for liability reasons...doesn't mean that you couldn't run it faster. During my HWBot days, I had 4x PSUs driving custom-bios quad-SLI and an HEDT CPU (all sub-zero) spiking to beyond 4000 Watts...fortunately, I kept most of the high-power PSUs and custom cables for my current builds, including a set for my 6900XT


----------



## ptt1982

CS9K said:


> This. The RX 6900 XT has harder-hitting "hammers" on power draw when the card loads down, even compared to the 3090 (trying to find a link). Sustained current is one thing, constant, or even semi-regular, millsecond-long periods of significantly more power draw, are another.
> 
> * Edit * One source among many: Is AMD getting the crown back? Radeon RX 6900 XT 16 GB review with benchmarks and a deeper technical analysis | Page 15 | igor´sLAB
> 
> * Edit 2 * To be clear, I'm not saying anyone is wrong; Der8auer and Buildzoid aren't wrong, but there's also nothing wrong with reasonable "comfort zones" with one's gear. Take a moment and go look up some of the calculations on what is safe current draw for 80C 18ga wire when used in an 8-pin PCIe power cable.
> 
> Terrifying, eh?
> 
> Now go look at what kind of cable your PSU PCIe 8-pin cables are made of


That Igor's LAB link is eye opening. Jaysus! You scared me man. Now I'm all wobbly regarding the MPT settings.

According to this, there's a possibility of around a +140W (<1ms) peak on top of the PL you set for your card, and you'd probably want to put another 20W on top of that to be 100% on the safe side. If my PL is at 414W, I can expect up to 554W spikes. In theory, the 3x 8-pin + PCIe slot would provide 525W, but obviously in reality they can go higher depending on your MB and the GPU's power-delivery system (as others noted about the two technical YouTubers' results). I might tone back the MPT settings, especially given what happened with Amazon's New World.

The Red Devil seems to have 14+2 phases using DrMOS as well as high-polymer capacitors. I'm sure the Seasonic Focus 850W Platinum should be fine as well. Hmm... I do wonder if I'm playing with fire here. PowerColor's website says the card can draw up to 480W of power, but I wonder if that is the 281W stock + 15% PL = 323W + 147W spikes.

At least nobody here blew up their 6900XT...yet!
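The spike math above can be written out in a few lines; the +140 W transient allowance, the extra 20 W margin, and the 150 W-per-8-pin / 75 W-slot budgets are all the figures from the post, not measured values:

```python
# Worst-case transient estimate vs. nominal connector budget, using the
# post's assumptions (+140 W sub-millisecond spike, 150 W per 8-pin, 75 W
# from the PCIe slot).

def expected_spike(pl_watts: float, transient_w: float = 140) -> float:
    """Worst-case short (<1 ms) spike on top of the configured power limit."""
    return pl_watts + transient_w

def connector_budget(n_8pin: int, slot_w: float = 75) -> float:
    """Nominal spec budget: 150 W per 8-pin plus the PCIe slot."""
    return n_8pin * 150 + slot_w

spike = expected_spike(414)    # 414 + 140 = 554 W
budget = connector_budget(3)   # 3 * 150 + 75 = 525 W nominal
print(spike, budget, spike + 20)  # the post adds 20 W more for margin
```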


----------



## CS9K

ptt1982 said:


> That Igor's LAB link is eye opening. Jaysus! You scared me man. Now I'm all wobbly regarding the MPT settings.
> 
> According to this there's a possibility of around +140W (<1ms) peak on top of the PL you set for your card, and you'd probably want to put another 20W on top of that to be 100% on the safe side. If my PL is at 414W, I can expect up to 554W spikes. In theory, the 3x8pin + PCI-E slot would provide 525W, but obviously in reality they can go higher depending on your MB, GPU power delivery systems (as per others said about the two technical youtubers results). I might tone back the MPT settings, especially given what happened with Amazon's New World.
> 
> Red Devil seems to have 14+2 phases using the DrMOS as well as high polymer capacitors. I'm sure the Seasonic Focus 850W Platinum should be fine as well. Hmm... I do wonder if I'm playing with fire here. Powercolor's website says the card can deliver up to 480W of power, but I do wonder if that is the 281W stock + 15% PL = 323W + 147W spikes.
> 
> At least nobody here blew up their 6900XT...yet!


Aye, it actually surprised me that the RX 6900 XT hammers for higher peak power draw than the 3090 does. 

You're fine so far as your power supply is concerned. The more I think about it, the more I think I'm going to back my wattage limit to 400W and leave it be. Time Spy Test 2 is the only thing I've ever run that goes above it, so eh. 

And if it's any consolation, the 3090 FE, with only +13% power target limit set, will run the "150W" PCIe 8-pin connectors out of spec at around 165W each. You've got a good PSU and a reasonable power limit for your card's capabilities. Keep those temps in check and you're good to go!


----------



## LtMatt

ZealotKi11er said:


> Yeah.


Very nice, did you have to buy a whole system to get it?

How much difference does the faster memory make and how well does it overclock?


----------



## LtMatt

lawson67 said:


> Any of you guys that have the powercolor Red Devil RX 6900 XT Ultimate, think Jonrock1992 has one can you check your bios and see if the thermal limit is 110c as most XTXU chips are 95c i belive yet the Liquid Devil Ultimate is 110c looking at the bios ( link below), if the Red Devil RX 6900 XT Ultimate also has a thermal limit of 110c i might think about getting one and keeping it on air
> 
> 
> 
> 
> 
> 
> 
> 
> Powercolor RX 6900 XT VBIOS
> 
> 
> 16 GB GDDR6, 500 MHz GPU, 914 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


I'd be interested to know this. Having the higher Hotspot limit would be a must for air cooled cards as the Merc cooler temps got out of control at over 400W on the XTXH.


----------



## weleh

For everyone wondering, these cards are super hungry once you start pushing the limits.

I was doing TS with the EVC2SX at around 1.19-1.2 Vcore at 500W, and it trips my Seasonic Focus GX 750W PSU instantly. That means the transients are insanely high (if I had to guess, over 800W).
Also remember: if your card reports 500W in software, it's pulling a lot more in reality.


----------



## weleh

double post


----------



## lawson67

ptt1982 said:


> That Igor's LAB link is eye opening. Jaysus! You scared me man. Now I'm all wobbly regarding the MPT settings.
> 
> According to this there's a possibility of around +140W (<1ms) peak on top of the PL you set for your card, and you'd probably want to put another 20W on top of that to be 100% on the safe side. If my PL is at 414W, I can expect up to 554W spikes. In theory, the 3x8pin + PCI-E slot would provide 525W, but obviously in reality they can go higher depending on your MB, GPU power delivery systems (as per others said about the two technical youtubers results). I might tone back the MPT settings, especially given what happened with Amazon's New World.
> 
> Red Devil seems to have 14+2 phases using the DrMOS as well as high polymer capacitors. I'm sure the Seasonic Focus 850W Platinum should be fine as well. Hmm... I do wonder if I'm playing with fire here. Powercolor's website says the card can deliver up to 480W of power, but I do wonder if that is the 281W stock + 15% PL = 323W + 147W spikes.
> 
> At least nobody here blew up their 6900XT...yet!


I very much doubt you'll be blowing up your PowerColor anytime soon. The Red Devil RX 6900 XT and the RX 6800 XT Red Devil share the same board and power stages from Infineon, the TDA21472, rated at 70 amps each, and there are 19 in total: 16 for the Vcore, 2 for the Vmem and one for the VDDCI. So for the Vcore alone you have 1120A of power-stage capacity; it can handle way more power than you can throw at it. Furthermore, the Infineon TDA21472 is currently one of the best power stages you can get: it has over-current protection, over-temperature protection, short-circuit protection and a whole bunch more safety features built in.

I monitor power draw for my PSU from the wall. With my card pulling about 400W I pull about 650W from the wall, and I have a very good quality 850W PSU, as everyone should use (a good quality PSU), which will also shut down if you overload it. Badly designed Molex cables and GPU risers can certainly catch fire, especially if you misuse them; but as for blowing up your card or PSU, that's highly unlikely. If they are working correctly, they will simply shut down. I'm not condoning running slightly over spec; it's up to everyone to evaluate what they're doing. But when your PCIe cables start to melt and weld themselves to your card's 8-pin connectors, that's when you need to worry. Also, while under full load, hold your PCIe cables; if they're hot, you're running too much power. My PowerColor RX 6800 XT only has two 8-pin connectors, and at 400W they're still cool, as is the whole bunch of PSU cables stuffed around the PSU, buried in the bottom of my case.
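The phase-count arithmetic above is easy to sanity-check; the per-stage rating and the 16 + 2 + 1 layout are the figures from my post, and the total is a theoretical combined rating, not what the card can thermally sustain:

```python
# Combined VRM stage capacity for the Red Devil board as described above
# (70 A Infineon stages; 16 Vcore + 2 Vmem + 1 VDDCI). Theoretical only.

STAGE_AMPS = 70
stages = {"vcore": 16, "vmem": 2, "vddci": 1}

vcore_capacity = stages["vcore"] * STAGE_AMPS  # 16 * 70 = 1120 A on Vcore
total_stages = sum(stages.values())            # 19 stages in total

print(vcore_capacity, total_stages)
```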


----------



## Blameless

J7SC said:


> not least as that is what Wattman indicated as a suggested VRAM OC


Wattman has recommended some blatantly unstable settings on some of my cards.



CS9K said:


> The more I think about it, the more I think I'm going to back my wattage limit to 400W and leave it be. Time Spy Test 2 is the only thing I've ever run that goes above it, so eh.


Before AMD put an app-specific limiter in its drivers, I could get the Ray Tracing feature test to pull 430-440W and ~350A TDC through my 6800XT at 2.6GHz... it also let me get over 32 fps in the test (edging out a stock RTX 3070), because it's otherwise always power limited or throttled in some way.

Try it with the 21.3.2 drivers sometime, if you're feeling adventurous.


----------



## ZealotKi11er

LtMatt said:


> Very nice, did you have to buy a whole system to get it?
> 
> How much difference does the faster memory make and how well does it overclock?


Got it directly from AMD as an engineering sample. I tested the XTX MBA @ 2150 FT vs the XTXH at 2310, and 2310 actually got a lower score in TS. I was able to get 2380 FT, but in terms of TS core score it was maybe 100-150 points more. Not much of a difference. Maybe the timings are much higher, and the benefits could show at 4K and/or in other games.


----------



## J7SC

PCGH.de (link below) also actually tested the 'fabled' AMD 6900XT LC and mentioned that the effective VRAM rate had been bumped from 8000 MHz to 9200 MHz, though it is not clear at what extra voltage, if any.

In any case, VRAM is another area (apart from ray tracing and DLSS) where the 6900XT can still gain, as at 4K (unlike 1440p and 1080p) it starts to fall behind the 3090, even with Infinity Cache, given the width of the memory bus. Having room for improvement is a big plus

source


----------



## majestynl

lawson67 said:


> Any of you guys that have the powercolor Red Devil RX 6900 XT Ultimate, think Jonrock1992 has one can you check your bios and see if the thermal limit is 110c as most XTXU chips are 95c i belive yet the Liquid Devil Ultimate is 110c looking at the bios ( link below), if the Red Devil RX 6900 XT Ultimate also has a thermal limit of 110c i might think about getting one and keeping it on air
> 
> 
> 
> 
> 
> 
> 
> 
> Powercolor RX 6900 XT VBIOS
> 
> 
> 16 GB GDDR6, 500 MHz GPU, 914 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


I Do..

Thermal Limit
Edge: 100°C 
Hotspot: 110°C 
Memory: 100°C


----------



## lawson67

majestynl said:


> I Do..
> 
> Thermal Limit
> Edge: 100°C
> Hotspot: 110°C
> Memory: 100°C


Thanks for info mate


----------



## jonRock1992

For what it's worth, I was getting +20MHz on my core clock on Windows 11 with the latest driver. I had to go back to Windows 10 because Oculus Link doesn't work with Windows 11, and I noticed that I lost 20MHz on my core clock in Timespy. I can only do 2750MHz vs 2770MHz in Win11 during a Timespy bench. It might be worth dual booting Win11 to see if you can squeeze out a little more performance if you're chasing records.


----------



## ZealotKi11er

jonRock1992 said:


> For what it's worth, I was getting +20MHz on my core clock on Windows 11 with the latest driver. I had to go back to Windows 10 because Oculus Link doesn't work with Windows 11, and I noticed that I lost 20MHz on my core clock in Timespy. I can only do 2750MHz vs 2770MHz in Win11 during a Timespy bench. It might be worth dual booting Win11 to see if you can squeeze out a little more performance if you're chasing records.


You have to check what your clock actually is, not the set clock.


----------



## jonRock1992

ZealotKi11er said:


> You have to check what you clk actually is not set clk.


Average clock is higher, and so is the GPU score. So Win11 works better for my GPU. It's real easy to dual boot so it's worth a shot.


----------



## lawson67

Slightly off topic, but this will give you better scores in 3DMark: I have managed a stable FCLK of 2000MHz and RAM at 4000MHz on my Ryzen 7 5800X, without any WHEA errors after multiple runs of Cinebench, using the following voltages: VSOC 1.15v, VDDG IOD 1.10v, VDDG CCD 1.00v. The RAM is CL16 3600MHz out of the box; to run 4000MHz I bumped it up to 1.42v. RAM is Micron E-die.


----------



## LtMatt

lawson67 said:


> Slightly off topic but will give you better scores in 3Dmark, i have managed a stable FLCK of 2000mhz and ram at 4000mhz on my Ryzen 7 5800x without any WHEA errors after multiple runs of cinebench using the following voltages, VSOC 1.15v - VDDG IOD 1.10v - VDDG CCD 1.00v , ram is CL16 3600mhz out the box to set 4000mhz i bumped the ram up to 1.42v, ram is Micron E-Die


Very rare to achieve that. My 5950X tops out at 1900Mhz.


----------



## jonRock1992

lawson67 said:


> Slightly off topic but will give you better scores in 3Dmark, i have managed a stable FLCK of 2000mhz and ram at 4000mhz on my Ryzen 7 5800x without any WHEA errors after multiple runs of cinebench using the following voltages, VSOC 1.15v - VDDG IOD 1.10v - VDDG CCD 1.00v , ram is CL16 3600mhz out the box to set 4000mhz i bumped the ram up to 1.42v, ram is Micron E-Die


Very nice! I have my 5800X running at 2067MHz FCLK. Gets 690+ cpu-z single-thread bench.


----------



## weleh

Alright boys,

Bought a new PSU, a Corsair HX1000 Platinum, and decided to give Time Spy another go.
This time at ~530W, 2650-2770 core, 2120 MHz VRAM.

I'm at the verge of stability on the core; upping VRAM causes the core to crash, so I had to compensate with extra Vcore.

Managed some 24.4K runs and am now the 3rd (technically 2nd, because 1 and 2 are from the same guy) best 6900XT on Time Spy Graphics Score.









(3DMark result links: www.3dmark.com)





The only change I made was FCLK to 2000. Nothing else.


----------



## LtMatt

weleh said:


> Alright boys,
> 
> Bought a new PSU, Corsair HX1000 Plat and decided to give it another go at Time Spy.
> This time, at ~530W, 2650-2770 Core, 2120 Mhz VRAM.
> 
> At the verge of stability on the core, upping VRAM causes core to crash so I had to compensate for this with extra Vcore.
> 
> Managed to do some 24.4K runs and am now the 3rd (technically 2nd becaus 1 and 2 are from the same guy) best 6900XT on Time Spy GraphicScore.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> (3DMark result links: www.3dmark.com)
> 
> 
> 
> 
> 
> The only changes I did was fCLK to 2000. Nothing else.


Can we see a picture of the card with the EVC?


----------



## weleh

Sure.


----------



## weleh

The soldered wires go on the EVC2SX I2C1 header, and the USB powers the EVC2SX and connects it to the PC so it can be controlled.


----------



## jonRock1992

I'll probably end up doing this down the road with mine. I was able to get 23950 without it, but would be interesting to see how far it could go.


----------



## J7SC

LtMatt said:


> Very rare to achieve that. My 5950X tops out at 1900Mhz.


...my 5950X has hit 2000/4000 after some new RAM (4x8 Sam-B 4000CL15) on its first try, but I only played around with it for a short while, and with relatively loose timings. Until I have more time, I prefer 1900/3800 and its tighter settings / latency, w/ the RAM undervolted a bit (stock voltage = 1.5v 😎). At those relatively small speed differentials, it really becomes a trade-off between MHz and latency. Some apps prefer one more than the other...



Spoiler: 5950X fun


----------



## weleh

jonRock1992 said:


> I'll probably end up doing this down the road with mine. I was able to get 23950 without it, but would be interesting to see how far it could go.


It's fun!

I managed a 23700-ish run without it, and with it there's more potential; however, I'm not entirely sure what's safe to do with the core. Spoke with BZ and even he doesn't know.

XTXH cards should scale up to 25000 Graphic Score.


----------



## Nizzen

lawson67 said:


> Slightly off topic but will give you better scores in 3Dmark, i have managed a stable FLCK of 2000mhz and ram at 4000mhz on my Ryzen 7 5800x without any WHEA errors after multiple runs of cinebench using the following voltages, VSOC 1.15v - VDDG IOD 1.10v - VDDG CCD 1.00v , ram is CL16 3600mhz out the box to set 4000mhz i bumped the ram up to 1.42v, ram is Micron E-Die


Too bad you aren't using 2x16GB dual-rank B-die with that IMC.....


----------



## ptt1982

weleh said:


> Alright boys,
> 
> Bought a new PSU, Corsair HX1000 Plat and decided to give it another go at Time Spy.
> This time, at ~530W, 2650-2770 Core, 2120 Mhz VRAM.
> 
> At the verge of stability on the core, upping VRAM causes core to crash so I had to compensate for this with extra Vcore.
> 
> Managed to do some 24.4K runs and am now the 3rd (technically 2nd, because 1 and 2 are from the same guy) best 6900XT on Time Spy Graphics Score.
> 
> The only changes I did was fCLK to 2000. Nothing else.


This might not work for everyone, but one thing I noticed recently: by changing desktop resolution to 720p from 2160p, I get 350 higher graphics score in timespy. Scaling mode centered and scaling resolution 1440p. Tested this multiple times just to be sure.


----------



## ZealotKi11er

weleh said:


> It's fun!
> 
> I managed a 23700ish run without it and with it there's potential however I'm not entirely sure on what it's safe to do with the core. Spoke with BZ and even he himself doesn't know.
> 
> XTXH cards should scale up to 25000 Graphic Score.


What does it do? Also, you said you got the score with 2650-2770? Is that correct? 3DMark reports much higher. I only get 23.5K with 2720-2820.


----------



## jonRock1992

weleh said:


> It's fun!
> 
> I managed a 23700ish run without it and with it there's potential however I'm not entirely sure on what it's safe to do with the core. Spoke with BZ and even he himself doesn't know.
> 
> XTXH cards should scale up to 25000 Graphic Score.


Nice! I would love to see what my GPU could do above 1.2V lol.


----------



## ptt1982

*New Drivers out: 21.7.2*. Brief findings on my Red Devil 6900XT XTX:

-Had to reduce my maximum overclock by 10MHz
-*Results went up 426 points in Timespy (yet another 1.82% increase from 21.7.1)*
-Junction temps might be slightly higher (test this yourself)
-Please note it says "Graphics driver not approved" in Timespy; 3DMark is due for an update to validate the new drivers
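The two figures in that changelog pin down the implied scores; a quick back-of-the-envelope check using only the 426-point and 1.82% numbers from the post, the rest is plain arithmetic:

```python
# Back out the implied pre-21.7.2 Time Spy graphics score from the
# quoted gain: +426 points, described as a 1.82% increase over 21.7.1.
gain_points = 426
gain_pct = 1.82

baseline = gain_points / (gain_pct / 100)  # implied 21.7.1 score
new_score = baseline + gain_points         # implied 21.7.2 score

print(round(baseline))   # ~23407
print(round(new_score))  # ~23833
```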


----------



## J7SC

ptt1982 said:


> *New Drivers out: 21.7.2*. Brief findings on my Red Devil 6900XT XTX:
> 
> -Had to reduce 10mhz of maximum overclock
> -*Results went up around 450 points in Timespy (yet another 2% increase from 21.7.1)*
> -Junction temps might be slightly higher (test this yourself)
> -Please note it says "Graphics driver not approved" on TimeSpy, 3Dmark is due for an update to validate the new drivers
> 
> View attachment 2518944


Neat ! Any change in results for other 3DM apps such as Time Spy Ex and especially Port Royal (re. ray tracing etc) ? I just took out my 6900 XT to integrate into a new w-cooling loop so I can't run for a bit...


----------



## ZealotKi11er

At this point RDNA2 will lead TS over Nvidia.


----------



## lawson67

ZealotKi11er said:


> At this point RDNA2 will lead TS over Nvidia.


Yep, I have just added 500+ points to my TS score by just changing to the new driver, happy days


----------



## lawson67

ptt1982 said:


> *New Drivers out: 21.7.2*. Brief findings on my Red Devil 6900XT XTX:
> 
> -Had to reduce 10mhz of maximum overclock
> -*Results went up 426 points in Timespy (yet another 1.82% increase from 21.7.1)*
> -Junction temps might be slightly higher (test this yourself)
> -Please note it says "Graphics driver not approved" on TimeSpy, 3Dmark is due for an update to validate the new drivers
> 
> View attachment 2518944


Junction temps remained the same for me


----------



## jonRock1992

Sweet! I'm looking forward to trying this out. I'll post results when I test it.


----------



## BIaze

Does non WHQL work for TS scores/validation too?


----------



## LtMatt

Looking forward to trying out the new drivers; we should be able to claim Timespy as a team red benchmark now, just like the Firestrike benches. Only Timespy Extreme/Port Royal shall remain an Nvidia stronghold, for now. Won't get time to play with them properly till the weekend though.


----------



## CantingSoup

Time Spy score with stock 6900XT Phantom Gaming (will be blocking the card soon) with the 5800X on water.
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII IMPACT (3dmark.com)


----------



## lestatdk

+470 graphics score for me . Still says driver is not approved though :/


----------



## lestatdk

My first 20k+ score  and a +527 graphics score. Gotta love those AMD developers giving us free performance


----------



## weleh

Deleted


----------



## weleh

ptt1982 said:


> This might not work for everyone, but one thing I noticed recently: by changing desktop resolution to 720p from 2160p, I get 350 higher graphics score in timespy. Scaling mode centered and scaling resolution 1440p. Tested this multiple times just to be sure.


How and where? Running Windows at 720p does nothing for my scores because 3dmark still runs at its intended resolution but smaller image.


----------



## ZealotKi11er

I wonder, with all these gains in TS, if general game perf is up also. Would be a good time for HUB to do a 30-game test.


----------



## CS9K

ZealotKi11er said:


> I wonder with all these gains in TS, if general game perf is up also. Would be a good time for HUB to go a 30 game test.


I was just thinking the same thing. I've only objectively measured performance in Time Spy, Port Royal, and Superposition, and only against one another in each driver version, never really against other driver versions. Superposition does not usually see the same % improvements that Time Spy does, likewise Time Spy rarely, if ever, sees improvements to the memory subsystem that Superposition clearly shows.

It's neat, how all of this works :3


----------



## J7SC

CS9K said:


> I was just thinking the same thing. I've only objectively measured performance in Time Spy, Port Royal, and Superposition, and only against one another in each driver version, never really against other driver versions. Superposition does not usually see the same % improvements that Time Spy does, likewise Time Spy rarely, if ever, sees improvements to the memory subsystem that Superposition clearly shows.
> 
> It's neat, how all of this works :3


I don't think that AMD would do something naughty these days w/ drivers, like NVidia used to do for certain 3DMark benchies back in the day


----------



## lestatdk

AMD has always improved performance over time with the driver updates. With Nvidia on the other hand it's almost as if they deliberately make it worse for older cards ,to force you into an upgrade .


----------



## CS9K

J7SC said:


> I don't think that AMD would do s.th. naughty these days w/ drivers like NVidia used to do for certain 3DMark benchies back in the day


Ah, yeah, I agree. I wasn't implying that they were adjusting drivers _specifically_ for Time Spy, more that it's just neat how different benchmarks load the card down differently, and how pure-raster things like Time Spy and Heaven see larger improvements from whatever-it-is that AMD are changing.


----------



## 6u4rdi4n

lestatdk said:


> AMD has always improved performance over time with the driver updates. With Nvidia on the other hand it's almost as if they deliberately make it worse for older cards ,to force you into an upgrade .


Fake news.


----------



## J7SC

CS9K said:


> Ah, yeah, I agree. I wasn't implying that they were adjusting drivers _specifically_ for Time Spy, more that it's just neat how different benchmarks load the card down differently, and how pure-raster things like Time Spy and Heaven see larger improvements from whatever-it-is that AMD are changing.


...wasn't taking it any other way - just remembering NVidia's past 'benchmark specials'  with some detonator drivers.

AMD rasterization in BigNavi is indeed impressive. Still, the narrowish memory bus will impact at 4K+...and as mentioned before, I hope they can also improve ray tracing via driver updates, and continue to work on SuperFXR...coz...BF 2042 soon


----------



## Stopthewar

Hi guys,
wanted to share my results on my reference amd 6900xt

I did some testing in Wattman only:

My setup:

5900x / 32 gb @4000 and B550 MSI carbon gaming / amd 6900xt

Settings Ram is at fast timing 2150 and 15% performance

@2579 / 2479 min/max
mv score
1150 -> 17751
1145 -> 17935
1140 -> 18150
1135 -> 18302
1130 -> 18404
1125 -> 18352

after that I settled on mv 1125 and start checking the clocks

core score
2529 -> 19008
2504 -> 18947 / @ 1125mv -> crash
2489 -> 18966 / @ 1125mv -> 19061
2464 -> 19036
2459 -> 19035 / @ 1125mv -> 18941
2434 -> crash

I now settled with 2579 / 2479 with 1130 mv (1125 would crash at 10/20 runs). This will pass the benchmark in 3dMark. (70° at 85% fan)

Question:
I realized that going down with the clock brings more performance. Am I right to assume that my card is not getting enough power? (293W always.) Do I need to unlock it with MPT, or can you suggest improvements to my settings? The score I get is around 19000-19100 on air (stock, no mods).

Edit : I scored 19 061 in Time Spy

Edit 2:
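The undervolt sweep above can be summarized programmatically; a throwaway sketch using only the scores quoted in this post, picking out the peak:

```python
# Undervolt sweep results quoted above (mV -> Time Spy graphics score).
scores = {
    1150: 17751,
    1145: 17935,
    1140: 18150,
    1135: 18302,
    1130: 18404,
    1125: 18352,
}

# The score rises as voltage drops, then falls off once stability gives
# out - the classic power-limit signature: less voltage means less power
# per clock, so sustained clocks go up until the card errors out.
best_mv = max(scores, key=scores.get)
print(best_mv, scores[best_mv])  # 1130 18404
```

This matches the poster's choice of 1130mV as the sweet spot; MPT would be the way to confirm the 293W power ceiling is actually the limiter.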


----------



## lawson67

Stopthewar said:


> Hi guys,
> wanted to share my results on my reference amd 6900xt
> 
> I did some testing in wattmann only:
> 
> My setup:
> 
> 5900x / 32 gb @4000 and B550 MSI carbon gaming / amd 6900xt
> 
> Settings Ram is at fast timing 2150 and 15% performance
> 
> @2579 / 2479 min/max
> mv score
> 1150 -> 17751
> 1145 -> 17935
> 1140 -> 18150
> 1135 -> 18302
> 1130 -> 18404
> 1125 -> 18352
> 
> after that I settled on mv 1125 and start checking the clocks
> 
> core score
> 2529 -> 19008
> 2504 -> 18947 / @ 1125mv -> crash
> 2489 -> 18966 / @ 1125mv -> 19061
> 2464 -> 19036
> 2459 -> 19035 / @ 1125mv -> 18941
> 2434 -> crash
> 
> I now settled with 2579 / 2479 with 1130 mv (1125 would crash at 10/20 runs). This will pass the benchmark in 3dMark. (70° at 85% fan)
> 
> Question:
> I realized that going down with the clock will bring more performance. Am I right to assume that my card does not get the power? (293 always). Do I need to unlock it with MPT or can you suggest improvements with my settings? The score I get is around 19000-> 19100 on air (stock no mods).
> 
> Edit : I scored 19 061 in Time Spy
> 
> Edit 2:
> View attachment 2519045


Try these settings using MPT; my card was terrible at pushing higher frequencies until I lowered the Maximum Voltage SoC. If you can't hit those frequencies, keep lowering by 25MHz or 50MHz at a time


----------



## reqq

Grats me.. got the worst 6900XT out there, amazing lol. It instantly goes up to 110°C on the hotspot/junction temp..

Would you guys send it back or mod it yourself?


By the way.. any of you play Battlefield V? The GPU refuses to load more than 85%.. really weird.


----------



## LtMatt

reqq said:


> View attachment 2519078
> 
> 
> Grats me.. got the worst 6900xt out there amazing lol. It instantly goes up 110 on the hotspot/junction temp..
> 
> Would you guys send it back or mod it yourself?
> 
> 
> By the way.. any of you play Battlefield V ? GPU refused to load more then 85%..really weird.


Depends. 

MBA or AIB?
Ambient temp?
Picture of chassis internals showing case airflow?


----------



## jfrob75

Updated to the latest driver, 21.7.2, and achieved my best TS graphics score to date: I scored 22 875 in Time Spy. Actually my best overall score to date as well.

As an experiment I ran TS on default settings, except for increasing the power limit to 450W through MPT: I scored 21 873 in Time Spy

So the driver improvements are certainly impressive when compared to a few months ago.
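Putting those two runs side by side quantifies the tuning headroom (overall scores as quoted in this post; a rough sketch, ignoring that the CPU score dilutes the overall number):

```python
tuned = 22875    # tuned run on 21.7.2
default = 21873  # default clocks, 450W power limit via MPT

# Relative uplift of the tuned run over the default-clocks run.
uplift_pct = (tuned - default) / default * 100
print(f"{uplift_pct:.2f}%")  # ~4.58% overall
```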


----------



## weleh

Just as I was starting to understand my card's behaviour, AMD launches new drivers with more performance 
Managed a 24528 run today. Lower clocks = more points... So complicated 
When the drivers become approved I'll install and test.


----------



## LtMatt

jfrob75 said:


> Updated to latest driver, 21.7.2, and achieved best graphics TS to date I scored 22 875 in Time Spy. actually best overall score to date as well.
> 
> As an experiment I ran TS on default settings except for increasing power to 450 W thru MPT. I scored 21 873 in Time Spy
> 
> So the driver improvements are certainly impressive when compared to a few months ago.


Damn, that's a beastly graphics score.


----------



## Stopthewar

Got 19 364 in Time Spy with the new drivers. Settings: 2464 / 2364 / 1130 / 2150 / 15%


----------



## J7SC

With the reported score improvements via the latest driver, and with some apparently suggesting slightly lower clocks yield higher scores, I wonder if the latest driver allows for higher power draw; maybe not peak, but intermediate. Temps would be telling, but my card is out right now for w-cooling updates.


----------



## weleh

I'll tell you what, my card does not like > 500W, it will crash on GT2 no matter the Vcore I give it.
Going lower on min clock actually gave more performance too.


----------



## J7SC

I don't know about the latest driver, but I have the best luck w/ 2450 as min speed...max depends on app, MPT, temps etc


----------



## LtMatt

jfrob75 said:


> Updated to latest driver, 21.7.2, and achieved best graphics TS to date I scored 22 875 in Time Spy. actually best overall score to date as well.
> 
> As an experiment I ran TS on default settings except for increasing power to 450 W thru MPT. I scored 21 873 in Time Spy
> 
> So the driver improvements are certainly impressive when compared to a few months ago.


Quick run in a hot room, I'm not far off your (graphics) score. God damn these driver improvements are incredible!

My CPU score is a lot lower than it should be in this run, so can definitely improve my CPU score and I think with a better ambient I can improve my graphics score a little bit. 

Roll on the weekend!

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)


----------



## ZealotKi11er

weleh said:


> I'll tell you what, my card does not like > 500W, it will crash on GT2 no matter the Vcore I give it.
> Going lower on min clock actually gave more performance too.



I get system reboots.


----------



## weleh

ZealotKi11er said:


> I get system reboots.


That might be a PSU issue.
I had to replace my 750W Seasonic with a Corsair 1000W unit; 750W is not enough to push these cards the way we do.


----------



## ZealotKi11er

weleh said:


> That might be a PSU issue.
> I had to replace my 750W Seasonic with a Corsair 1000W unit. It's not enough to push these cards the way we do.


1600W EVGA

Fixed by limiting SOC current to 75A vs 100A


----------



## weleh

ZealotKi11er said:


> 1600W EVGA
> 
> Fixed by limiting SOC current to 75A vs 100A


Oh I see, your sig still says 750W
Outdated


----------



## jfrob75

LtMatt said:


> Quick run in a hot room, I'm not far off your (graphics) score. God damn these driver improvements are incredible!
> 
> My CPU score is a lot lower than it should be in this run, so can definitely improve my CPU score and I think with a better ambient I can improve my graphics score a little bit.
> 
> Roll on the weekend!
> 
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)


That best TS score was with the max freq set to 2820MHz and min freq set to 2700MHz. I don't think I was ever able to run TS above 2815MHz and in most cases 2810MHz was the most consistent. I have my memory at 2112MHz. I could not go above that reliably on previous driver.


----------



## lawson67

ZealotKi11er said:


> 1600W EVGA
> 
> Fixed by limiting SOC current to 75A vs 100A


I lowered my TDC Limit GFX in MPT from 300A to 270A and gained 300 graphics points just from doing that; tested it over and over again, and it was definitely that change that gave me the gain


----------



## ZealotKi11er

I cant do any serious score if its not winter. Need that -20C ambient temp.


----------



## LtMatt

jfrob75 said:


> That best TS score was with the max freq set to 2820MHz and min freq set to 2700MHz. I don't think I was ever able to run TS above 2815MHz and in most cases 2810MHz was the most consistent. I have my memory at 2112MHz. I could not go above that reliably on previous driver.


Nice clocks. My run was at 2700/2800 that’s about my limit on the core I think. Memory was at 2162Mhz. Will need to rerun it at 2150Mhz to make sure I’m not losing any performance there.


----------



## reqq

LtMatt said:


> Depends.
> 
> MBA or AIB?
> Ambient temp?
> Picture of chassis internals showing case airflow?


It's the AMD reference card from their site.. I have like the best airflow case.. Silverstone Fortress 2 with 3x180 fans in the bottom.. ambient temp I don't know.. pretty warm..

Managed to increase the GPU score to 18600 with the power limit at -10% and mV at 1120... if I run anything higher the computer just shuts itself off.. if I run lower, 3DMark crashes..


----------



## Haplo181

Took a little pep talk and coaching, but I almost got to a 24k graphics score. New overall high for me with a 10900K that I still didn't push to 5.4GHz. It can do 5.4 for Time Spy, but it is not Prime-stable.









I scored 22 022 in Time Spy: Intel Core i9-10900K, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 11 (www.3dmark.com)


----------



## majestynl

reqq said:


> By the way.. any of you play Battlefield V ? GPU refused to load more then 85%..really weird.


Locked FPS? Try uncapped; it will probably go up to close to 100%!!


----------



## J7SC

reqq said:


> its amd reference card from their site.. I have like the best air flow case.. silverstone fortress 2 with 3x180 fans in the bottom..ambient temp i dont know.. pretty warm..
> 
> manage to increase the score to 18600 gpu score with powerlimit to -10% and mv at 1120... if i run anything higher the computer just shuts itself off.. if i run lower 3dmark crashes..


What is your PSU make / rating ?


----------



## lawson67

I got a crash and a driver timeout error message in Metro Exodus Enhanced with the new drivers, using the same settings that used to be totally stable with the last drivers


----------



## CS9K

lawson67 said:


> I got a crash and a driver timeout error message in Metro Exodus Enhanced with the new drivers, using the same settings that used to be totally stable with the last drivers


Someone on a previous page had to knock 10MHz off of their max frequency, as many of us have had to do over the months. I'm down to 2715 from 2750 (in January) in the Adrenalin control panel. So long as scores stay the same or improve, I aint mad.


----------



## kairi_zeroblade

New drivers are very potent..saw some improvements on gaming as well..also on TS bench I got a +400 points uplift..finally broke 20k this time around..I hope AMD doesn't jinx it up for the next update..














Temps were the same for me.. as for the OC, it has been the same benching profile that I always use..


----------



## jonRock1992

I got around +400 points as well with the latest driver. My CPU score is lower though because I had to reset my bios and I haven't tuned my ram yet.


----------



## J7SC

jonRock1992 said:


> View attachment 2519120
> 
> 
> I got around +400 points as well with the latest driver. My CPU score is lower though because I had to reset my bios and I haven't tuned my ram yet.


Nice ! Did you notice any difference in max clocks and/or temps with the new driver ?


----------



## jonRock1992

J7SC said:


> Nice ! Did you notice any difference in max clocks and/or temps with the new driver ?


Strangely, I could add +10MHz on the max clock. 2760MHz vs. 2750MHz. Temps seem the same.


----------



## jimpsar

AMD is doing magic things in the SW dept  
This is mine with a high ambient of around 28°C; it's pretty hot outside here in Greece, around 38-40°C. 
6900xt Merc Black 









I scored 22 291 in Time Spy: AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10 (www.3dmark.com)


----------



## LtMatt

My best so far, just need 3DMark to approve this driver now. 
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)


----------



## LtMatt

I’m 118 points short of 25k graphics score, don’t think I’ve got enough in the tank to reach it though.


----------



## LtMatt

These driver gains are incredible. 

Can't go any further on graphics score unfortunately without an EVC. 

5950X @ 4.825Ghz
6900 XT Toxic 2805/2162Mhz
21.7.2

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

*SCORE 23 316 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X*


Graphics Score 24 822


CPU Score 17 352


----------



## Maulet//*//

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5900X,Gigabyte Technology Co., Ltd. X570 AORUS XTREME (3dmark.com) 
18929 points
auto undervolt in radeon suite
cpu stock
latest radeon drv
open case


----------



## lawson67

CS9K said:


> Someone on a previous page had to knock 10MHz off of their max frequency, as many of us have had to do over the months. I'm down to 2715 from 2750 (in January) in the Adrenalin control panel. So long as scores stay the same or improve, I aint mad.


I made it stable by adding an extra 10mV on Vcore; I didn't need to drop any frequencies. Furthermore, if you're having to knock off 10MHz or so with every new driver while your card is scaling correctly, and your score only remains the same, then you're not gaining anything with the new drivers, especially if you can't make your previous overclock stable. You might just be stable enough to post a better benchmark score, which does not translate into real-world stable gaming


----------



## reqq

majestynl said:


> Locked FPS? Try uncapped, probably will go up to close 100%!!


its capped in cfg at 350.



J7SC said:


> What is your PSU make / rating ?


EVGA SuperNOVA G2 850W (Gold). You think it's related? If a GPU gets too warm, does it downclock or does the computer shut off?


----------



## lawson67

reqq said:


> its capped in cfg at 350.
> 
> 
> 
> EVGA supernova g2 850w.(gold). you think its related? If a gpu get to warm does it downclock or computer shuts off?


Most of these GPUs start to throttle (downclock) at 110°C and will shut down once they hit 118°C


----------



## lestatdk

lawson67 said:


> I made it stable by adding an extra 10mV on Vcore; I didn't need to drop any frequencies. Furthermore, if you're having to knock off 10MHz or so with every new driver while your card is scaling correctly, and your score only remains the same, then you're not gaining anything with the new drivers, especially if you can't make your previous overclock stable. You might just be stable enough to post a better benchmark score, which does not translate into real-world stable gaming


Game stable and bench stable


reqq said:


> its capped in cfg at 350.
> 
> 
> 
> EVGA supernova g2 850w.(gold). you think its related? If a gpu get to warm does it downclock or computer shuts off?







Mine will hit 115+ and it will shut down at that point. I have found the limit I can push it to, but it's a fine line between a bench pass and a shutdown


----------



## jfrob75

LtMatt said:


> These driver gains are incredible.
> 
> Can't go any further on graphics score unfortunately without an EVC.
> 
> 5950X @ 4.825Ghz
> 6900 XT Toxic 2805/2162Mhz
> 21.7.2
> 
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> *SCORE 23 316 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X*
> 
> 
> Graphics Score 24 822
> 
> 
> CPU Score 17 352


Damn, saw your previous post and said to myself, I have that beat by a little bit: I scored 23 137 in Time Spy. Then I saw this post, great score!!!
It will be interesting to see if there are any more gains to be had from future driver updates.


----------



## CS9K

lawson67 said:


> I made it stable by adding an extra 10mV on Vcore; I didn't need to drop any frequencies. Furthermore, if you're having to knock off 10MHz or so with every new driver while your card is scaling correctly, and your score only remains the same, then you're not gaining anything with the new drivers, especially if you can't make your previous overclock stable. You might just be stable enough to post a better benchmark score, which does not translate into real-world stable gaming


The "reduction" in max frequency is a known thing that just happens. Prior driver versions weren't clock-stretching either; average clock speeds do drop, but TS scores also improve driver-to-driver. Not a bad thing that this happens, but it is a thing.

I overclock near the limits, but I only overclock to where it is push-button stable. The benchmarks that I achieve are using that push-button overclock; I don't overclock specifically for benching/scores, I aint got that kind of time


----------



## LtMatt

jfrob75 said:


> Damn, saw your previous post and said to self, I have that beat by a little bit, I scored 23 137 in Time Spy. Then I saw this post, great score!!!
> It will be interesting to see if there are any more gains to be had from future driver updates.


Yes, I thought I could get a bit more but not that much more tbf. 

Futuremark are taking ages to approve the driver though. 🤔

EDIT - Now showing as valid but not showing in the Hall of Fame sadly.

EDIT 2 - Hall of fame updated, No 14 in the world. My personal moment of glory, it's all downhill from hereon in.


----------



## CS9K

LtMatt said:


> EDIT 2 - Hall of fame updated, No 14 in the world. My personal moment of glory, it's all downhill from hereon in.
> 
> View attachment 2519166


Rock on! Congrats!


----------



## LtMatt

It won't last, but the fastest 6900 XT score in the world for Timespy Extreme, 61st in the world in a Nvidia stronghold.









AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

*SCORE 11 834 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X*
Graphics Score11 881
CPU Score11 579


----------



## jonRock1992

If AMD keeps doing these impressive driver optimizations, Nvidia is no longer going to take the crown. It's been a while since I've used AMD, and they've always gotten a bad rep for drivers, but so far I'm impressed.


----------



## LtMatt

jonRock1992 said:


> If AMD keeps doing these impressive driver optimization, Nvidia is no longer going to take the crown. It's been a while since I've used AMD, and they've always gotten a bad rep for drivers, but so far I'm impressed.


My Timespy score is already faster than a 3090 user with a great sample using a chiller, by almost 1000 graphics points. I think on LN2 a 6900 XT would be the fastest GPU fairly easily, with a decent sample. Timespy Extreme loves bandwidth, so that will be harder to crack despite the significant recent improvements.


----------



## blackzaru

On EKWB waterblock and backplate, my backside VRM components were getting pretty toasty (the backplate was very hot to the touch). Solved the problem with 0.5mm thermal pads and aluminum heatsinks.


----------



## amigafan2003

LtMatt said:


> It won't last, but the fastest 6900 XT score in the world for Timespy Extreme, 61st in the world in a Nvidia stronghold.


I'll be second when the HoF updates with my score


----------



## MietiOC

Hi, long time lurker here.

Yesterday I set this score with a 3950X and a 6900 XT Phantom Gaming, on air in an InWin A1, and it now shows as valid but is still not in the Hall of Fame.

I did this on air in a mini-ITX case with 37°C ambient (here in the south of Italy). The climate is getting even hotter, so unfortunately I cannot replicate the score. It should be first in the world for the 3950X + 6900 XT combo, but it's still not showing.. do you think it will ever update?

Photo of the enclosure for reference.


----------



## L!ME

Liquid Devil Ultimate @ XTXH-LC BIOS
Changes in the new MorePowerTool:
Overclocked SoC from 1200 to 1266
Overclocked FCLK from 1940 to 2177
Mem voltage 1.425V


----------



## J7SC

blackzaru said:


> On EKWB waterblock and backplate, my backside VRM components were getting pretty toasty (the backplate was very hot to the touch). Solved the problem with 0.5mm thermal pads and aluminum heatsinks.
> 
> View attachment 2519194


...picked up those very same heatsinks (2 per pack) a couple of weeks back...hope to have it all mounted soon (dual mobo w-cooling setup). Which 0.5mm thermal pads did you use ?


----------



## jonRock1992

L!ME said:


> Liquid Devil Ultimate @ XTXH-LC BIOS
> Changes on the new More Powertool
> Overclocked Soc from 1200 to 1266
> Overclocked Flck from 1940 to 2177
> Memvoltage 1.425
> View attachment 2519220


Damn that's a nice sample. Are you using an evc2?


----------



## L!ME

No EVC at the moment, but a lot of extra caps (150+).
This will just be the new 24/7 setting, 1146mV max during load.
I will go higher with my 5950X when the build is ready. The card can do 2.95GHz+ on water only.


----------



## jfrob75

LtMatt said:


> It won't last, but the fastest 6900 XT score in the world for Timespy Extreme, 61st in the world in a Nvidia stronghold.
> View attachment 2519188
> 
> 
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> *SCORE 11 834 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X*
> Graphics Score11 881
> CPU Score11 579
> View attachment 2519185


I came in 2nd, right behind you, for Time Spy Extreme: I scored 11 627. I think your 5950X is better than mine, but that's still about 1000 points better than my previous TS Extreme runs.


----------



## jfrob75

LtMatt said:


> Yes, I thought I could get a bit more but not that much more tbf.
> 
> Futurmark are taking ages to approve the driver though. 🤔
> 
> EDIT - Now showing as valid but not showing in the Hall of Fame sadly.
> 
> EDIT 2 - Hall of fame updated, No 14 in the world. My personal moment of glory, it's all downhill from hereon in.
> 
> View attachment 2519166


I am now in the Hall of fame 









My graphics score for that run is 24878, my best so far.


----------



## majestynl

L!ME said:


> Liquid Devil Ultimate @ XTXH-LC BIOS
> Changes in the new MorePowerTool:
> Overclocked SoC from 1200 to 1266
> Overclocked FCLK from 1940 to 2177
> Memory voltage 1.425V
> View attachment 2519220


Nice score!! Can you point me to that BIOS? Ty


----------



## L!ME

@majestynl

Powercolor RX 6900 XT VBIOS (16 GB GDDR6, 500 MHz GPU, 914 MHz Memory)
www.techpowerup.com

It will only work on XTXH cards! You need an external programmer to flash it.


----------



## CS9K

blackzaru said:


> On EKWB waterblock and backplate, my backside VRM components were getting pretty toasty (the backplate was very hot to the touch). Solved the problem with 0.5mm thermal pads and aluminum heatsinks.
> 
> View attachment 2519194


I _really_ like the look of this! I thought it was a brand of backplate that I had not seen before! You've done an impressive job!

And yes, the backplate does get _quite_ warm to the touch after heavy, sustained load, but I do appreciate that with EK's blocks, the temperatures stay reasonably cool during that load. Do you mind tossing a link to those heatsinks?


----------



## CS9K

L!ME said:


> No EVC at the moment, but a lot of extra caps (150+).
> This will only be the new 24/7 setting, 1146mV max during load.
> I will go higher with my 5950X when the build is ready. The card can do 2.95GHz+, but only on water.


I recall Buildzoid being upset at the lack of capacitors/input filtering on some RDNA2 boards. I was curious what approach some of the more-serious hobbyists would take to solve this. 150+ extra capacitors... yikes!

May I ask, why not fewer (but physically larger) caps? Or did you add capacitors to ALL the things \o/


----------



## L!ME

@CS9K I have done mods on every rail. When you want the lowest ESR and have to deal with limited space, sometimes it's better to go with a bunch of small ones.


----------



## HeLeX63

New TS score.

Sustained 2570MHz on the core with 408W limit and 4.8GHz all core on 5900X.


----------



## jfrob75

OMG, broke 12K on the TS Extreme graphics score and almost 12K for the overall score.
I scored 11 987 in Time Spy Extreme


----------



## chispy

Awesome scores and findings guys ! Scores are looking great in here. I will try to run 2x 6900xtxh on crossfire ( ASRock OCF + Red Devil Ultimate ) to see if i can beat my own rtx 3090 x2 sli scores 😆


----------



## chispy

Thank you guys for sharing tweaks and your findings ! Appreciate it


----------



## blackzaru

J7SC said:


> ...picked up those very same heatsinks (2 per pack) a couple of weeks back...hope to have it all mounted soon (dual mobo w-cooling setup). Which 0.5mm thermal pads did you use ?


Used 0.5mm thermal pads (100mm x 100mm sheets that I then cut to size), although I already had them on hand, so good question as to which ones they are exactly.



CS9K said:


> I _really_ like the look of this! I thought it was a brand of backplate that I had not seen before! You've done an impressive job!
> 
> And yes, the backplate does get _quite_ warm to the touch after heavy, sustained load, but I do appreciate that with EK's blocks, the temperatures stay reasonably cool during that load. Do you mind tossing a link to those heatsinks?


https://www.amazon.com/dp/B089QHQFBS/ref=twister_B089QJTGX8?_encoding=UTF8&psc=1 This. Cheap, but efficient.


----------



## LtMatt

jfrob75 said:


> OMG, broke 12K on the TS Extreme graphics score and almost 12K for the overall score.
> I scored 11 987 in Time Spy Extreme





Well done, great score; not sure I can beat that.


----------



## Memmento Mori

Just FYI - Watercool finally made a HEATKILLER V for the RX 6800/6900 XT ... only for the reference layout, but anyway  (Btw, the XFX Merc 319 Black has a custom layout, right?)


----------



## LtMatt

New personal bests on Timespy and Timespy Extreme. 

Can't catch your Extreme score though @jfrob75 

5950X @ 4.825Ghz
6900 XT Toxic 2820/2162Mhz
21.7.2

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

*SCORE 23 542 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X*
Graphics Score
25 118
CPU Score
17 368


5950X @ 4.825Ghz
6900 XT Toxic 2820/2162Mhz
21.7.2

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

*SCORE 11 981 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X*
Graphics Score
12 045
CPU Score
11 632


----------



## chispy

4th globally on 2x 6900 XTXH in CrossfireX 😁 beating almost all RTX 3090s in 2x SLI on LN2 🤣, 1x ASRock OC Formula on water + 1x PowerColor Red Devil Ultimate on stock air cooling.









I scored 37 237 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 2, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## weleh

With an evc and cold air you can probably beat #1...


----------



## majestynl

L!ME said:


> @majestynl
> 
> Powercolor RX 6900 XT VBIOS (16 GB GDDR6, 500 MHz GPU, 914 MHz Memory)
> www.techpowerup.com
> 
> It will only work on XTXH cards! You need an external programmer to flash it.


Ty, I have a Red Devil Ultimate. Just sent you a DM 🙂


----------



## chispy

weleh said:


> With an evc and cold air you can probably beat #1...


Will try on my chiller to see if I can get the #1 spot. I'm already using my EVC2  . This new driver is insane and unlocked the true power of these cards, amazing!


----------



## 6u4rdi4n

Memmento Mori said:


> Just FYI - Watercool made finally a HEATKILLER V FOR RX 6800/6900XT ... Just for Reference layout but anyway  (Btw XFX Merx319 Black has a custom layout right?)


The MERC 319 is a custom design, yes. I believe there are at least two different revisions of the MERC series, so that's also something to take into consideration when looking for a block for the MERC 319.


----------



## lawson67

I've just snagged myself an Ultimate edition of the RX 6900 XT Red Devil; it should be here next week


----------



## LtMatt

lawson67 said:


> I've just snagged myself an Ultimate edition of the RX 6900 XT Red Devil; it should be here next week


Lol, I knew you'd cave eventually seeing all these 6900 XTs Timespy scores.


----------



## lawson67

LtMatt said:


> Lol, I knew you'd cave eventually seeing all these 6900 XTs Timespy scores.


I know I love my Red Devil 6800 XT, but I can only hit a 23000 TS graphics score on the new drivers with that, and I need to see 24000 and above to help me sleep better at night  and once Majestynl told me the hotspot throttle limit was 110c, well, that just made up my mind


----------



## LtMatt

lawson67 said:


> I know I love my Red Devil 6800 XT, but I can only hit a 23000 TS graphics score on the new drivers with that, and I need to see 24000 and above to help me sleep better at night  and once Majestynl told me the hotspot throttle limit was 110c, well, that just made up my mind


Yep, a 95c limit would definitely strangle an XTXH on air, as I found out with the Merc Limited Edition.

Which UK retailer did you get it from?


----------



## lawson67

LtMatt said:


> Yep, a 95c limit would definitely strangle an XTXH on air, as I found out with the Merc Limited Edition.
> 
> Which UK retailer did you get it from?


I bought it from the link below, the only one they had in stock, which was put on their site today 
Sotel | PowerColor Radeon RX 6900 XT Red Devil Ultimate graphic card - 16 GB GDDR6, RDNA 2, GDDR6, 3x DisplayPort, 1x HDMI 2.1


----------



## CantingSoup

Getting there a little bit at a time.


----------



## jfrob75

LtMatt said:


> New personal bests on Timespy and Timespy Extreme.
> 
> Can't catch your Extreme score though @jfrob75
> 
> 5950X @ 4.825Ghz
> 6900 XT Toxic 2820/2162Mhz
> 21.7.2
> 
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> *SCORE 23 542 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X*
> Graphics Score
> 25 118
> CPU Score
> 17 368
> 
> 
> 5950X @ 4.825Ghz
> 6900 XT Toxic 2820/2162Mhz
> 21.7.2
> 
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> *SCORE 11 981 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X*
> Graphics Score
> 12 045
> CPU Score
> 11 632


A new best for TS Extreme, this time with the overall score above 12K: I scored 12 001 in Time Spy Extreme.
I could score higher, but with the summer temps my water temp is too high to run my CPU at a higher frequency and the required voltage. Your 5950X is obviously better silicon than mine, but I'm happy overall with the results. Winter should produce higher scores. My max and min GPU frequencies are 2865/2725.


----------



## LtMatt

jfrob75 said:


> A new best for TS Extreme, this time with the overall score above 12K: I scored 12 001 in Time Spy Extreme.
> I could score higher, but with the summer temps my water temp is too high to run my CPU at a higher frequency and the required voltage. Your 5950X is obviously better silicon than mine, but I'm happy overall with the results. Winter should produce higher scores. My max and min GPU frequencies are 2865/2725.


I just can't reach 12K overall sadly. So close, but I just can't eke any more out.

I can maybe try my CPU at 4.85GHz, but it would need further tuning as it crashed during the CPU test at this frequency.


----------



## CantingSoup

Anyone have tips to get my GPU clock past 2660MHz? Seems like everything I do causes it to crash.


----------



## ZealotKi11er

CantingSoup said:


> Anyone have tips to get my GPU clock past 2660MHz? Seems like everything I do causes it to crash.


You hit the silicon lottery.


----------



## jfrob75

LtMatt said:


> I just can't reach 12K overall sadly. So close, but I just can't eke any more out.
> 
> I can maybe try my CPU at 4.85GHz, but it would need further tuning as it crashed during the CPU test at this frequency.


Just checked the Hall of Fame TimeSpy graphics score and noticed you broke 25K, way to go!!! That puts you in the top 10!!!!


----------



## ZealotKi11er

Guys put [OCN] on your 3DMark name.


----------



## LtMatt

jfrob75 said:


> Just checked the Hall of Fame TimeSpy graphics score and noticed you broke 25K, way to go!!! That puts you in the top 10!!!!


Cheers, am doing well in that one. Think I am 12th place (total score) mainly up there with people on water, chillers and LN2 now.

Good old Toxic Extreme AIO with no EVC doing me proud.


----------



## ZealotKi11er

LtMatt said:


> Cheers, am doing well in that one. Think I am 12th place mainly up there with people on water, chillers and LN2 now.
> 
> Good old Toxic AIO with no EVC doing me proud.


360mm RAD is nothing impressive vs my 120mm RAD. Need to build my 5950x system to get on top 10 again.


----------



## LtMatt

ZealotKi11er said:


> 360mm RAD is nothing impressive vs my 120mm RAD. Need to build my 5950x system to get on top 10 again.


You got da faster mems though. It would be interesting to see your reported temps vs mine, to see how good that 120 is.


----------



## jfrob75

ZealotKi11er said:


> Guys put [OCN] on your 3DMark name.


How do I do that?


----------



## weleh

The Toxic rad really isn't impressive at all. It's good for ~400W with upgraded fans.

I am, however, surprised by the XTXH chip being able to do 25000s vs heavily hard-modded cards.
But then again, I think there's something going on with clock speed on these cards.

On the 7.1 drivers, while I was testing frequencies, sub-2700MHz runs would consistently give me 24000 points, which was higher than runs at much higher clock speeds.
Perhaps some core instability or something. Also not sure if my VRAM degraded somehow, or if these two new drivers (7.1 and 7.2) made my 2150MHz unstable, even while mining.

Haven't tested 7.2 much yet; only did a couple of runs yesterday.


----------



## ZealotKi11er




----------



## LtMatt

jfrob75 said:


> How do I do that?


Click Edit.


----------



## jimpsar

Wow guys, excellent scores!! Keep it up!!!
Just a question: how do I get into the HOF for the Time Spy graphics score? E.g. mine is 23834 but I cannot seem to find it.
EDIT: Thank you LtMatt! OK, done it! Put [OCN] in my name too


----------



## EastCoast

Any thoughts on the upcoming RX 6900 XT Toxic in gold and silver?
It's an XTX that is supposed to clock 100MHz lower than the WC variant. Not sure on the price or availability though.


----------



## chispy

Guys, if anyone needs a brand new, never opened, never used Red Devil RX 6900 XT full-cover water block, I will help you out and sell it at a loss for only $100 US + shipping. I have two of them; I'm using one, and the other was never opened, hence a bit of help from me and selling it at a loss, as I paid more than this price plus the shipping from China and customs fees. My loss, your gain; I want to see those Red Devils all water cooled 









*Closed*


Up for sale is some great hardware that i'm not using anymore. water blocks never been used as they are brand new. Paypal only + buyer pays the shipping. No trolling , no low balls , no returns. heatware: Trader Reviews for chispy | HeatWare.com Byski Full cover water block for Powercolor...




www.overclock.net


----------



## lawson67

chispy said:


> Guys, if anyone needs a brand new, never opened, never used Red Devil RX 6900 XT full-cover water block, I will help you out and sell it at a loss for only $100 US + shipping. I have two of them; I'm using one, and the other was never opened, hence a bit of help from me and selling it at a loss, as I paid more than this price plus the shipping from China and customs fees. My loss, your gain; I want to see those Red Devils all water cooled
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *Closed*
> 
> 
> Up for sale is some great hardware that i'm not using anymore. water blocks never been used as they are brand new. Paypal only + buyer pays the shipping. No trolling , no low balls , no returns. heatware: Trader Reviews for chispy | HeatWare.com Byski Full cover water block for Powercolor...
> 
> 
> 
> 
> www.overclock.net


If you lived in the UK I would consider buying one. I've got an Ultimate Red Devil coming next week and I'm gonna put LM on it first; if I get the same result as my 6800 XT I might not need it anyhow, but it would be nice to have if I ever do want to put it all under water in the future


----------



## cfranko

I bought a water block for my ASRock Phantom D. Currently on air it crashes at 2620MHz in Time Spy, however 2610 is stable. On water, would this change, or would it crash at the same frequency again? Also, I don't get how I am so unlucky lol, 2600 for Time Spy is horrible silicon lottery.


----------



## ZealotKi11er

cfranko said:


> I bought a water block for my ASRock Phantom D. Currently on air it crashes at 2620MHz in Time Spy, however 2610 is stable. On water, would this change, or would it crash at the same frequency again? Also, I don't get how I am so unlucky lol, 2600 for Time Spy is horrible silicon lottery.


Water would not help. Most people with 2700+ have XTXH cards so they are generally faster.


----------



## cfranko

ZealotKi11er said:


> Water would not help. Most people with 2700+ have XTXH cards so they are generally faster.


Well, at least with water the card wouldn't shut down because of a 120-degree hotspot. Right now on air at a 375W power limit it shuts down in Time Spy. It would have been really nice if water helped with clocks.


----------



## jonRock1992

cfranko said:


> I bought a water block for my ASRock Phantom D. Currently on air it crashes at 2620MHz in Time Spy, however 2610 is stable. On water, would this change, or would it crash at the same frequency again? Also, I don't get how I am so unlucky lol, 2600 for Time Spy is horrible silicon lottery.


I gained like 20MHz by going to water lol. But my clocks are wayyy more stable. It was like an automatic +1000 gpu score in Timespy just by switching to a waterblock. However, the red devil heatsink was pretty terrible on my GPU, so your results may vary.


----------



## cfranko

jonRock1992 said:


> I gained like 20MHz by going to water lol. But my clocks are wayyy more stable. It was like an automatic +1000 gpu score in Timespy just by switching to a waterblock. However, the red devil heatsink was pretty terrible on my GPU, so your results may vary.


Yeah, my heatsink is also terrible; normally an air-cooled card shouldn't thermal-shutdown at a 375W power limit. I mean, even if my clocks aren't affected by going to water, if scores increase at the same clocks, that's good enough for me


----------



## CS9K

jonRock1992 said:


> I gained like 20MHz by going to water lol. But my clocks are wayyy more stable. It was like an automatic +1000 gpu score in Timespy just by switching to a waterblock. However, the red devil heatsink was pretty terrible on my GPU, so your results may vary.


I had a similar experience with my reference RX 6800 XT before I put it under water (and before I found a buyer and swapped in the reference RX 6900 XT that I have now). The RX 6800 XT's max core clocks were the same, but it stayed pegged at the max GPU Clock and GPU Effective Clock once on water.


----------



## J7SC

jonRock1992 said:


> I gained like 20MHz by going to water lol. But my clocks are wayyy more stable. It was like an automatic +1000 gpu score in Timespy just by switching to a waterblock. However, the red devil heatsink was pretty terrible on my GPU, so your results may vary.


I actually gained a bit more than that in clocks, but really, the main story is the high clock stability under water - especially when throwing an 80W - 120W increase in PL into the mix, courtesy of MPT. Hotspot temps became the real issue when exceeding 370W or so on air. HWBot and the 3DM HoF also tell the story on temps


----------



## D1g1talEntr0py

Not bad for an XTX on air cooling
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,Gigabyte Technology Co., Ltd. X570 AORUS XTREME (3dmark.com)

*21 768 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X*
Graphics Score 22 911
CPU Score 16 974

Min: 2500MHz
Max: 2700MHz
Voltage: 1085mV
VRAM: 2150MHz (Fast Timings)
GPU PPT: 350.750W (MPT - 305W + 15% in Wattman)
Fan Max: 75%
Driver: 21.7.2
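For anyone wondering where the 350.750W figure comes from: it's just the MPT base limit with the Wattman slider applied on top. A quick sanity check, using the numbers from the post above:

```python
# GPU PPT = MPT base power limit * (1 + Wattman power-limit offset)
mpt_base_w = 305.0      # power limit set in MorePowerTool
wattman_offset = 0.15   # +15% slider in Wattman
gpu_ppt_w = round(mpt_base_w * (1 + wattman_offset), 2)
print(gpu_ppt_w)  # 350.75
```

Same math works for any MPT/Wattman combo, e.g. a stock 255W limit with +15% gives 293.25W.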


----------



## chispy

Holy mother of performance uplift 😄 , games actually got a nice boost, as did all the 3DMarks, not only Time Spy. Now the 6900 XT is faster than the RTX 3090  , tested against my own RTX 3090 Strix. Whatever the driver dev team did, it was an amazing job. Best driver ever released by AMD, and not only the new RX 6xxx series cards got a bump in performance, all AMD cards did!

Fine wine at its finest


----------



## J7SC

chispy said:


> Holy mother of performance uplift 😄 , games actually got a nice boost, as did all the 3DMarks, not only Time Spy. Now the 6900 XT is faster than the RTX 3090  , tested against my own RTX 3090 Strix. Whatever the driver dev team did, it was an amazing job. Best driver ever released by AMD, and not only the new RX 6xxx series cards got a bump in performance, all AMD cards did!
> 
> Fine wine at its finest


...I haven't even tried out the latest driver yet as both my 6900 XT and 3090 Strix are getting combined onto a Core P8 (each with their own new loop, of course), but given the previous driver improvements already, I am looking forward to trying out the new one when it is all finished (and a new XOC BIOS for the 3090 as well). 

...As I already posted before, my hope is that the AMD driver team not only continues with the rasterization gains, but also makes serious headway on ray tracing and FSR super resolution as well...


----------



## LtMatt

chispy said:


> Holy mother of performance uplift 😄 , games actually got a nice boost, as did all the 3DMarks, not only Time Spy. Now the 6900 XT is faster than the RTX 3090  , tested against my own RTX 3090 Strix. Whatever the driver dev team did, it was an amazing job. Best driver ever released by AMD, and not only the new RX 6xxx series cards got a bump in performance, all AMD cards did!
> 
> Fine wine at its finest


Very nice. Can you be specific about which games you tested, at which resolutions and image quality settings?


----------



## reqq

I found out what caused my terrible junction/hotspot temps. I have a reference 6900 XT and the hotspot immediately went up to 110c when gaming. It turns out this cooler has a terrible vapor chamber (or whatever it's called) because it can't handle vertical mounting. I turned my Silverstone case, which has a vertical mount, upside down, and the junction temp dropped 20c!!! Unbelievable, this was a thing 10 years ago!!! Just ran Time Spy at normal auto settings and the GPU score is 19179, with the hotspot at 90c and only 1300 RPM fan speed. That's up from an undervolted 17970 score at a 110c hotspot and 2000 RPM fan speed.

Imagine you go from this:









then you turn your case upside down and you get:


----------



## jfrob75

I have used the internal benchmark of Tom Clancy's Division 2 and achieved a 3.7% increase in frame rates on 1440P Ultra settings, 162 FPS to 168 FPS. I am able to set the max freq to 2890MHz.
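The 3.7% figure checks out from the raw FPS numbers, for anyone who wants to verify their own before/after runs the same way:

```python
# Percent uplift from the Division 2 benchmark numbers in the post above.
old_fps, new_fps = 162, 168
uplift = (new_fps - old_fps) / old_fps * 100
print(f"{uplift:.1f}%")  # 3.7%
```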


----------



## gamesharkdaddy

Hey guys, just looking for advice on where to start when it comes to undervolting my reference 6900 XT. I'd like to lower the temps and possibly improve performance, though I'm more focused on temps. Currently getting a GPU temp around 80C with junction temps at 90C in games like Destiny 2, but it feels like it could run cooler. I have it paired with a Ryzen 5600X. 

What is the simplest way to start improving this right off the bat? Thanks!


----------



## ZealotKi11er

If it's only 80/90, it's doing pretty well. The limits are 95/110. If you want to undervolt, just keep reducing the voltage slider by 25mV until it's no longer stable.
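That stepping-down method is just a fixed sweep from stock voltage toward a floor. As a sketch (the 1175mV stock value and 1000mV floor here are illustrative assumptions; check what your own card reports), the list of candidates to test one by one looks like:

```python
def undervolt_steps(stock_mv=1175, step_mv=25, floor_mv=1000):
    """Candidate voltages in mV, highest first.
    Test each one; keep the last value that passes your stability test."""
    return list(range(stock_mv - step_mv, floor_mv - 1, -step_mv))

print(undervolt_steps())  # [1150, 1125, 1100, 1075, 1050, 1025, 1000]
```

Once a step fails, go back up one step (+25mV) and call that your 24/7 setting.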


----------



## gamesharkdaddy

ZealotKi11er said:


> If it's only 80/90, it's doing pretty well. The limits are 95/110. If you want to undervolt, just keep reducing the voltage slider by 25mV until it's no longer stable.


Ah cool. I've also seen people increase the power limit to 15%. Is this suggested or does it increase temps as well?


----------



## ZealotKi11er

gamesharkdaddy said:


> Ah cool. I've also seen people increase the power limit to 15%. Is this suggested or does it increase temps as well?


Yeah, it will increase temps. I would keep the power limit at stock.


----------



## ossimc

Hi there.

I just got a 6900 XT reference card and I'm planning on using it as-is with the stock cooler. But I want to mod it with better thermal pads and paste to turn down fan noise as much as possible.
So I read that Gelid GC Extreme is very suitable for the GPU die. OK, but what thermal pads would I need in terms of thickness and "squishiness"?


----------



## ZealotKi11er

ossimc said:


> Hi there.
> 
> I just got a 6900 XT reference card and I'm planning on using it as-is with the stock cooler. But I want to mod it with better thermal pads and paste to turn down fan noise as much as possible.
> So I read that Gelid GC Extreme is very suitable for the GPU die. OK, but what thermal pads would I need in terms of thickness and "squishiness"?


Don't bother with the reference cooler, because it uses a thermal pad. You will never get thermal paste to perform the same.


----------



## CS9K

ZealotKi11er said:


> Don't bother with the reference cooler, because it uses a thermal pad. You will never get thermal paste to perform the same.


I would concur. I ran my reference RX 6800 XT unmodified for a while before I built my water loop. When I pulled the RX 6800 XT out of the loop and put Gelid GC-Extreme on it before selling it to a friend, core/hotspot temperatures were almost the same as the un-modified-card temperatures.

Looking at the big-picture, the AMD reference cooler is not bad by any stretch of the imagination, but the reference cooler _also_ isn't one of the ENORMOUS coolers that AIB's put on their GPU's. AIB cards will almost universally run quieter/cooler at the same power draw because of the larger heatsink.

You managed to get a reference GPU from AMD (at a not-insane price like the AIB GPU's). Just set your fan curve for the GPU and your case fans, slap on some headphones, and enjoy your new GPU.

@ossimc, if quiet is what you want with any modern 300W+ GPU, water is the way to go.


----------



## jonRock1992

I increased FCLK to 2100MHz and got a little boost in GPU score.
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)









UPDATE:
Got it a little bit higher. Settings in the description.








AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## CS9K

jonRock1992 said:


> I increased FCLK to 2100MHz and got a little boost in GPU score.
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> View attachment 2519577


It does help a little, yes. I put some tests from 21.7.1 over in one of the Igor's Lab threads. It was less than 1% in TS and Superposition.


----------



## majestynl

Let's see what this beast can do. Arriving in few days.


----------



## HeLeX63

jonRock1992 said:


> I increased FCLK to 2100MHz and got a little boost in GPU score.
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> View attachment 2519577


Nice score.

What does the FCLK do?


----------



## HeLeX63

majestynl said:


> Let's see what this beast can do. Arriving in few days.
> View attachment 2519588


OMG nice. Plz post results. Temps, max core, min etc... keen to know.


----------



## CantingSoup

jonRock1992 said:


> I increased FCLK to 2100MHz and got a little boost in GPU score.
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> View attachment 2519577


I’ll try this when I get home. Hopefully it gives me those last 10-15 points to get to 24k graphics.


----------



## jfrob75

majestynl said:


> Let's see what this beast can do. Arriving in few days.
> View attachment 2519588


That is the GPU I have, so I do not think you will be disappointed. Enjoy!!!


----------



## jonRock1992

I ended up trying 2150 MHz FCLK so that it would match the 2150 MHz memory clock that I'm using. I have no idea if matching it does anything, but it doesn't seem to affect stability. Getting some good results. My settings are in the description.
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## CS9K

As best I can tell, "Fclk" is the clock speed for the infinity cache itself. 






AMD - RED BIOS EDITOR und MorePowerTool | www.igorslab.de





I hoped to see SOME kind of improvement in performance once we got hold of the clock speed of the Infinity Cache, but it does not appear to greatly increase performance. My tests with 3dMark Time Spy and Unigine Superposition all yielded <1% performance difference, using the average of 3 tests at each fclk speed, no power limit, and manually set voltage offset and core clock/memory speed.
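The comparison described above boils down to averaging the runs at each FCLK setting and taking the percent delta against the baseline average. A minimal sketch (the score numbers here are made up for illustration, not from anyone's actual runs):

```python
def mean(xs):
    return sum(xs) / len(xs)

def pct_delta(baseline_scores, test_scores):
    """Percent change of the averaged test scores vs. the averaged baseline."""
    return (mean(test_scores) - mean(baseline_scores)) / mean(baseline_scores) * 100

# Three runs each at stock FCLK and at the raised FCLK (illustrative numbers):
delta = pct_delta([20000, 20050, 19950], [20100, 20150, 20050])
print(f"{delta:.2f}%")  # 0.50%
```

With run-to-run variance in Time Spy easily hitting a few hundred points, a sub-1% delta like this is within noise, which matches the conclusion above.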


----------



## jfrob75

CS9K said:


> As best I can tell, "Fclk" is the clock speed for the infinity cache itself.
> 
> 
> 
> 
> 
> 
> AMD - RED BIOS EDITOR und MorePowerTool | www.igorslab.de
> 
> 
> 
> 
> 
> I hoped to see SOME kind of improvement in performance once we got hold of the clock speed of the Infinity Cache, but it does not appear to greatly increase performance. My tests with 3dMark Time Spy and Unigine Superposition all yielded <1% performance difference, using the average of 3 tests at each fclk speed, no power limit, and manually set voltage offset and core clock/memory speed.


I would agree with your results. I run my FCLK at 2100, which gives about a 0.5-0.75% increase in performance. At this setting I have not had any game instabilities so far. I think I tried a setting of 2300MHz once and the card did not like it.


----------



## CantingSoup

I broke a 24k graphics score. Set FCLK to 2100 and PBO Curve Optimizer to -15 all-core with +200MHz.


----------



## gamesharkdaddy

Is this okay for a reference 6900xt and 5600x?









I scored 16 795 in Time Spy


AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## L!ME

Try increasing FCLK and SoC. My Liquid Devil Ultimate runs great with 1266 SoC and 2177 FCLK, and my VRAM works at 2380 FT2 instead of 2340 FT2 with stock SoC and FCLK.


----------



## LtMatt

Managed to improve my personal best scores a little.

Reached the limit now, would need proper water cooling or hard mods to go further. 

SCORE - AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
23 621 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
Graphics Score 25 199
CPU Score 17 436

SCORE - AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
12 060 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
Graphics Score 12 115
CPU Score 11 760


----------



## ZealotKi11er

LtMatt said:


> Managed to improve my personal best scores a little.
> 
> Reached the limit now, would need proper water cooling or hard mods to go further.
> 
> SCORE - AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 23 621 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
> Graphics Score 25 199
> CPU Score 17 436
> 
> SCORE - AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 12 060 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
> Graphics Score 12 115
> CPU Score 11 760



What is your stock core clock for the 6900 XT?


----------



## jonRock1992

LtMatt said:


> Managed to improve my personal best scores a little.
> 
> Reached the limit now, would need proper water cooling or hard mods to go further.
> 
> SCORE - AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 23 621 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
> Graphics Score 25 199
> CPU Score 17 436
> 
> SCORE - AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 12 060 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
> Graphics Score 12 115
> CPU Score 11 760


Those Toxic cards are insane. I'm not so sure a custom loop would even help much, unless the water was chilled. I can't believe you can do over 2800MHz in Timespy.


----------



## LtMatt

ZealotKi11er said:


> What is your stock core clock for 6900XT


2589MHz, so lower middle of the road in terms of averages.

I've collected feedback from users here and on other forums and have seen stock core clocks as low as 2549MHz and as high as 2669MHz.



jonRock1992 said:


> Those toxic cards are insane. I'm not so sure a custom loop would even help much. Unless the water was chilled. I can't believe you can do over 2800MHz in Timespy.


Yeah, the cooler is nice. Replacing the thermal paste really helped as well, as the stock paste had completely dried; deltas went from 30C+ at 425W to 20C after using Kryonaut.


----------



## lawson67

LtMatt said:


> 2589Mhz, so lower middle of the road in terms of averages.
> 
> Collected feedback from users here and on other forums and have seen as low as 2549Mhz and as high as 2669Mhz stock core clocks.
> 
> 
> Yeah the cooler is nice. Replacing the thermal paste really helped as well as the stock paste had completely dried and deltas went from 30c+ at 425W to 20c after using Kryonaut.


Is your card the Ultimate edition or the Extreme edition, and which is faster of the two? I see both for sale but can't find out the differences between them.


----------



## LtMatt

lawson67 said:


> Is your card the Ultimate edition or the extreme edition and which is faster out of the both???..i see both for sale but cant find out the differences between them?


It's the Extreme Edition which is XTXH. 

The Limited Edition is a regular XTX. 

Other than that, they are identical.


----------



## lawson67

LtMatt said:


> It's the Extreme Edition which is XTXH.
> 
> The Limited Edition is a regular XTX.
> 
> Other than that, they are identical.


So what's this Ultimate edition linked below, which is cheaper than the normal edition (also linked)? The XTU is cheaper than the XTX.
https://www.overclockers.co.uk/sapp...dc3103357d4-1627917651-0-gqNtZGzNAo2jcnBszQ7i

https://www.overclockers.co.uk/sapp...ddr6-pci-express-graphics-card-gx-39h-sp.html


----------



## LtMatt

lawson67 said:


> so what's this Ultimate edition linked below which is cheaper than the normal edition also linked below , the XTU is cheaper than the XTX
> https://www.overclockers.co.uk/sapp...dc3103357d4-1627917651-0-gqNtZGzNAo2jcnBszQ7i
> 
> https://www.overclockers.co.uk/sapp...ddr6-pci-express-graphics-card-gx-39h-sp.html


That's the model I have. It's actually called the Toxic Extreme, not Ultimate. Not sure why OcUK have called it that. TOXIC AMD Radeon RX 6900 XT EE 16G GDDR6 (sapphiretech.com)

It was £1799 the other day; OcUK dropped the price again.

They've dropped the price on that particular model multiple times. It was at £2199 at one point. I picked it up at £1899, so it's now £200 cheaper.

Not for the faint-hearted at this kind of money, but it's not my first rodeo - I bought 2x Radeon Pro Duos lol.


----------



## lawson67

LtMatt said:


> That's the model I have. It's actually called the Toxic Extreme, not Ultimate. Not sure why OcuK have called it that. TOXIC AMD Radeon RX 6900 XT EE 16G GDDR6 (sapphiretech.com)
> 
> It was £1799 the other day, OcuK dropped the price again.
> 
> They've dropped the price on that particular model multiple times. It was at £2199 at one point. I picked it up at £1899, so now £200 cheaper.
> 
> Not for the faint hearted at this kind of money, not my first rodeo though - I bought 2x Radeon Pro Duo's Lol.


Strange that the XTU is cheaper than the XTX. Sounds like a great deal, maybe I should get one now lol


----------



## LtMatt

lawson67 said:


> Strange why the XTU is cheaper that the XTX sounds a great deal maybe i should get one now lol


OcuK have the Limited Edition at £1799 as they only have one left. Once they have Limited (excuse the pun) stock the price goes up. 

Otherwise it would be around £100 lower than the Extreme.


----------



## lawson67

LtMatt said:


> OcuK have the Limited Edition at £1799 as they only have one left. Once they have Limited (excuse the pun) stock the price goes up.
> 
> Otherwise it would be around £100 lower than the Extreme.


My Powercolor Red Devil Ultimate is supposed to arrive on Wednesday. Maybe I'll get both, test them, and send one back under the 30-day satisfaction guarantee you get when buying online. Must admit, though, I much prefer the look of the Red Devil, or any air-cooled card, but that Toxic Ultimate is bloody fast. Post a picture of your rig with it installed, please.


----------



## J7SC

LtMatt said:


> OcuK have the Limited Edition at £1799 as they only have one left. Once they have Limited (excuse the pun) stock the price goes up.
> 
> Otherwise it would be around £100 lower than the Extreme.


In our online and physical store markets here in Canada, custom 6900 XTs (XT/XTH) seem to have somewhat better supply lately compared to custom 3090s re. availability and even some prices - though my 6900XT (3-pin, non-H) model actually went up by C$250 compared to a few months ago. NV 3090 prices also pulled up a bit again after stabilizing and even backing off for select models a few weeks back.

If I were shopping for a new primary-system GPU and looking for a 6900 XT/H, I would focus on the Gigabyte / Aorus 6900 XT Xtreme Waterforce WB, the Sapphire Toxic Extreme, and the Asus Strix 6900 XT LC TOP.


----------



## LtMatt

J7SC said:


> In our online and physical stores markets here in Canada, Custom 6900XTs (XT / H) seem to have a bit better supply lately compared to the custom 3090s re. availability and even some prices - though my 6900XT (3pin non-H) model actually went up by C$250 compared to a few months ago. NV 3090 prices also pulled up a bit again after stabilizing and even backing off for select models a few weeks back.
> 
> If I would be shopping for a new primary system GPU and be looking for a 6900XT/H, I would focus on the Gigabyte / Aorus Xtreme Waterforce WB, the Sapphire Toxic Extreme, and the Asus Strix 6900 XT LC TOP


Yep, 6900 XT (X and H) stock is plentiful in the UK now, so prices are coming down a fair bit, but there's still a way to go before we're back to pre-madness levels.

I think your recommendations are good there. I don't believe there is any binning quality difference between any of the XTXH models, just comes down to the quality of the stock cooler provided and the luck of the draw.

If you are going water, best to get the ASRock 6900 XTXH IMO, or the PowerColor Red Devil Ultimate.


----------



## lawson67

LtMatt said:


> Yep 6900 XT(X and H) stock is plentiful in the Uk now, so prices are coming down a fair bit but still a ways to go prior to the madness.
> 
> I think your recommendations are good there. I don't believe there is any binning quality difference between any of the XTXH models, just comes down to the quality of the stock cooler provided and the luck of the draw.
> 
> If you are going water, best to get the Asrock 6900 XTXH IMO, or the Power color Red Devil Ultimate.


Post a picture of your Toxic in your rig, please.


----------



## LtMatt

lawson67 said:


> My Powercolor Red devil Ultimate is supposed to be arriving on Wednesday maybe ill get both and test them and send one back with 30-day satisfaction guarantee you get when buying online, must admit though i much prefer the look of the red devil or any air cooled cards but that Ultimate Toxic is bloody fast, post a picture of your rig with it installed pls


Yep, try both and keep whichever is better.

My case cannot fit the 360 rad - I'm using a Phanteks 600s and have a 420 Arctic Freezer in the front, so it sits on the roof.

Luckily it happens to balance perfectly, so I tolerate the undesirable case setup. The RGB looks lovely though, if you like that sort of thing.


----------



## J7SC

LtMatt said:


> Yep 6900 XT(X and H) stock is plentiful in the Uk now, so prices are coming down a fair bit but still a ways to go prior to the madness.
> 
> I think your recommendations are good there. I don't believe there is any binning quality difference between any of the XTXH models, just comes down to the quality of the stock cooler provided and the luck of the draw.
> 
> If you are going water, best to get the Asrock 6900 XTXH IMO, or the Power color Red Devil Ultimate.


...yeah, the list wasn't meant to be 'exclusive' insofar as the models I didn't list are 'no good', but it did take into account my personal PCB/VRM 'preference' and what is _actually available_ here in Canada (ie rare to find Powercolor or even ASRock GPUs). And since I run full custom-loops on everything, that Giga/Aorus XTH full factory w-block looks super-yummy. I've had 2x Aorus 2080 Ti Xtreme Waterforce WB's since late '18, and their quality (and up to 380W bios) is superb...


----------



## LtMatt

J7SC said:


> ...yeah, the list wasn't meant to be 'exclusive' insofar as the models I didn't list are 'no good', but it did take into account my personal PCB/VRM 'preference' and what is _actually available_ here in Canada (ie rare to find Powercolor or even ASRock GPUs). And since I run full custom-loops on everything, that Giga/Aorus XTH full factory w-block looks super-yummy. I've had 2x Aorus 2080 Ti Xtreme Waterforce WB's since late '18, and their quality (and up to 380W bios) is superb...


Yep, fair, we both watched Buildzoid's video then.

I know someone using the Giga Extreme XTXH and he has the highest stock clock I've ever seen at 2669MHz for XTXH models.


----------



## J7SC

LtMatt said:


> Yep, fair, we both watched _Buildzoid's_ video then.
> 
> I know someone using the Giga Extreme XTXH and he has the highest stock clock I've ever seen at 2669Mhz for XTXH models.


...I sometimes force myself to watch BZ's (< note diplomatic sidestep) vids - even with his 'unique' presentation style, there are some real gems in there and a lot to learn. He does not seem to like Gigabyte GPU PCBs much, though...I almost take that as a recommendation  since I ended up with two different models not to his liking over the years....

...My 6900XT Gaming OC has the same PCB as the Aorus 6900 XTH WF WB model (input filtering, yeah !) and this non-H model has seen well beyond 2700 MHz in some apps, along with 400 W+. The 2x 2080 Ti with PCBs which BZ didn't like, well, I hit as high as 15th in 3dM HoF totally stock for PortRoyal (spoiler) - and stayed in that HoF table _for 1 1/2 years_ until the 3090s came along...point being that once I mounted those GPUs, I never removed them again from the mobo, and their stock bios on water seems to be more than enough. The waterblocks are semi-transparent and even all this time later, they are pristine w/ no increase in temps over ambient...with that in mind, the Giga/Aorus 6900XT Xtreme WF WB would be on my very, very short list.


Spoiler


----------



## LtMatt

J7SC said:


> ...I sometimes force myself to watch BZ's (< note diplomatic sidestep) vids - even with his 'unique' presentation style, there are some real gems in there to learn a lot. He does not seem to like Gigabyte GPU PCBs much, though...I almost take that as a recommendation  since I ended up with two different models not to his liking over the years....
> 
> ...My 6900XT Gaming OC has the same PCB as the Aorus 6900 XTH WF WB model (input filtering, yeah !) and this non-H model has seen well beyond 2700 MHz in some apps, along with 400 W+. The 2x 2080 Ti with PCBs which BZ didn't like, well, I hit as high as 15th in 3dM HoF totally stock for PortRoyal (spoiler) - and stayed in that HoF table_ for 1 1/2 years _until the 3090s came along_._..point being that once I mounted those GPUs, I never removed them again from the mobo, and their stock bios on water seems to be more than enough. The waterblocks are semi-transparent and even all this time later, they are pristine w/ no increase in temps over ambient...with that in mind, the Giga/Aorus 6900XT Xtreme WF WB would be on my very, very short list.
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2519682


Furry muff can't argue with those results and I can understand you sticking with what has worked well in the past. Makes complete sense. 

To be fair, not sure how important input filtering is for normal folk who don't hard-mod.


----------



## LtMatt

Now I've maxed out my Timespy scores, thought I'd try Firestrike as not touched it in a while.

I am not convinced, however, that 21.7.2 is the best driver for Firestrike. Has anyone tested older vs newer drivers?

5950X @ 4.825Ghz
6900 XT Toxic 2820/2162Mhz
21.7.2

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

SCORE 31 359 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
Graphics Score
33 390
Physics Score
43 635
Combined Score
16 698

5950X @ 4.825Ghz
6900 XT Toxic 2820/2162Mhz
21.7.2

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

SCORE 16 722 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
Graphics Score
16 793
Physics Score
43 620
Combined Score
8 548


----------



## lestatdk

My FS score is slightly lower with the new drivers.


----------



## weleh

I'm not sure, but the 21.6 driver, or the one just below it, was the best for FS when I tested.


----------



## tps3443

AMD 6900XT Liquid Devils are IN STOCK for $1,899


$1,899 at EKWB










PowerColor Liquid Devil Radeon RX 6900 XT


The prestigious Devil series graphics card from PowerColor just got even cooler! The PowerColor Liquid Devil Radeon™ RX 6800 XT is the most advanced AMD® Radeon-based graphics card on the market to date. All thanks to the custom-designed PCB by PowerColor and the full-cover EK® water block. The...




www.ekwb.com


----------



## jonRock1992

Is it important to cool the back of the GPU core on these GPUs? My Red Devil Ultimate had a cutout in the backplate so it could breathe, but my Bykski backplate is fully enclosed with no thermal pads touching the backplate.


----------



## gamesharkdaddy

No idea how you guys are able to run stable voltage under 1150. I'm using a reference 6900XT with fans at a max of 80%. Do you all think this is good for day-to-day standard gaming?









I scored 17 251 in Time Spy


AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## LtMatt

gamesharkdaddy said:


> No idea how you guys are able to run stable voltage under 1150. I'm using a reference 6900xt with fans 80% at max. Do you all think this is good for day to day standard gaming?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 17 251 in Time Spy
> 
> 
> AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


I never go below 1.2v when benching. Balls to the wall or go home, assuming you have a decent cooler.

Looks fine to me for everyday gaming.


----------



## gamesharkdaddy

LtMatt said:


> I never go below 1.2v when benching. Balls to the wall or gone home, assuming you have a cooler that is decent.
> 
> Looks fine to me for everyday gaming.


Lol, that's why I love what you guys do.


----------



## majestynl

HeLeX63 said:


> OMG nice. Plz post results. Temps, max core, min etc... keen to know.





jfrob75 said:


> That is the GPU I have, so I do not think you will be disappointed. Enjoy!!!


Sending it back. Unfortunately the DP output didn't work, and the card was gold instead of black as in the picture 

Stock clock 2589MHz.
Poor TS scores even with some power tuning.

Need to find another card. Any suggestions? Also need a block for it or one that's already attached.


----------



## lawson67

My best score so far, and number 11 in the world with a Ryzen 7 5800X and a single RX 6800 XT. Looking forward to my Powercolor RX 6900 XT Ultimate coming tomorrow.

I scored 20 220 in Time Spy


----------



## majestynl

Does anyone know whether the Red Devil water block from Alphacool or Bykski will fit the Red Devil Ultimate? Looks like the same PCB, but I don't know for sure.









Alphacool Eisblock Aurora Acryl GPX-A Radeon RX 6800(XT)/6900XT Red Devil mit Backplate


The Alphacool Eisblock Aurora Acryl GPX graphics-card water cooler with backplate combines style with performance. It is distinguished by extreme cooling performance and extensive digital RGB lighting. Experience and technical know-how from...




www.alphacool.com


----------



## lawson67

majestynl said:


> Does someone know if the Red Devil Waterblock from Alphacool or Byski will fit the Red Devil Ultimate. Looks like same PCB. Don't know for sure.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Alphacool Eisblock Aurora Acryl GPX-A Radeon RX 6800(XT)/6900XT Red Devil mit Backplate
> 
> 
> The Alphacool Eisblock Aurora Acryl GPX graphics-card water cooler with backplate combines style with performance. It is distinguished by extreme cooling performance and extensive digital RGB lighting. Experience and technical know-how from...
> 
> 
> 
> 
> www.alphacool.com


Yes, it will fit; they are indeed the same PCB.


----------



## HeLeX63

majestynl said:


> Does someone know if the Red Devil Waterblock from Alphacool or Byski will fit the Red Devil Ultimate. Looks like same PCB. Don't know for sure.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Alphacool Eisblock Aurora Acryl GPX-A Radeon RX 6800(XT)/6900XT Red Devil mit Backplate
> 
> 
> The Alphacool Eisblock Aurora Acryl GPX graphics-card water cooler with backplate combines style with performance. It is distinguished by extreme cooling performance and extensive digital RGB lighting. Experience and technical know-how from...
> 
> 
> 
> 
> www.alphacool.com


Yes it will. I have a Red Devil Alphacool block. Ultimate edition has the XTXH die, everything else should be identical.


----------



## HeLeX63

majestynl said:


> Sending it back. Unfortunately the DP output didn't work and the card was gold instead of black as the picture
> 
> Stock clocks 2589mhz.
> Poor TS scores even with some power tuning..
> 
> Need to find another card. Any suggestions? Also need a block for it or one that's already attached.


That is ultra disappointing


----------



## jonRock1992

majestynl said:


> Does someone know if the Red Devil Waterblock from Alphacool or Byski will fit the Red Devil Ultimate. Looks like same PCB. Don't know for sure.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Alphacool Eisblock Aurora Acryl GPX-A Radeon RX 6800(XT)/6900XT Red Devil mit Backplate
> 
> 
> The Alphacool Eisblock Aurora Acryl GPX graphics-card water cooler with backplate combines style with performance. It is distinguished by extreme cooling performance and extensive digital RGB lighting. Experience and technical know-how from...
> 
> 
> 
> 
> www.alphacool.com


I have the Bykski Red Devil waterblock installed on my Red Devil Ultimate. I can confirm it works great. No issues for me. I get 24.5k+ Timespy graphics score with it, so it seems to be doing its job.


----------



## LtMatt

jonRock1992 said:


> I have the Bykski Red Devil waterblock installed on my Red Devil Ultimate. I can confirm it works great. No issues for me. I get 24.5k+ Timespy graphics score with it, so it seems to be doing its job.


What delta do you see at 375-400W between edge and junction with your block?


----------



## kazukun

*EK Launches Vector Water Blocks for PowerColor Red Devil RX 6800 & RX 6900 Cards*









EK Launches Vector Water Blocks for PowerColor Red Devil RX 6800 & RX 6900 Cards


EK, the leading computer cooling solutions provider, is launching the new EK-Quantum Vector Red Devil water block made for the PowerColor Red Devil version of the AMD Radeon RX 6800, RX 6800 XT, and 6900 XT graphics cards. This 2nd-generation EK-Quantum Vector water block implements an Open...




www.techpowerup.com


----------



## CantingSoup

I'm #2 on Time Spy Extreme for 5800X/6900XT single GPU.


----------



## jonRock1992

LtMatt said:


> What delta to do you see at 375-400W between Edge and Junction with your block?


It's usually around a 10C delta in games, but in Timespy GT2 I've seen deltas in the 20s.


----------



## J7SC

kazukun said:


> *EK Launches Vector Water Blocks for PowerColor Red Devil RX 6800 & RX 6900 Cards*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EK Launches Vector Water Blocks for PowerColor Red Devil RX 6800 & RX 6900 Cards
> 
> 
> EK, the leading computer cooling solutions provider, is launching the new EK-Quantum Vector Red Devil water block made for the PowerColor Red Devil version of the AMD Radeon RX 6800, RX 6800 XT, and 6900 XT graphics cards. This 2nd-generation EK-Quantum Vector water block implements an Open...
> 
> 
> 
> 
> www.techpowerup.com


I have my reservations about EK blocks as of late.


----------



## LtMatt

Is anyone else hoping the next driver brings yet more Timespy improvements?  

Surely it can't get better, can it...


----------



## lestatdk

LtMatt said:


> Is anyone else hoping the next driver brings yet more Timespy improvements?
> 
> Surely it can't get better, can it...


Or, can it ???


----------



## J7SC

lestatdk said:


> Or, can it ???


...the plot thickens ! Also, in addition to TS/TSX, Super res + ray tracing improvements, please


----------



## Dijati

Waterblock for RX 6900XT Nitro+SE / Toxic Limited and Extreme Edition released by Bykski.

All the cards use the same PCB. I ordered the block two days ago for my Toxic EE, since the AIO cooler is crap.









US $123.17, 10% off | Bykski GPU water block for SAPPHIRE RADEON RX 6900 XT 16GB NITRO+ SPECIAL EDITION / RX6950XT Nitro+ graphics card cooling | Fans & Cooling - AliExpress


Smarter Shopping, Better Living! Aliexpress.com




de.aliexpress.com


----------



## lawson67

I am very impressed with FSR. I got a game called Chernobylite, one of the first games to implement FSR, and the high-quality FSR setting at 4K takes the frames from about 70fps to about 120fps+, and I simply cannot tell the difference at 4K.
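For context on why the uplift is so large: FSR 1.0's quality modes each render internally at a fixed fraction of the target resolution and upscale from there. A quick sketch of the internal render resolutions at 4K, using the per-axis scale factors AMD published for FSR 1.0 (the `internal_resolution` helper is just illustrative):

```python
# FSR 1.0 renders at a lower internal resolution and spatially upscales
# to the target. Per-axis scale factors per AMD's FSR 1.0 documentation.
FSR_MODES = {
    "Ultra Quality": 1.3,
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
}

def internal_resolution(target_w, target_h, mode):
    """Return the internal render resolution FSR upscales from."""
    s = FSR_MODES[mode]
    return round(target_w / s), round(target_h / s)

for mode in FSR_MODES:
    w, h = internal_resolution(3840, 2160, mode)
    print(f"{mode:>13}: {w}x{h}")
```

So even the Quality mode at 4K is shading only 1440p worth of pixels, which lines up with jumps like 70fps to 120fps in GPU-bound games.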


----------



## jonRock1992

lawson67 said:


> I am very impressed with FSR i got a game called Chernobylite its one of the first games to implement FSR and the setting (high quality FSR) at 4k takes the frames from about 70fps to about 120fps plus and i simply can not tell the difference at 4K


I've only used it in one game (Resident Evil Village). It looks really bad in that game though because of the crappy TAA implementation, and good FSR quality relies on a good TAA implementation. I was thinking about getting Chernobylite because I'm a huge fan of the setting.


----------



## lawson67

jonRock1992 said:


> I've only used it in one game (Resident Evil Village). It looks really bad in that game though because of the crappy TAA implementation, and good FSR quality relies on a good TAA implementation. I was thinking about getting Chernobylite because I'm a huge fan of the setting.


It seems a really good game. I am a few hours in now and really enjoying it, kind of a bit Metro Exodus, but it seems to run your GPU at max watts 95% of the time, so it gets very warm lol


----------



## tps3443

Did anyone pick up one of the 6900XT Liquid Devils for $1,899?

I thought that was a pretty good deal!!

Well, not the Liquid Devil Ultimate for $2,899 anyway lol.

I really wonder why they sell two Liquid Devil 6900XTs but one costs $1,000 more for essentially the same thing?! That's an entire extra 6900XT lol.

You'd be better off buying the $1,899 Liquid Devil, then buying a reference AMD 6900XT for $999, then making a video of yourself setting it on fire and smashing it to bits, and uploading it to YouTube for millions of views.

You could call the video..

”Video cards finally available, letting out my frustration”

Or buy (3) reference models and bin the cards yourself. Return the others.


6900XT Liquid Devil
= $1,899

6900XT Liquid Devil Ultimate
=$2,899


----------



## HeLeX63

tps3443 said:


> Did anyone pick up one of the 6900XT Liquid devil(s) for $1,899?
> 
> I thought that was a pretty good deal!!
> 
> Well, not the Liquid Devil ultimate for $2,899 anyways lol.
> 
> I really wonder why they sell two liquid devil 6900XT’s but one cost $1,000 dollars more expensive, for the exact same thing?! That’s an entire 6900XT extra lol.
> 
> You’d be better off buying the $1,899 model 6900XT Liquid Devil, then buying a reference AMD 6900XT for $999. Then make a video of you self setting it on fire and smashing it to bits, and uploading it on YouTube for millions of views.
> 
> You could call the video..
> 
> ”Video cards finally available, letting out my frustration”
> 
> Or buy (3) reference models and bin the card your self. Return the others.
> 
> 
> 6900XT Liquid Devil
> = $1,899
> 
> 6900XT Liquid Devil Ultimate
> =$2,899


AMD have segmented their GPU dies. Out of a typical wafer, certain dies can achieve much, much higher clock speeds and memory speeds than your typical 6900XT. You pay extra $$ for a guaranteed higher bin and far more overclocking potential. That's the difference in the price.


----------



## lawson67

So my Powercolor Red Devil Ultimate RX 6900 XT has finally arrived. Just installed it, and out of the box it sits at 2604MHz. Now to see what it can do


----------



## HeLeX63

lawson67 said:


> So my Powercolor RX 6900 XT has finally arrived, just installed it and out the box it sits at 2604mhz, right now to see what it can do
> 
> View attachment 2519981


Nice. I found that the initial clock speeds mean nothing... Some guy on YouTube had a standard 6900XT bin at 2484MHz but could set 2700MHz max, yet mine has a 2504MHz initial clock and can only max out at 2650MHz in TS Extreme.


----------



## LtMatt

HeLeX63 said:


> Nice. I found that the initial clock speeds mean nothing... Some guy on YouTube had a standard 6900XT bin at 2484MHz but could set 2700MHz max, yet mine has a 2504MHz initial clock and can only max out at 2650MHz in TS Extreme.


Found similar; my Merc had 2479MHz and could hit 2700. It's almost as if lower is sometimes better, bizarrely. At least on air.


----------



## lawson67

LtMatt said:


> Found similar, my Merc had 2479Mhz and could hit 2700. It’s almost as if sometimes lower is better, bizarrely. At least on air.


Just pushing the frequency up by 50MHz at a time. So far I can run the Time Spy benchmark at min 2750MHz and max 2850MHz; gonna try 2900MHz next. Oh, and temps are very good indeed at over 400W
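Stepping up 50MHz at a time is effectively a linear search for the highest stable clock; the same hunt can be framed as a binary search that converges in fewer benchmark runs. A rough sketch, where `passes_benchmark` is a hypothetical stand-in for actually running Time Spy at a given max clock:

```python
def find_max_stable_clock(lo, hi, passes_benchmark, step=5):
    """Binary-search the highest clock (MHz) that passes a stability test.

    lo: a clock known to be stable; hi: a clock assumed to fail.
    passes_benchmark: callable taking a clock, True if the run completes.
    step: MHz resolution at which to stop narrowing.
    """
    while hi - lo > step:
        mid = (lo + hi) // 2
        if passes_benchmark(mid):
            lo = mid  # run completed: raise the known-good floor
        else:
            hi = mid  # crash/artifacts: lower the ceiling
    return lo

# Toy stand-in: pretend anything up to 2860MHz completes the benchmark.
print(find_max_stable_clock(2700, 3000, lambda c: c <= 2860))  # → 2859
```

In practice each "probe" is a full benchmark run plus a sanity check for artifacts, so halving the number of probes saves real time.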


----------



## LtMatt

lawson67 said:


> Just pushing up frequency by 50mhz at a time so far i can do time spy benchmark at min 2750mhz and max 2850mhz, gonna try 2900mhz next, oh and temps are very good indeed over 400w
> 
> View attachment 2519986


Looks like you got a good one there, but what is your graphics score? Curious to see if the high clock is scaling. You're hitting 95C, which is the throttle temp on all XTXH cards bar the PowerColor 

Still, it looks promising to me.


----------



## majestynl

Much better clocks than my Liquid Ultimate. On Time Spy I could only set a max of 2740, and a min of 2450 gave the highest GS scores. 

Sometimes I also can't see a logical correlation between clocks (average or set) and the TS scores, since the Aorus version had higher clocks but fewer points in TS compared to my Liquid Ultimate...


----------



## weleh

95C hotspot at 400W is good? I don't think so, tbh. Don't forget XTXH cards throttle at a 95C hotspot, not 115C like XTX.
It's a shame all of these XTXH cards on air are bottlenecked by cooling. They really deserve some water.


----------



## LtMatt

weleh said:


> 95C hotspot at 400W is good? I don't think so tbh. Don't forget XTXH cards throttle at 95C hotspot not 115C like XTX.
> It's a shame all of these XTXH cards on air are bottlenecked by cooling. They really deserve some water.


For some reason the Red Devil XTXH has a max hotspot temp of 110C (like all XTX) before throttling. 

I can only presume that it is a mistake and it should be 95C like the other cards.


----------



## weleh

Yea, mistake or not, 95C hotspot is too warm for my liking. 
Increasing operating voltages and not taking care of cooling was a bad decision.

XTXH cards should be LC-mandatory or reference design so you can get blocks easily.

The 18Gbps reference model is probably an XTXH bin judging by the core clocks some of them reach unmodded.


----------



## LtMatt

weleh said:


> Yea, mistake or not, 95C hotspot is too warm for my liking.
> Increasing operating voltages and not taking care of cooling was a bad decision.
> 
> XTXH cards should be LC-mandatory or reference design so you can get blocks easily.
> 
> The 18Gbps reference model is probably an XTXH bin judging by the core clocks some of them reach unmodded.


Agree and agree.

Power usage climbs massively at this temp too. You can use significantly fewer watts by keeping the hotspot temp low.
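The temperature/power feedback described here comes mainly from leakage current, which grows roughly exponentially with die temperature. A toy model only, with illustrative numbers (the "doubling every N degrees" interval is a generic rule of thumb, not a measured RDNA2 figure):

```python
def total_power(dynamic_w, leakage_w_at_25c, temp_c, doubling_c=30.0):
    """Toy model: dynamic power is treated as temperature-independent,
    while leakage grows exponentially, doubling every `doubling_c` degrees."""
    leakage = leakage_w_at_25c * 2 ** ((temp_c - 25) / doubling_c)
    return dynamic_w + leakage

# Same workload, hotspot held at 60C vs 95C (illustrative numbers):
cool = total_power(330, 30, 60)
hot = total_power(330, 30, 95)
print(f"{cool:.0f}W vs {hot:.0f}W")  # prints "397W vs 481W"
```

The exact numbers are made up, but the shape is the point: at a high hotspot, leakage eats a meaningful chunk of the power limit that could otherwise go to clocks, which is why watercooled cards often clock higher at the same wattage.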


----------



## jonRock1992

lawson67 said:


> Just pushing up frequency by 50mhz at a time so far i can do time spy benchmark at min 2750mhz and max 2850mhz, gonna try 2900mhz next, oh and temps are very good indeed over 400w
> 
> View attachment 2519986


Sounds like you got a good bin! I can only do a Max of 2770MHz in Timespy with my red devil ultimate. It still scores pretty decent though.


----------



## J7SC

...don't know if it has been mentioned yet here, but Igor's Lab has MPT beta 6 out...just downloaded it but haven't installed it yet...so I don't know if VRAM for normal XT can be pushed beyond 2150 w/o gimping GPU OC


----------



## lestatdk

J7SC said:


> ...don't know if it has been mentioned yet here, but Igor's Lab has MPT beta 6 out...just downloaded it but haven't installed it yet...so I don't know if VRAM for normal XT can be pushed beyond 2150 w/o gimping GPU OC


Doesn't look like it


----------



## LtMatt

We need more updates from @lawson67


----------



## lawson67

LtMatt said:


> Looks like you got a good one there, but what is your graphics score? Curious to see if the high clock is scaling. Hitting 95c which is throttle temp on all XTXH cards bar the powercolor
> 
> Still though, it looks promising to me.


Only been able to get a couple of runs in so far as I'm really busy with work today, so I will play more tonight. It's gonna take days, maybe a few weeks, before I can find out what voltages etc. this sample likes, but here is a quick and dirty OC score.

As for 95C being too hot, I trust AMD when they say their GPU can happily run at 110C all day long, so being at 95C does not bother me one bit, and as weleh said only a few days back, his card doesn't scale better with lower temps! As long as I am nowhere near 110C I am happy, and right now I am 15C below throttle temp and 23C below shutdown temp, so don't worry weleh, I can promise you the GPU is not about to melt


----------



## LtMatt

lawson67 said:


> Only been able to get a couple of runs in so far as I'm really busy with work today, so I will play more tonight. It's gonna take days, maybe a few weeks, before I can find out what voltages etc. this sample likes, but here is a quick and dirty OC score.
> 
> As for 95C being too hot, I trust AMD when they say their GPU can happily run at 110C all day long, so being at 95C does not bother me one bit, and as weleh said only a few days back, his card doesn't scale better with lower temps! As long as I am nowhere near 110C I am happy, and right now I am 15C below throttle temp and 23C below shutdown temp, so don't worry weleh, I can promise you the GPU is not about to melt
> 
> View attachment 2520014


Decent score. What clocks is that with? 

Be interested to see what more you can get from it.


----------



## lawson67

LtMatt said:


> Decent score. What clocks is that with?
> 
> Be interested to see what more you can get from it.


That was with 2700MHz min / 2800MHz max. I just need more time to mess with MPT etc.


----------



## jonRock1992

I bet with a waterblock and 420W PL you'd be able to break 25K with that GPU. I unfortunately feel like I got a lower bin of XTXH and I won't be able to break 25K without pushing more than 1.2V. My GPU is stable at 2840MHz max clock in games with 420W PL, but Timespy is just too demanding for that clock. The auto OC in Wattman actually calculates 2840MHz for me, and I confirmed it is spot on for gaming but not benching.


----------



## lawson67

jonRock1992 said:


> I bet with a waterblock and 420W PL you'd be able to break 25K with that GPU. I unfortunately feel like I got a lower bin of XTXH and I won't be able to break 25K without pushing more than 1.2V. My GPU is stable at 2840MHz max clock in games with 420W PL, but Timespy is just too demanding for that clock. The auto OC in Wattman actually calculates 2840MHz for me, and I confirmed it is spot on for gaming but not benching.


I might consider a water block for it, but even with what it's doing right now with a quick OC it's still a good score. However, are they really worth it? I am only 10fps higher in Shadow of the Tomb Raider than my RX 6800 XT; that's really not all that much extra real-life gaming speed for all the extra money you pay for them over an RX 6800 XT.


----------



## jonRock1992

lawson67 said:


> I might consider a water block for it, but even with what it's doing right now with a quick OC it's still a good score. However, are they really worth it? I am only 10fps higher in Shadow of the Tomb Raider than my RX 6800 XT; that's really not all that much extra real-life gaming speed for all the extra money you pay for them over an RX 6800 XT.


I mean for just gaming, I wouldn't bother with the waterblock if your temps are in check. Did you put liquid metal on it yet? My card had terrible temps out of the box and it pretty much required a waterblock if I was going to do any meaningful overclocking.


----------



## lawson67

jonRock1992 said:


> I mean for just gaming, I wouldn't bother with the waterblock if your temps are in check. Did you put liquid metal on it yet?


No, not yet, and I really don't feel the need to crack it open and apply LM if the temps stay like this. I only really got it as I wanted to test an XTXH and see how much I could get out of it, and how much real-life extra speed in games it would give. So let's say I can get over 25k with it in the end; that will be at most only another 1-2fps on top of the extra 10fps I have now, so 12fps more than my RX 6800 XT. The question I am asking myself is: are they really worth the extra money? If I keep this and sell my RX 6800 XT for, say, £1000, this cost me £600 for an extra 12fps over my RX 6800 XT.


----------



## jonRock1992

lawson67 said:


> No, not yet, and I really don't feel the need to crack it open and apply LM if the temps stay like this. I only really got it as I wanted to test an XTXH and see how much I could get out of it, and how much real-life extra speed in games it would give. So let's say I can get over 25k with it in the end; that will be at most only another 1-2fps on top of the extra 10fps I have now, so 12fps more than my RX 6800 XT. The question I am asking myself is: are they really worth the extra money? If I keep this and sell my RX 6800 XT for, say, £1000, this cost me £600 for an extra 12fps over my RX 6800 XT.


I see. Makes sense. I needed the waterblock though. The mount on the stock cooler was very bad and I got temp throttling and unstable clocks. It was also EXTREMELY loud. I probably could have RMA'd it, but you know, right to repair and what not. So I fixed it myself with a waterblock. I use open-back headphones (Hifiman Sundara) and I could hear the Red Devil cooler over the audio with my headphones on. With the waterblock, 360mm rad, and three A12x25's, it's virtually silent and I can't hear my PC over the audio in my headphones.


----------



## cfranko

I repasted my 6900 XT Phantom Gaming using MX-4, but I can't get the delta between the edge and hotspot temperature down to 20 degrees. Above 300 watts the delta is 30 degrees Celsius. It was not this bad in winter; it was much better. Can it be caused by my room temperature? It's 33 degrees inside.
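Thinking it through with a toy model (the C/W numbers below are made up for illustration, not measured on my card): ambient should raise the edge and hotspot readings together, so the delta ought to depend mostly on power, not room temperature:

```python
# Simple lumped thermal model: each sensor sits above ambient by
# power * thermal_resistance; the hotspot just has a higher resistance.
def sensor_temps(power_w, ambient_c, r_edge=0.11, r_hotspot=0.20):
    """Return (edge, hotspot) temps. The r_* values are illustrative C/W."""
    edge = ambient_c + power_w * r_edge
    hotspot = ambient_c + power_w * r_hotspot
    return edge, hotspot

# Ambient moves both readings equally, so the delta stays the same...
e1, h1 = sensor_temps(300, ambient_c=20)
e2, h2 = sensor_temps(300, ambient_c=33)
assert round(h1 - e1, 1) == round(h2 - e2, 1)

# ...while more power widens it.
e3, h3 = sensor_temps(350, ambient_c=33)
print(h3 - e3 > h2 - e2)  # True
```

So if the delta itself got worse since winter at the same wattage, it points at contact/paste rather than the hot room.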


----------



## majestynl

cfranko said:


> I repasted my 6900 XT Phantom Gaming using MX-4, but I can't get the delta between the edge and hotspot temperature down to 20 degrees. Above 300 watts the delta is 30 degrees Celsius. It was not this bad in winter; it was much better. Can it be caused by my room temperature? It's 33 degrees inside.


Looks more like a die contact issue... check if the pads are too high, or remove/add washers on the screws around the die.


----------



## majestynl

jonRock1992 said:


> I bet with a waterblock and 420W PL you'd be able to break 25K with that GPU. I unfortunately feel like I got a lower bin of XTXH and I won't be able to break 25K without pushing more than 1.2V. My GPU is stable at 2840MHz max clock in games with 420W PL, but Timespy is just too demanding for that clock. The auto OC in Wattman actually calculates 2840MHz for me, and I confirmed it is spot on for gaming but not benching.


This was the same with my Ultimate Liquid. 2749 max TS, games 2800+ !

Personally i suspect the droop!! More droop on higher watts.. We need LLC


----------



## cfranko

majestynl said:


> Looks more like a die contact issue... check if the pads are too high, or remove/add washers on the screws around the die.


I didn’t see any washers at all when opening up the card, are there supposed to be washers?


----------



## cfranko

majestynl said:


> Looks more like a die contact issue... check if the pads are too high, or remove/add washers on the screws around the die.


Oh also, if it was a die contact issue wouldn't it thermal shutdown even in a very light load?


----------



## jonRock1992

majestynl said:


> This was the same with my Ultimate Liquid. 2749 max TS, games 2800+ !
> 
> Personally i suspect the droop!! More droop on higher watts.. We need LLC


Would an evc2 fix vdroop?


----------



## majestynl

cfranko said:


> I didn’t see any washers at all when opening up the card, are there supposed to be washers?


Yeah, so you could try adding plastic ones.
It helps in certain cases to get better contact.



cfranko said:


> Oh also, if it was a die contact issue wouldn't it thermal shutdown even in a very light load?


Nope not always. Depends how bad the contact is.



jonRock1992 said:


> Would an evc2 fix vdroop?


No, droop is a silicon thing, mostly unique to your specific chip. An EVC2 can give you the opportunity to add more voltage, though.


----------



## jonRock1992

Well I ended up repasting with liquid metal and replacing my radiator. I got a huge drop in hotspot temp. Dropped by around 15C. I can also bench with 10MHz more on my min / max core clocks. My Timespy graphics score went up by 176 points as a result.




















----------



## lawson67

jonRock1992 said:


> Well I ended up repasting with liquid metal and replacing my radiator. I got a huge drop in hotspot temp. Dropped by around 15C. I can also bench with 10MHz more on my min / max core clocks. My Timespy graphics score went up by 176 points as a result.
> 
> View attachment 2520118
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 


Yep, and that's how much LM can bring your temps down if applied correctly. Well done, Jon.


----------



## lawson67

I am in two minds whether to keep my new Red Devil Ultimate or return it. It's definitely a very well binned card; it can run Timespy at 2850MHz and I believe it will certainly hit over 25k in TS. However, to get that score, or just to get the best out of it, I need a water block on it, so I need to buy a water cooling kit, and I might as well loop the whole system.

All this will still only net me 10-12fps more than my RX 6800 XT. Right now I can hit 24500 points in TS with the Ultimate, which gives me 10fps over my RX 6800 XT in all my games that have benchmarks. So if I keep the RX 6900 XT it will cost me £600, as I believe I should get £1000 for my RX 6800 XT. Now add on top a water loop, which will cost about £650, so I will pay around £1250 for 10 to 12fps; even with a 25k score in TS it will still only be a 10-12fps increase in games over my RX 6800 XT. Is it worth it? Clearly not. But maybe I need some convincing, and adding a water loop might be a fun project. So come on guys, what are your thoughts? Keep it and build a loop, or send it back?
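To put my back-of-the-envelope maths in one place, here it is as a quick sketch; the prices are my own estimates and the fps gain is the 10-12 I'm seeing, so treat it as illustrative only:

```python
# Rough cost per extra frame for keeping the 6900 XT and building a loop.
card_net_cost = 600        # what keeping the 6900 XT costs me after
                           # selling the 6800 XT for ~£1000
loop_cost = 650            # full Alphacool loop, roughly
extra_fps = 11             # midpoint of the 10-12 fps gain I measured

total = card_net_cost + loop_cost
per_fps = total / extra_fps
print(f"£{total} total, roughly £{per_fps:.0f} per extra fps")
```

Seeing it written out like that, over a hundred pounds per frame is hard to justify on gaming alone; it only makes sense if the loop itself is the fun part.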


----------



## weleh

LM is a good way to tame down hotspot temp on RDNA2. Same thing happened with RDNA1.


----------



## jonRock1992

lawson67 said:


> I am in two minds whether to keep my new Red Devil Ultimate or return it. It's definitely a very well binned card; it can run Timespy at 2850MHz and I believe it will certainly hit over 25k in TS. However, to get that score, or just to get the best out of it, I need a water block on it, so I need to buy a water cooling kit, and I might as well loop the whole system.
> 
> All this will still only net me 10-12fps more than my RX 6800 XT. Right now I can hit 24500 points in TS with the Ultimate, which gives me 10fps over my RX 6800 XT in all my games that have benchmarks. So if I keep the RX 6900 XT it will cost me £600, as I believe I should get £1000 for my RX 6800 XT. Now add on top a water loop, which will cost about £650, so I will pay around £1250 for 10 to 12fps; even with a 25k score in TS it will still only be a 10-12fps increase in games over my RX 6800 XT. Is it worth it? Clearly not. But maybe I need some convincing, and adding a water loop might be a fun project. So come on guys, what are your thoughts? Keep it and build a loop, or send it back?


That's a tough one. I'd say if you were happy with the performance of your 6800 XT, then keep that. However you're doing 2800MHz+ in Timespy on air. That's a really good bin. The 6900 XT should be better at ray tracing as well if you're into that.


----------



## majestynl

lawson67 said:


> I am in two minds whether to keep my new Red Devil Ultimate or return it. It's definitely a very well binned card; it can run Timespy at 2850MHz and I believe it will certainly hit over 25k in TS. However, to get that score, or just to get the best out of it, I need a water block on it, so I need to buy a water cooling kit, and I might as well loop the whole system.
> 
> All this will still only net me 10-12fps more than my RX 6800 XT. Right now I can hit 24500 points in TS with the Ultimate, which gives me 10fps over my RX 6800 XT in all my games that have benchmarks. So if I keep the RX 6900 XT it will cost me £600, as I believe I should get £1000 for my RX 6800 XT. Now add on top a water loop, which will cost about £650, so I will pay around £1250 for 10 to 12fps; even with a 25k score in TS it will still only be a 10-12fps increase in games over my RX 6800 XT. Is it worth it? Clearly not. But maybe I need some convincing, and adding a water loop might be a fun project. So come on guys, what are your thoughts? Keep it and build a loop, or send it back?


Who said a 6900 is best bang for buck when it comes to gaming ? 

80% here is for benching....


----------



## majestynl

Btw, bought this one and ordered a EK block that will ship mid August....


----------



## LtMatt

Has anyone else noticed that modifying the FCLK/FCLKBoostFreq via MPT seems to increase Hotspot temperatures a bit?

Decided to try it out in Days Gone at 4K max settings and my usual maximum peak of 20c Delta rose to 25c.


----------



## lawson67

jonRock1992 said:


> That's a tough one. I'd say if you were happy with the performance of your 6800 XT, then keep that. However you're doing 2800MHz+ in Timespy on air. That's a really good bin. The 6900 XT should be better at ray tracing as well if you're into that.


Ok, so I have bought a full water cooling loop for my rig from Alphacool, which includes the Alphacool Eisblock for the Powercolor Red Devil, a CPU block, two 360mm rads, and an Alphacool reservoir and pump. Hopefully it will all be here next week, and LM will go on both CPU and GPU.


----------



## lawson67

Has anyone on here got the Alphacool Eisblock for the Powercolor Red Devil? If so, what do you think of it?


----------



## Bart

lawson67 said:


> Has anyone on here got the Alphacool Eisblock for the Powercolor Red Devil? If so, what do you think of it?


I have the Eisblock for my Asus TUF 6900XT, pasted with Kingpin KPX, loving it, but mine can't hit the clocks that yours can, so it hasn't been pushed nearly as hard. I cap out at 2750max core with 375W in MPT.


----------



## xR00Tx

Hi there!

This was the best I could get out of my water blocked 6900 XT Sapphire Nitro+: 25213 graphics points (Time Spy)

I scored 23 977 in Time Spy

My CPU score fluctuates a lot... please, any tips to get the CPU score more constant?!


----------



## majestynl

xR00Tx said:


> Hi there!
> 
> This was the best I could get out of my water blocked 6900 XT Sapphire Nitro+: 25213 graphics points (Time Spy)
> 
> I scored 23 977 in Time Spy
> 
> My CPU score fluctuates a lot... please, any tips to get the CPU score more constant?!


Nice GS score! You're probably pushing some AC air into your rads... 39c is really low for Brazil 😆


----------



## xR00Tx

In fact, no AC and only windows open! 

It's winter now and, luckily, this winter is one of the coldest ever in Brazil!!!


----------



## tolis626

Hello everyone! I recently sold my 5700XT Nitro+ and was ready to bite the bullet and get a 6800XT. But where I was shopping from, I could find the 6900XT for just 150€ more, so I ended up going for the 6900XT Nitro+ SE, which should (hopefully) be here by next week. I just wanted to come here and say hi and to ask a few rather random questions.

First and foremost: is the Nitro+ actually any good? Can the cooler handle an overclocked 6900XT? I am a bit concerned, as it isn't as big or as heavy as other cards, which makes me wonder what that entails.

Secondly, I read somewhere that the SE version of the Nitro+ uses the Toxic PCB and the Navi 21 XTXH die. Does anyone know if that's true? And if so, what kind of overclocking could I expect (ballpark; I know no one can predict the silicon lottery)? I'd also appreciate some pointers about what values to set in MPT.

And last, since you guys have a lot of experience with these chips, do you know if my PSU can handle it? It's an EVGA G3 850W, so it's a quality unit, but I've seen Igor's Lab's report about these new high-end cards, which can momentarily spike more than 100W over their limits, triggering OCP and such.

Thanks in advance and sorry if this all has been asked before, but I'm on holiday and, well, I honestly can't be bothered to read 142 pages right now. I will do my homework once I get back though, I promise.


----------



## LtMatt

xR00Tx said:


> Hi there!
> 
> This was the best I could get out of my water blocked 6900 XT Sapphire Nitro+: 25213 graphics points (Time Spy)
> 
> I scored 23 977 in Time Spy
> 
> My CPU score fluctuates a lot... please, any tips to get the CPU score more constant?!


Great score, cpu also! Can you share your Bios settings for cpu and memory please?


----------



## gtz

LtMatt said:


> Great score, cpu also! Can you share your Bios settings for cpu and memory please?


If you have a high core count CPU, it confuses Timespy. If you have a 16-core or higher, you need to disable hyperthreading. On my old 9980XE I would get a CPU score of around 19000 without hyperthreading and 16000 with hyperthreading. Time Spy Extreme is a little better, but I always score higher with 4 threads disabled. Just a weird bug.

xrootx also disabled hyperthreading on that run (16 threads versus 32).

Also, I might join this club soon. There is a person locally who wants to trade a 6900XT for an RTX card. He originally wanted a 3080-level card, but I texted and asked how much money he would want plus a 3070. He just got back to me and said 400. I am really considering it. The only thing holding me back is that it is a reference card.


----------



## LtMatt

LtMatt said:


> Has anyone else noticed that modifying the FCLK/FCLKBoostFreq via MPT seems to increase Hotspot temperatures a bit?
> 
> Decided to try it out in Days Gone at 4K max settings and my usual maximum peak of 20c Delta rose to 25c.


I guess no one else saw this.

I did knock the radiator off my desk with the GPU plugged into the PCI-E slot a week or so back and it pulled on the block; after that my deltas and hotspot temps got 5-10c worse than before.

So I took my Toxic apart and tried re-pasting it. Didn't see any improvement. Decided to re-paste it again, this time trying to add pressure to different parts of the block so the mounting screws came through further, hopefully allowing for a better mount, as I noticed the top left screw hole came further through the PCB than the other three.

This actually made it worse. 

Third time's a charm. Took it apart again, re-did the paste, applied some to the pump copper plate too. This time I only applied pressure to the PCI-E bracket area as I was securing the mounting mechanism, as I noticed this made all the screw holes appear even as they came through the PCB screw holes.

This made a significant difference, thank god. Delta of 25-28c, down to 14-17c. Big difference at 350W-400W.

Saw Jon's post earlier about his improvement after repasting with LM, so this made me revisit mine as well, albeit using regular Kryonaut paste.



gtz said:


> If you have a high core count CPU (high threaded) it confuses timespy. If you have a 16 core or higher you need to disable hyperthreading. On my old 9980XE I would get around a CPU score of 19000 without hyperthreading and 16000 with hyperthreading. Time spy extreme is a little better but I always score higher with 4 threads disabled. Just a weird bug.
> 
> xrootx also disabled hyperthreading on that run (16 threads versus 32).
> 
> Also I might join this club soon. There was a person locally that wants to trade a 6900XT for a RTX card. Originally wanted a 3080 level card but I texted and said how much money would he want plus a 3070. Just got back to me and said 400. I am really considering it. Only thing holding me back is that it is a reference card.


Thanks for sharing this, did not realise disabling HT would help here. I'll give it a try next time I'm going for a bench run.


----------



## lawson67

Bart said:


> I have the Eisblock for my Asus TUF 6900XT, pasted with Kingpin KPX, loving it, but mine can't hit the clocks that yours can, so it hasn't been pushed nearly as hard. I cap out at 2750max core with 375W in MPT.


Thanks for the reply. Yeah, really looking forward to my water block and loop arriving, as I am hoping it will realise the card's full potential. As you say, my card benches TS at 2850MHz, so I guess by the sounds of it it's a very well binned card. I didn't even know what others were getting on their XTXH cards, so, being new to XTXH and trying to find out how well binned my card is: what's the highest frequency anyone on here can do in TS with an XTXH card?


----------



## lawson67

LtMatt said:


> I guess no one else saw this.
> 
> I did knock the radiator off my desk with the GPU plugged into the PCI-E slot a week or so back and it pulled on the block; after that my deltas and hotspot temps got 5-10c worse than before.
> 
> So I took my Toxic apart and tried re-pasting it. Didn't see any improvement. Decided to re-paste it again, this time trying to add pressure to different parts of the block so the mounting screws came through further, hopefully allowing for a better mount, as I noticed the top left screw hole came further through the PCB than the other three.
> 
> This actually made it worse.
> 
> Third time's a charm. Took it apart again, re-did the paste, applied some to the pump copper plate too. This time I only applied pressure to the PCI-E bracket area as I was securing the mounting mechanism, as I noticed this made all the screw holes appear even as they came through the PCB screw holes.
> 
> This made a significant difference, thank god. Delta of 25-28c, down to 14-17c. Big difference at 350W-400W.
> 
> Saw Jon's post earlier about his improvement after repasting with LM, so this made me revisit mine as well, albeit using regular Kryonaut paste.
> 
> 
> Thanks for sharing this, did not realise disabling HT would help here. I'll give it a try next time I'm going for a bench run.


Glad you got it sorted, mate. And don't attempt to put LM on your AIO, as it's bare copper, I believe?


----------



## LtMatt

lawson67 said:


> Glad you got it sorted, mate. And don't attempt to put LM on your AIO, as it's bare copper, I believe?


I thought it was okay on copper but not on aluminium. But yeah, got a nice mount, so not planning on touching it again.


----------



## majestynl

Damn this thing is huge!!
It only fits ghetto style... just for the time being, till the EK block arrives.

It has a nice stock clock (2649MHz). Let's see what it can do on air.


----------



## LtMatt

majestynl said:


> Damn this thing is huge!!
> It only fits ghetto style... just for the time being, till the EK block arrives.
> 
> It has a nice stock clock (2649MHz). Let's see what it can do on air.
> 
> View attachment 2520208


Nothing wrong with a bit of ghetto style IMO. 

That is a nice stock clock. I hope you can break the jinx of cards with a high stock clock not being the best clockers.


----------



## cfranko

I don't get how 3090's run so cool. One pulls around 350 watts and generally sits at 60-65 edge temperature and 75 hotspot, while both my 6900 XT Phantom D and my friend's 6900 XT Nitro+ get around 100-105 hotspot at 350 watts. Is it the cooler being bad, or what? Does anyone know? Also, the delta is much better on the 3090 as well. Kind of upset that I didn't get a 3090; now I have to build a custom loop for my 6900 XT.
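One crude way I tried to compare the coolers is normalising the hotspot rise over ambient by power draw; the 25c ambient is an assumption and this ignores die size and sensor placement, so it's only a rough sketch:

```python
# Effective degrees-C-per-watt at the hotspot sensor for each card.
# Ambient of 25c is assumed; the temps and wattage are the figures
# I quoted above, not lab measurements.
def hotspot_resistance(hotspot_c, power_w, ambient_c=25.0):
    return (hotspot_c - ambient_c) / power_w

r_3090 = hotspot_resistance(75, 350)     # 3090 hotspot ~75c at 350W
r_6900 = hotspot_resistance(102, 350)    # my card ~100-105c at 350W
print(f"3090 ~{r_3090:.3f} C/W vs 6900 XT ~{r_6900:.3f} C/W")
```

Even this simple ratio only tells part of the story, since Navi 21 is a smaller die than GA102 and so concentrates its heat into less area.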


----------



## LtMatt

cfranko said:


> I don't get how 3090's run so cool. One pulls around 350 watts and generally sits at 60-65 edge temperature and 75 hotspot, while both my 6900 XT Phantom D and my friend's 6900 XT Nitro+ get around 100-105 hotspot at 350 watts. Is it the cooler being bad, or what? Does anyone know? Also, the delta is much better on the 3090 as well. Kind of upset that I didn't get a 3090; now I have to build a custom loop for my 6900 XT.


I'm not sure they are. The FE 3090/3080s all have memory temps hitting 105/110c from what I've seen. Their delta between edge and junction might be close though.


----------



## cfranko

LtMatt said:


> I'm not sure they are. The FE 3090/3080s all have memory temps hitting 105/110c from what I've seen. Their delta between edge and junction might be close though.


I'm not talking about memory temps though; those could be solved by replacing pads. The core on the 3090 just heats up much less when you look at the power draw. Sure, you could repaste the 6900 XT, but I actually did repaste mine and it did not change anything; the core is just too hot, even at the stock power limit. I don't know why.


----------



## jonRock1992

cfranko said:


> I'm not talking about memory temps though; those could be solved by replacing pads. The core on the 3090 just heats up much less when you look at the power draw. Sure, you could repaste the 6900 XT, but I actually did repaste mine and it did not change anything; the core is just too hot, even at the stock power limit. I don't know why.


RDNA2 just runs hot. Even with a waterblock my hotspot temp was in the mid to low 80's in a Timespy graphics test 2 loop. The only way I could get it lower was with liquid metal. Now my hotspot temp is in the mid to low 60's in Timespy GT2.


----------



## cfranko

jonRock1992 said:


> RDNA2 just runs hot. Even with a waterblock my hotspot temp was in the mid to low 80's in a Timespy graphics test 2 loop. The only way I could get it lower was with liquid metal. Now my hotspot temp is in the mid to low 60's in Timespy GT2.


I bought a loop with a Corsair XD5, 2 Bykski 360mm radiators, a Bykski 6900 XT block and a Bykski AM4 block. I hope my temps get better; my card literally starts throttling at 300 watts right now, so this custom loop should benefit me well. I spent $500 on the whole loop. It's pretty funny that these cards come with air coolers that are just sufficient for the stock power limit, and even insufficient in rare instances. Spending an extra 500 is kind of frustrating. I am personally not comfortable when my GPU core is sitting at 105 degrees Celsius.


----------



## LtMatt

jonRock1992 said:


> RDNA2 just runs hot. Even with a waterblock my hotspot temp was in the mid to low 80's in a Timespy graphics test 2 loop. The only way I could get it lower was with liquid metal. Now my hotspot temp is in the mid to low 60's in Timespy GT2.


That is some drop!


----------



## jonRock1992

LtMatt said:


> That is some drop!


Yeah it was crazy. Couldn't believe it. I don't think the drop was entirely due to the liquid metal though because I put in a new radiator at the same time (Black Ice Nemesis 360 GTS).


----------



## lawson67

LtMatt said:


> I thought it was okay on copper but not on aluminium. But yeah, got a nice mount, so not planning on touching it again.


The gallium in LM loves to alloy with other metals; with copper, the metal ions will migrate into the copper, gradually creating a copper-gallium alloy that is grey-silverish in colour, so I would not use it on copper unless it was nickel plated, of course.


----------



## lawson67

majestynl said:


> Damn this thing is huge!!
> It only fits ghetto style... just for the time being, till the EK block arrives.
> 
> It has a nice stock clock (2649MHz). Let's see what it can do on air.
> 
> View attachment 2520208


Nice, mine sits out of the box at 2604MHz. I'd be interested to see the max frequency you can bench TS at. I think the Red Devil Ultimate is a great card, with the benefit over the other XTXH cards of having the hotspot limit at 110c, which gives you much more room to work with and even allows a high overclock on air. Hopefully your cooler is mounted as well as mine; I have pushed over 420w through mine without the hotspot going over 100c. However, until my water block gets here I am running it at 340w in MPT plus 15% in Wattman.


----------



## Enzarch

LtMatt said:


> I thought it was okay on copper but not on aluminium. But yeah, got a nice mount, so not planning on touching it again.





lawson67 said:


> The gallium in LM loves to alloy with other metals; with copper, the metal ions will migrate into the copper, gradually creating a copper-gallium alloy that is grey-silverish in colour, so I would not use it on copper unless it was nickel plated, of course.


LM on copper is fine; as Lawson said, the copper will 'absorb' some of it and become 'stained', and will thus require a re-application after some time, but this will not affect its performance.
I have used LM on bare copper blocks for years.


----------



## jonRock1992

Enzarch said:


> LM on copper is fine; as Lawson said, the copper will 'absorb' some of it and become 'stained', and will thus require a re-application after some time, but this will not affect its performance.
> I have used LM on bare copper blocks for years.


I've done the same. It just stains it. I didn't notice a measurable difference in performance after it stains.


----------



## J7SC

...finally getting ready to integrate the 6900XT (3950X) and RTX 3090 (5950X) systems into one TT Core P8 'case' build for work + play - both systems had been running separately for a while. ...taking the opportunity to do a high-pressure (dual D5 / 100%) leak-test of the blocks w/o GPUs as they're getting re-pasted...will of course do another leak-test after final assembly...


----------



## Maracus

Anyone here have the MSI 6900XT Trio X? If so, what are your hotspot temps like, and have you tried repasting?

Ran Timespy with the power limit maxed out before using MPT, and I guess I won't be using MPT.


----------



## LtMatt

Maracus said:


> Anyone here have the MSI 6900XT Trio X? If so, what are your hotspot temps like, and have you tried repasting?
> 
> Ran Timespy with the power limit maxed out before using MPT, and I guess I won't be using MPT.
> View attachment 2520329


That delta between edge and junction looks as bad as what I saw with the 2x Merc XTXH's I tried.

Were you running at 100% fan speed?

I would consider re-pasting that. Perhaps the stock paste has completely dried.


----------



## jimpsar

cfranko said:


> I bought a loop with a Corsair XD5, 2 Bykski 360mm radiators, a Bykski 6900 XT block and a Bykski AM4 block. I hope my temps get better; my card literally starts throttling at 300 watts right now, so this custom loop should benefit me well. I spent $500 on the whole loop. It's pretty funny that these cards come with air coolers that are just sufficient for the stock power limit, and even insufficient in rare instances. Spending an extra 500 is kind of frustrating. I am personally not comfortable when my GPU core is sitting at 105 degrees Celsius.


Really, 105 degrees core???
Mine is around 65-70 max, and it's hot now in summer. Are you sure it's not defective?
I have a Merc 319 Black, air cooled.


----------



## lestatdk

Maracus said:


> Anyone here have the MSI 6900XT Trio X? If so, what are your hotspot temps like, and have you tried repasting?
> 
> Ran Timespy with the power limit maxed out before using MPT, and I guess I won't be using MPT.


Yes, I have the Gaming X Trio as well. These are the temps with MPT:










Too bad there's no waterblock for these cards. It's not as if the stock cooler isn't big and bulky, but either the paste is dried out or it's just a bad design.


----------



## lestatdk

LtMatt said:


> That delta between edge and junction looks as bad as what I saw with the 2x Merc XTXH's I tried.
> 
> Were you running at 100% fan speed?
> 
> I would consider re-pasting that. Perhaps the stock paste has completely dried.


What paste do you suggest? I might just re-paste mine; I'm getting fed up with this card atm.


----------



## cfranko

jimpsar said:


> Mine is around 65-70 max, and it's hot now in summer. Are you sure it's not defective?


I am talking about hotspot.


----------



## LtMatt

lestatdk said:


> Yes, I have the Gaming X Trio as well. These are the temps with MPT:
> 
> View attachment 2520338
> 
> 
> Too bad there's no waterblock for these cards. It's not as if the stock cooler isn't big and bulky, but either the paste is dried out or it's just a bad design.


Holy crap that is absolutely terrible.



lestatdk said:


> What paste do you suggest? I might just re-paste mine; I'm getting fed up with this card atm.


Send it back, get a different model.


----------



## cfranko

LtMatt said:


> Holy crap that is absolutely terrible.


Mine is even worse lol.


----------



## lestatdk

LtMatt said:


> Holy crap that is absolutely terrible.
> 
> 
> Send it back, get a different model.


Unfortunately can't send it back. Might try and sell it though


----------



## cfranko

lestatdk said:


> Unfortunately can't send it back. Might try and sell it though


It's weird that the 6900 XT Gaming X doesn't have a waterblock; it's MSI, after all. Even my ASRock card has a waterblock made by Bykski.


----------



## gamervivek

The MSI cooler has been noted to be poor for hotspot temperatures; KitGuru got 100C on two samples.









MSI RX 6900 XT Gaming X Trio Review - KitGuru (www.kitguru.net)





MSI also has an XTXH-based Gaming Z Trio with likely the same cooler; I haven't been able to find reviews for it. It's the lowest-priced XTXH card around here, but I'd rather have an AIO-cooled XTX card than bother with a 100C hotspot and throttled clocks.


----------



## lestatdk

gamervivek said:


> The MSI cooler was noticed to be not good for hotspot temperatures, kitguru got 100C on two samples.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> MSI RX 6900 XT Gaming X Trio Review - KitGuru (www.kitguru.net)
> 
> 
> 
> 
> 
> MSI also have XTXH based gamingZ trio with likely the same cooler, haven't been able to find reviews for it. It's the lowest priced XTXH card around here, but I'd rather have an AIO cooled XTX card than bothering with a 100C hotspot and have it throttle clocks.


Guess I won't bother trying to re-paste mine. That would be a wasted effort.

I did see one poster on Reddit saying the Alphacool waterblock for the 6800XT gaming x can be modified to fit.



http://imgur.com/a/XSqWCia


----------



## jimpsar

cfranko said:


> I am talking about hotspot.


OK, you mentioned the GPU core. Something is going wrong with your card, unfortunately.


----------



## cfranko

@majestynl You’re the best bro. I actually opened up my card today and repasted it again, but this time I tightened the screws a lot, I mean a lot; it was almost about to break, I guess. My hotspot temp decreased by about 15 degrees. You were right, the mounting pressure was the issue.


----------



## LtMatt

cfranko said:


> @majestynl You’re the best bro, I actually opened up my card today and Repasted it again, but this time I tightened up the screws a lot, I mean a lot, it was almost about to break I guess. My hotspot temp decreased about 15 degrees. You were right the mounting pressure was the issue.


Great job. Which model are you using?


----------



## cfranko

LtMatt said:


> Great job. Which model are you using?


Asrock Phantom Gaming


----------



## ptt1982

A quick and unfortunate update from Red Devil 6900xt (XTX) + Alphacool waterblock saga you all are dying to hear more about:

-Edge/Junction delta back to 35C-40C, TS GT2 gives max 89C junction at 409W peak (which I consider still acceptable)
-I suspect the junction will climb back over 100C over time
-The above testing was at 28C ambient room temp
-The temp gain is only with the GPU, CPU stays the same, suggesting another mounting problem

I'm starting to think the PCB is somehow uneven or has been twisted. There's simply something that keeps the core from getting good contact with whatever heatsink or waterblock I put on it, and it always loosens up over time. Any suggestions on how to make sure this won't happen the next time I try to fix it? I've done everything by the book, triple-checked manuals and suggestions, and bear in mind I have 20 years of PC building experience. This is the first time something doesn't work as it should. First time for everything, huh!
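(For reference, the edge-to-junction delta heuristic used throughout this thread can be sketched in a few lines; the thresholds below are owner folklore from threads like this one, not an official AMD specification.)

```python
def mount_quality(edge_c: float, junction_c: float) -> str:
    """Classify a GPU die mount from the edge/junction temperature delta.

    Thresholds are the rough consensus from owner threads (an assumption),
    not an official AMD specification.
    """
    delta = junction_c - edge_c
    if delta <= 20:
        return "good"
    if delta <= 30:
        return "acceptable"
    return "suspect mount (repaste / check pressure)"

# Numbers in the ballpark of this post: ~89C junction with a 35C delta
print(mount_quality(54, 89))  # prints "suspect mount (repaste / check pressure)"
```

By that yardstick, a 35-40C delta points at a mount problem rather than a bad cooler or block.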

Rant section
Buying this expensive used GPU has been nothing but trouble. Including the custom loop I built around it, the waterblock, the multiple thermal pastes and pads I bought to test different mounts, and countless days of testing and remounting blocks/heatsinks, it is just not worth it. I probably spent 2000€ on the GPU alone to get to this point, where the temps keep creeping up again and I can't keep them under control. I'm simply too tired to start tuning it again, so I'll undervolt it if the temps go high again, and if it starts to automatically shut down, I'll drain the loop and reinstall everything only at that point. It can blow up and burn at this stage, I don't care anymore. My time is more expensive than this crap.

Meanwhile, I'm back to work in two weeks after my exceptionally long (and fantastic) holiday. It is time to make more money so that buying a top-of-the-line gaming PC in 2022 Q4 doesn't make a dent in the wallet. At the same time, I want to forget this expensive drama; it was my first splurge mistake in my life. The next GPU I buy will be an AIO unit with an extended warranty. I won't go back to air cooling, but I also cannot do these endless GPU remounts while I'm dealing with work and family. This whole thing should "just work" when I put that amount of money down and do everything by the book. If I come home on a Friday night after a 60-hour week full of stress, possibly with a case I need to work on over the weekend, I want to spend what little time I have playing a game on a frikking GPU that I paid full price for, instead of thinking "is the mount getting loose again, are the temps out of control, do I have to carve out my whole weekend's schedule for redoing the custom loop again?"

Lesson learned, moving forward. Next time I build a PC, it will be with parts that have a full warranty, nothing used, only reputable manufacturers/stores with great customer service and generous warranties. I'll take that 2000€ loss and leave it as it is, a mistake. Meanwhile, I'll deal with this mistake somehow when I have the time or energy in the next 18 months.

Sorry for the rant. This whole thing has been a great source of stress, when it's supposed to do the very opposite, relieve it!


----------



## jonRock1992

ptt1982 said:


> A quick and unfortunate update from Red Devil 6900xt (XTX) + Alphacool waterblock saga you all are dying to hear more about:
> 
> -Edge/Junction delta back to 35C-40C, TS GT2 gives max 89C junction at 409W peak (which I consider still acceptable)
> -I suspect the junction will climb back over 100C overtime
> -The above testing was at 28C ambient room temp
> -The temp gain is only with the GPU, CPU stays the same, suggesting another mounting problem
> 
> I'm starting to think the PCB is somehow uneven or has been twisted. There's simply something that keeps the core getting a good contact with whatever heatsink or waterblock I put on it, and it always keeps loosening up over time. Any suggestions on how to make sure this won't happen the next time I try and fix it? I've done everything by the book, triple checked manuals, suggestions and bear in mind I have 20 years of PC building experience. This is the first time something doesn't work as it should. First time for everything, huh!
> 
> Rant section
> Buying this expensive used gpu has been nothing but trouble. Including the custom loop I built around it, the waterblock, multiple thermal pastes and pads I bought to test different mounts, countless of days of testing and remounting blocks/heatsinks, it is just not worth it. I probably spent 2000€ only on the GPU to get to this point where the temps keep creeping up again and I can't keep them under control. I'm simply too tired to start tuning it again, so I'll undervolt it if the temps go high again, and if it starts to automatically shut down, I'll drain the loop and reinstall everything only at that point. It can blow up and burn at this stage, I don't care anymore. My time is more expensive than this crap.
> 
> Meanwhile, I'm back to work after my exceptionally long (and fantastic) holiday in two weeks. It is time to make more money so that buying a top of the line gaming PC in 2022 Q4 doesn't make a dent to the wallet. At the same time, I want to forget this expensive drama. It was my first splurge mistake in my life. The next GPU I buy will be an AIO unit with an extended warranty. I won't go back to air cooling, but I also cannot do these endless remounts of GPUs while I'm dealing with work and family. This whole thing should "just work" when I put that amount of money down and do everything by the book. If I come home after a 60-hour week full of stress on a Friday night, and possibly have a case I need to work on over the weekend, I want to play a game when I have that little time on a frikking GPU that I paid full price to do so instead of thinking "is mount getting loose again, are the temps out of control, do I have to carve out my whole weekend's schedule for redoing the custom loop again?"
> 
> Lesson learned, moving forward. Next time I build a PC, it will be with parts that have full warranty, nothing will be used, only reputable manufacturers/stores that have a great customer service and loose warranties. I take that 2000€ loss and just leave it as it is, a mistake. Meanwhile, I'll deal with this mistake somehow when I have time or energy in the next 18 months.
> 
> Sorry for the rant. This whole thing has been a great source of stress, when it's supposed to do the very opposite, relieve it!


I'm so sorry to hear about the struggles you've had with that GPU. I was really upset too when I first got the Red Devil Ultimate; the extremely high temps were just unacceptable for a $2300 GPU. I finally got mine under control with a waterblock and liquid metal. I really wish I could help you figure out what would be causing your temps to creep up over time.

Maybe it has something to do with how it's mounted in your case? Sometimes the slots in a PC case don't line up well with the PCIe slots on the motherboard. This could cause the PCB of the GPU to warp over time and possibly cause mounting issues. It's a stretch, but it's all I can think of atm.


----------



## majestynl

LtMatt said:


> Nothing wrong with a bit of ghetto style IMO.
> 
> That is a nice stock clock. I hope you can break the jinx of cards with a high stock clock not being the best clockers.


lol indeed! With all that part swapping etc. I'm finding creative ways to minimize the draining of my loop.

Can't do too much right now on air; I'm temp limited (hotspot) and can't push more power... Will update when my block arrives!



lawson67 said:


> Nice, mine sits out of the box at 2604MHz. I'd be interested to see the max frequency you can bench TS at. I think the Red Devil Ultimate is a great card, with the benefit over the other XTXH cards of having the hotspot limit at 110C, which gives you much more room to work with and even allows a high overclock on air. It would be nice if your cooler is mounted as well as mine; I have pushed over 420W through mine without the hotspot going over 100C. However, until my water block gets here I am running it at 340W via MPT plus 15% in Wattman.


Yeah, I can't push too much right now; the hotspot is burning! I will wait for the block. I can set 2780 in Wattman and run the TS loops, but that doesn't matter because I need to limit the power, otherwise it shuts down due to the hotspot temps.




cfranko said:


> @majestynl You’re the best bro, I actually opened up my card today and Repasted it again, but this time I tightened up the screws a lot, I mean a lot, it was almost about to break I guess. My hotspot temp decreased about 15 degrees. You were right the mounting pressure was the issue.


I'm glad it helped! It's a common issue with AMD cards.


----------



## cfranko

majestynl said:


> lol indeed! with all that part swapping etc. im finding creative ways to minimize the draining of my loop
> 
> Cant do to much right now on air. Im temp limited (hotspot). Cant push more power... Will update when my block arrives!
> 
> 
> 
> yeah i cant push to much right now. Hotspot is burning! I will wait for the block. I can set 2780 in wattman and run the ts loops but that doesn't matter cause i need to limit the power otherwise it is shutting down due the hotspot temps.
> 
> 
> 
> 
> I'm glad it helped! Its common issue with AMD


I actually was able to get my best score ever on Time Spy today with 400 watts, previously it would thermal shutdown at 400 watts.








I scored 21 111 in Time Spy (AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT, 32768 MB, 64-bit Windows 10) - www.3dmark.com




Is this score good?


----------



## lawson67

cfranko said:


> I actually was able to get my best score ever on Time Spy today with 400 watts, previously it would thermal shutdown at 400 watts.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 111 in Time Spy (AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT, 32768 MB, 64-bit Windows 10) - www.3dmark.com
> 
> 
> 
> 
> Is this score good?


I must have got really lucky with my card; temps are well below 110C, and even at 420W the hotspot hits 95C. Earlier today I banged 460W through it and that made it a bit toasty at 104C, but for an air-cooled card the power I can put through it is incredible. I have backed it down to 400W until my water block gets here. What I want to know is when this card will stop drawing current; I didn't think it would pull 460W, but it did. I had to buy a 1000W PSU today as my RM850X was not cutting it.
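(A quick sanity check for PSU sizing around numbers like these; the spike factor below is an assumption inspired by transient-spike reports such as Igor's Lab's, not a measured figure for any particular card.)

```python
def psu_headroom_ok(psu_watts: float, gpu_watts: float, cpu_watts: float,
                    other_watts: float = 75, spike_factor: float = 1.5,
                    budget: float = 0.9) -> bool:
    """Check sustained draw and a crude GPU transient-spike estimate.

    spike_factor models short GPU power excursions (an assumption; real
    transients vary by card and PSU hold-up time), and budget keeps the
    sustained load under 90% of the PSU rating.
    """
    sustained = gpu_watts + cpu_watts + other_watts
    spike = gpu_watts * spike_factor + cpu_watts + other_watts
    return sustained <= psu_watts * budget and spike <= psu_watts

# A 460 W GPU plus a ~150 W CPU:
print(psu_headroom_ok(850, 460, 150))   # False: spikes can trip an 850 W unit
print(psu_headroom_ok(1000, 460, 150))  # True: a 1000 W unit has room
```

Under those assumptions the RM850X sits right where OCP trips during spikes, which matches the upgrade to a 1000W unit.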


----------



## lawson67

cfranko said:


> I actually was able to get my best score ever on Time Spy today with 400 watts, previously it would thermal shutdown at 400 watts.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 111 in Time Spy (AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT, 32768 MB, 64-bit Windows 10) - www.3dmark.com
> 
> 
> 
> 
> Is this score good?


I think you should be getting more than that; my RX 6800 XT was doing about the same graphics score. Mind you, after I put LM on that card it bloody flew.


----------



## cfranko

lawson67 said:


> I think you should be getting more than that my RX 6800 XT was doing about the same graphic score, mined you after i put LM on that card it bloody flew
> 
> View attachment 2520383


I could probably push a 23,000 graphics score if I tweak it a bit more, but I lost the silicon lottery real bad, so I probably won't be able to go beyond that. 2600 MHz maximum frequency is the best I can do in Time Spy; 2605 MHz crashes but 2600 is stable, which is pretty interesting.


----------



## cfranko

lawson67 said:


> I must of got really lucky with my card temps are well below 110c even at at 420w the hotspot hits 95c, earlier today i banged 460w thought it and that made it a bit toasty at 104c, but for an air cooled card the power i can put though it is incredible, i have backed it down now to 400w until my water block gets here, what i want to know is when will this card stop drawing current?, i didn't think it would pull 460w but it did, i had to buy a 1000w PSU today as my RM850X was not cutting it
> View attachment 2520381


The card pulling 460 watts is incredible. I thought that with the stock voltage of 1175mV it wouldn't pull more than 410 watts, even if you set a power limit higher than 410. The card actually pulling 460 is pretty interesting.


----------



## lawson67

cfranko said:


> the card pulling 460 watts is incredible, I thought that with the stock voltage, 1175mV it won’t pull more than 410 watts even if you set a higher power limit than 410. The card actually pulling 460 is pretty interesting.


I am sure it will pull more, and that was 460W at only 1133mV. This thing can bench TS at 2850MHz at an incredibly low voltage of 1141mV, so there's still lots of core voltage to go. I think this will fly on a water block.
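(As a rough first-order check on claims like this: dynamic power scales with roughly V² x f. The sketch below ignores leakage and workload differences, so treat it as ballpark only; the numbers are loosely taken from this exchange.)

```python
def scaled_power(p_base: float, v_base: float, f_base: float,
                 v_new: float, f_new: float) -> float:
    """Estimate GPU power from the dynamic-power relation P ~ C * V^2 * f.

    Ignores static/leakage power and workload differences, so this is a
    ballpark sanity check, not a measurement.
    """
    return p_base * (v_new / v_base) ** 2 * (f_new / f_base)

# Hypothetical baseline from the thread: 410 W at 1175 mV / 2600 MHz.
# What might ~1133 mV at 2850 MHz draw?
estimate = scaled_power(410, 1.175, 2600, 1.133, 2850)
print(f"~{estimate:.0f} W")  # prints "~418 W"
```

A measured 460W at 1133mV is well above that estimate, which suggests the benchmark load itself, not just voltage and clock, drives the draw up.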


----------



## weleh

tolis626 said:


> Hello everyone! I recently sold my 5700XT Nitro+ and was ready to bite the bullet and get a 6800XT. But where I was shopping from, I could find the 6900XT for just 150€ more, so I ended up going for the 6900XT Nitro+ SE, which should (hopefully) be here by next week. I just wanted to come here and say hi and to ask a few rather random questions. First and foremost is, is the Nitro+ actually any good? Can the cooler handle an overclocked 6900XT? I am a bit concerned as it isn't as big or as heavy as other cards, which makes me wonder what that entails. Secondly, I read somewhere that the SE version of the Nitro+ uses the Toxic PCB and the Navi 21 XTXH die. Does anyone know if it's true? And if so, what kind of overclocking could I expect (ballpark, can't predict the results of the silicon lottery). I'd also appreciate some pointers about what values to set in MPT. And last, since you guys have a lot of experience with these chips, do you know if my PSU can handle it? It's an EVGA G3 850W, so it's a quality unit, but I've seen Igor's Lab's report about these new high end cards, which can momentarily have spikes of more than 100W over the limits, triggering OCP and stuff.
> 
> Thanks in advance and sorry if this all has been asked before, but I'm on holiday and, well, I honestly can't be bothered to read 142 pages right now. I will do my homework once I get back though, I promise.


Not sure anyone has replied to you but here goes my take.

I've had a 6800XT Nitro+ SE and it could handle 400W benching at 100% fan speed at 85C hotspot (I've posted a screenshot here somewhere).
The card is super light but the cooler is super good, as usual by Sapphire.

About the die stepping, I'm not so sure it's an XTXH; I doubt it, judging by the advertised clocks. The Toxic PCB is nice, but there's nothing special about it. In fact, the best PCBs are the ASRock OC Formula and the Gaming Z Trio, according to Buildzoid. The rest are just slightly better variations of the reference design, nothing special.

Your 850W PSU is fine, for daily and even overclocks since you're going to be limited by voltage anyway. I had my own 6900XT running on a 750W Unit and it was fine. It was only after I volt modded the card that I had to upgrade to a 1000W PSU because at 500W+ it was triggering OCP on my PSU.


----------



## cfranko

lawson67 said:


> I am sure it will pull more and that was pulling 460w at only 1133mv this thing can bench TS at 2850mhz at incredibly low voltage of 1141mv so still lots of core voltage to go, i think this will fly on a water block
> View attachment 2520385


I guess the newer cards are better. I bought my 6900 XT very near the launch date and it's a terrible bin.


----------



## lawson67

weleh said:


> Not sure anyone has replied to you but here goes my take.
> 
> I've had a 6800XT Nitro+ SE and it could handle 400W benching at 100% fan speed at 85C hotspot (I've posted a screenshot here somewhere).
> The card is super light but the cooler is super good, as usual by Sapphire.
> 
> About the die stepping, I'm not so sure it's a XTXH. I doubt it is judging by advertised clocks. The Toxic PCB is nice but there's nothing special about it. In fact, the best PCBs are Asrock OC Formula and Gaming Z Trio accordingly to Buildzoid. The rest is just a slightly better variation of the reference design but nothing special.
> 
> Your 850W PSU is fine, for daily and even overclocks since you're going to be limited by voltage anyway. I had my own 6900XT running on a 750W Unit and it was fine. It was only after I volt modded the card that I had to upgrade to a 1000W PSU because at 500W+ it was triggering OCP on my PSU.


GPU-Z should tell you if you have an XTXH GPU; at least it does on my PowerColor.


----------



## tolis626

weleh said:


> Not sure anyone has replied to you but here goes my take.
> 
> I've had a 6800XT Nitro+ SE and it could handle 400W benching at 100% fan speed at 85C hotspot (I've posted a screenshot here somewhere).
> The card is super light but the cooler is super good, as usual by Sapphire.
> 
> About the die stepping, I'm not so sure it's a XTXH. I doubt it is judging by advertised clocks. The Toxic PCB is nice but there's nothing special about it. In fact, the best PCBs are Asrock OC Formula and Gaming Z Trio accordingly to Buildzoid. The rest is just a slightly better variation of the reference design but nothing special.
> 
> Your 850W PSU is fine, for daily and even overclocks since you're going to be limited by voltage anyway. I had my own 6900XT running on a 750W Unit and it was fine. It was only after I volt modded the card that I had to upgrade to a 1000W PSU because at 500W+ it was triggering OCP on my PSU.


Right as I was beginning to think that my post was buried and forgotten, there you are! Thanks a bunch mate, appreciate your input!

Well, that's all great to hear. I keep thinking that, with the cooler being light, it should at some point lead to heat soak with the power these things draw. But if what you say holds true I'm golden, as I usually crank the fans when overclocking anyway (I use a pretty aggressive fan curve) because I play with headphones and like the extra performance/lower temps. I hope that your 6800XT experience transfers to the 6900XT Nitro+. A few days and I'll know first hand, but I can't wait, it's my first high end GPU in forever.

As for the die stepping yeah, it seems weird to me that it would be an XTXH, but that's what I read, I'm quite sure. Maybe I misunderstood and I can't seem to be able to find that post, but whatevs. I kinda hope that it actually ends up being one, although I know that on air I shouldn't really care. I just hope I don't actually lose the silicon lottery again.

PSU should be fine, I guess, yeah. EVGA PSUs are high quality units, so it should hold up nicely. It does make me a bit sad that I sold my 1300W EVGA G2, but that thing made too much noise and too little sense to keep, so I gave it the boot. Would've come in handy if I decided to go water with this beauty.

Anyway, enough of my rambling. If all goes well, it should be in my hands by the end of this week and I'll see everything for myself. If it doesn't, I'm screwed.


----------



## jonRock1992

I got a little closer to 25K. I don't think my gpu is capable of getting 25K without hardware mods.










AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## LtMatt

lawson67 said:


> I am sure it will pull more and that was pulling 460w at only 1133mv this thing can bench TS at 2850mhz at incredibly low voltage of 1141mv so still lots of core voltage to go, i think this will fly on a water block
> View attachment 2520385


That's incredible, and amazing that your junction temp is not higher considering air and power draw. You surely have a golden sample!


----------



## weleh

tolis626 said:


> Right as I was beginning to think that my post was buried and forgotten, there you are! Thanks a bunch mate, appreciate your input!
> 
> Well, that's all great to hear. I keep thinking that, with the cooler being light, it should at some point lead to heat soak with the power these things draw. But if what you say holds true I'm golden, as I usually crank the fans when overclocking anyway (I use a pretty aggressive fan curve) because I play with headphones and like the extra performance/lower temps. I hope that your 6800XT experience transfers to the 6900XT Nitro+. A few days and I'll know first hand, but I can't wait, it's my first high end GPU in forever.
> 
> As for the die stepping yeah, it seems weird to me that it would be an XTXH, but that's what I read, I'm quite sure. Maybe I misunderstood and I can't seem to be able to find that post, but whatevs. I kinda hope that it actually ends up being one, although I know that on air I shouldn't really care. I just hope I don't actually lose the silicon lottery again.
> 
> PSU should be fine, I guess, yeah. EVGA PSUs are high quality units, so it should hold up nicely. It does make me a bit sad that I sold my 1300W EVGA G2, but that thing made too much noise and too little sense to keep, so I gave it the boot. Would've come in handy if I decided to go water with this beauty.
> 
> Anyway, enough of my rambling. If all goes well, it should be in my hands by the end of this week and I'll see everything for myself. If it doesn't, I'm screwed.


My 6800XT was a god-tier bin: 2750 MHz stable, which was only 50 MHz shy of its 2800 MHz limit.

The 6800XT will always have slightly better temps due to being the same die with fewer active CUs; however, it also holds voltage higher during load than my 6900XT did, so there's that to balance things out.
I would expect the experience to be pretty similar. The card and its fans were really great, zero complaints.

Sapphire makes quality products and I have no issues recommending them or expecting high quality from them. 

The stepping thing is pretty clear to me because they don't advertise any special Trixx Boost or Toxic Boost. My Toxic is the normal stepping, and Trixx Boost does 2660 MHz on the core, which is way higher than any other non-XTXH card ever. It seems strange to me that if the Nitro+ SE were XTXH, it would only advertise a 2365 MHz boost without any special boost via their own software.

Good luck


----------



## Joe007

Dear Members, was hoping you can help or advise

-I recently purchased and installed a Sapphire 6900xt Toxic limited edition AIO card - great performance but way too loud - Latest AMD drivers set to automatic and default, tried profiles, rage and quiet no difference.
-I can start any game fresh and they start ramping up very quickly to 100%.
-The clocks are amazing as it hits 2500+ sometimes and is stable but man just too noisy (made sure case fans are not the problem).

My experience with recent cards is that the fans are pretty quiet nowadays, until of course I got this AIO one?

I am very technical so please advise in depth if necessary - much appreciate your help...


----------



## weleh

Replace the stock fans with better ones. That's what I did.


----------



## LtMatt

Joe007 said:


> Dear Members, was hoping you can help or advise
> 
> -I recently purchased and installed a Sapphire 6900xt Toxic limited edition AIO card - great performance but way too loud - Latest AMD drivers set to automatic and default, tried profiles, rage and quiet no difference.
> -I can start any game fresh and they start ramping up very quickly to 100%.
> -The clocks are amazing as it hits 2500+ sometimes and is stable but man just too noisy (made sure case fans are not the problem).
> 
> My experience with any of the recent cards is that the fans are pretty quiet now a days until of course I got this AIO one?
> 
> I am very technical so please advise in depth if necessary - much appreciate your help...


I've just answered you on OcuK, should sort you out.


----------



## Joe007

LtMatt said:


> I've just answered you on OcuK, should sort you out.


Thank you guys, that's great help. So it looks like a known issue and OcUK will honour the fix?


----------



## LtMatt

Joe007 said:


> Thank you guys that's great help - so it looks like a known issue and OC will honour the fix?


You've lost me, what known issue?


----------



## Joe007

LtMatt said:


> You've lost me, what known issue?



Sorry, I thought you meant that if you bought from Overclockers.co.uk they'd swap the noisy fans.
- I see what you meant about fan speeds and BIOS switch settings in your OcUK post; my bad, I got mixed up.

Thx again...


----------



## tolis626

weleh said:


> My 6800XT was god tier bin, 2750 Mhz stable which was only 50 Mhz shy of it's 2800 limit.
> 
> The 6800XT will always have slightly better temps due to same die but less active CU's however it also holds voltage during load higher than my 6900XT did so there's that to balance things out.
> I would expect the experience to be pretty similar and the card and it's fans were really great, 0 complaints.
> 
> Sapphire makes quality products and I have no issues recommending them or expecting high quality from them.
> 
> The stepping thing for me is pretty clear because they don't advertise any special Trixx boost or Toxic boost. My Toxic is normal stepping and Trixx boost does 2660 Mhz on the core which is way higher than any other non XTXH card ever. It seems strange to me that if the Nitro+SE would be XTXH, it would only advertise 2365 Mhz boost without any special boost via their own software.
> 
> Good luck


Honestly, I'd pay top dollar for a card like yours. I know that in gaming there will be little to no discernible difference in performance between 2650 and 2750MHz, but I'm a sucker for big numbers. I am more of a hardware enthusiast than a gamer, I like pushing my stuff. 

Anyway, yeah, what you're saying makes total sense. At this point I can't remember exactly what I read. It could be that I saw the Toxic PCB and immediately thought that it's an XTXH, I dunno. Or maybe they sort their XTXH dies and the better ones go to the Toxic cards? No idea. Card should be in my hands in 2-3 days, so we'll know for sure then. I really do hope that mine overclocks well, regardless of stepping. That 2750MHz sounds so good. Nice, round number and all. I couldn't live with like 2684, my OCD would go on overdrive.


----------



## ZealotKi11er

tolis626 said:


> Honestly, I'd pay top dollar for a card like yours. I know that in gaming there will be little to no discernible difference in performance between 2650 and 2750MHz, but I'm a sucker for big numbers. I am more of a hardware enthusiast than a gamer, I like pushing my stuff.
> 
> Anyway, yeah, what you're saying makes total sense. At this point I can't remember exactly what I read. It could be that I saw the Toxic PCB and immediately thought that it's an XTXH, I dunno. Or maybe they sort their XTXH dies and the better ones go to the Toxic cards? No idea. Card should be in my hands in 2-3 days, so we'll know for sure then. I really do hope that mine overclocks well, regardless of stepping. That 2750MHz sounds so good. Nice, round number and all. I couldn't live with like 2684, my OCD would go on overdrive.


You can always reduce to 2675.


----------



## Henry Owens

lawson67 said:


> I must of got really lucky with my card temps are well below 110c even at at 420w the hotspot hits 95c, earlier today i banged 460w thought it and that made it a bit toasty at 104c, but for an air cooled card the power i can put though it is incredible, i have backed it down now to 400w until my water block gets here, what i want to know is when will this card stop drawing current?, i didn't think it would pull 460w but it did, i had to buy a 1000w PSU today as my RM850X was not cutting it
> 
> View attachment 2520381


Should I get the rmx 1000 or evga 1300w g+ or g2??


----------



## ZealotKi11er

The G+ is an upgrade over the G2, so it's probably the best to get. It also has a 10-year warranty.


----------



## weleh

RMX1000 is much cheaper than a 1200W G+ or G2 though, isn't it?

I got a new HX1000 for 200€ to replace my Seasonic unit. Very happy with it: it's just as quiet thanks to hybrid mode, and it also reduced the coil whine on my card by 90%, NO JOKE!!!


----------



## LtMatt

weleh said:


> RMX1000 is much cheaper than a 1200W G+ or G2 though isn't it?
> 
> I got a new HX1000 for 200€ to replace my Seasonic Unit, very happy with it since it's just as quiet due to hybrid mode and also reduced my coil whine on my card by 90% NO JOKE!!!


Using the same PSU here, it's very good but the cables are a bit janky.


----------



## lawson67

Henry Owens said:


> Should I get the rmx 1000 or evga 1300w g+ or g2??


Like everyone else is saying, get the Corsair RM1000X. I also bought the red Corsair Pro PSU custom cables, which I really like.


----------



## Henry Owens

ZealotKi11er said:


> G+ is upgrade to G2 so probably best to get. Also has 10 year warranty.


It is the upgrade, but the Linus Tech Tips forum ranks the G+ as B tier and the G2 as A tier. That's the only thing throwing me off.


----------



## ZealotKi11er

Henry Owens said:


> It is the upgrade but Linus tech tips forum ranks the g+ as B tier and the G2 as A tier. Only thing throwing me off.


Need to have @shilka comment on it.


----------



## CS9K

If quality is what you're worried about, @Henry Owens , a 1000W Platinum or Titanium from Seasonic, Corsair, or EVGA will get you where you want to be. You'll pay out the nose for it, but those power supplies, especially the titanium units, are overbuilt to hell... and I'm okay with that!


----------



## Henry Owens

CS9K said:


> If quality is what you're worried about, @Henry Owens , a 1000W Platinum or Titanium from Seasonic, Corsair, or EVGA will get you where you want to be. You'll pay out the nose for it, but those power supplies, especially the titanium units, are overbuilt to hell... and I'm okay with that!


It's a huge price jump once you're into Platinum territory. I'm leaning toward the 1300 G+ right now, if @shilka says it's OK.


----------



## J7SC

CS9K said:


> If quality is what you're worried about, @Henry Owens , a 1000W Platinum or Titanium from Seasonic, Corsair, or EVGA will get you where you want to be. You'll pay out the nose for it, but those power supplies, especially the titanium units, are overbuilt to hell... and I'm okay with that!


I've had three Antec HPC 1300W Platinum units in use for years (those PSUs have Delta innards), and yeah, for a high-powered PSU in applications where _big power spikes are known to happen_, you might as well go Platinum or Titanium.

Elsewhere... having finished leak-testing the Bykski block yesterday and getting ready to mount it, I looked up some older air-cooled results below as a baseline: a totally stock 'max load' air run on the left, an MPT PL+ air run on the right.

MPT PL+ does push up the hotspot relatively more, and I suspect that's also why the effective clock starts to diverge a bit more from the nominal clock; water cooling should help with that. The card is a 3-pin Gigabyte OC (non-H).


----------



## CS9K

Henry Owens said:


> It's a huge jump once you are into platinum territory. I'm leaning toward the 1300g+ right now if @shilka says it's ok.


This is my opinion, so not everyone may agree with me, but:
_Generally speaking_, the higher quality and/or more-overbuilt the power supply is, the less "over the top" you need to buy in terms of the big number that's painted on the side of the unit.

My Seasonic PRIME Titanium TX-750, for example... There's no way in hell the Seasonic FOCUS GX-750, Corsair RM750, or EVGA G3 750W would be happy with my GPU's power limit set to 400W like it is. But the Titanium unit is so absurdly overbuilt, it can't even be arsed to run the fan until I'm pulling more than 350W or so through it. Likewise, at a 600W Folding@Home load with three GPUs in my PC in early-to-mid 2020, vdroop on the 12V rail was basically nil.

Where you may be looking at a 1200-1300W gold-rated PSU, a 1000W Platinum or Titanium unit would do just as well for the same setup, in my opinion.
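The sizing logic above can be sketched numerically. A minimal illustration, with all wattages assumed for the sake of the example (they are not measurements or vendor specs):

```python
# Rough sketch of the PSU sizing trade-off discussed above. All numbers are
# assumptions for this example, not vendor specifications.

def load_fraction(load_w: float, rating_w: float) -> float:
    """Fraction of the PSU's rated capacity used at a given load."""
    return load_w / rating_w

# Hypothetical system: ~250W for CPU/platform plus a 400W-limited GPU,
# with brief GPU transients assumed to spike ~1.5x above the limit.
sustained = 250 + 400
transient = 250 + int(400 * 1.5)

for name, rating in [("1300W Gold", 1300), ("1000W Platinum", 1000)]:
    print(f"{name}: {load_fraction(sustained, rating):.0%} sustained, "
          f"{load_fraction(transient, rating):.0%} at a transient spike")
```

The point being made above is that a well-built 1000W unit still has sane headroom even at the transient peak, so the nameplate gap between the two options matters less than build quality.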


----------



## J7SC

CS9K said:


> This is my opinion, so not everyone may agree with me, but:
> _Generally speaking_, the higher quality and/or more-overbuilt the power supply is, the less "over the top" you need to buy in terms of the big number that's painted on the side of the unit.
> 
> My Seasonic PRIME Titanium TX-750, for example... There's no way in hell the Seasonic FOCUS GX-750, Corsair RM750, or EVGA G3 750W would be happy with my GPU's power limit set to 400W like it is. But, the Titanium unit is so absurdly overbuilt, it can't even be arsed to run the fan until I'm pulling more than 350W or so through it. Likewise, at 600W folding @ Home load with three GPU's in my PC in early-mid 2020, vdroop on the 12V rail was basically nil.
> 
> Where you may be looking at a 1200-1300W gold-rated PSU, a 1000W Platinum or Titanium unit would do just as well for the same setup, in my opinion.


...or become a risk taker _extraordinaire_ and try something new  ...no making fun of the vendor's name, btw


----------



## CS9K

J7SC said:


> ....or become a risk taker _extra ordinaire _and try s.th. new  ...no making fun of the vendor's name, btw
> 
> View attachment 2520474


I would _love_ to see how little power draw it takes to bring this thing to its knees


----------



## xR00Tx

LtMatt said:


> Great score, cpu also! Can you share your Bios settings for cpu and memory please?


Hey @LtMatt , sorry for the late reply.

For Time Spy benchmark, the 5950X works best with SMT turned off. 

Here are my memory timings:


----------



## tolis626

ZealotKi11er said:


> You can always reduce to 2675.


I can't. I. MUST. GO. HIGHER! I think I need help. 



xR00Tx said:


> Hey @LtMatt , sorry for the late reply.
> 
> For Time Spy benchmark, the 5950X works best with SMT turned off.
> 
> Here are my memory timings:
> 
> View attachment 2520477


That memory kit looks like G.Skill TridentZ with B-dies, am I right? What kind of voltage are you pushing through it? I would suppose 1.45-1.5V.


----------



## xR00Tx

tolis626 said:


> That memory kit looks like G.Skill TridentZ with B-dies, am I right? What kind of voltage are you pushing through it? I would suppose 1.45-1.5V.


That's correct, they are Samsung B-die (G.Skill Trident Z F4-3200C14-16GTZR).
DRAM voltage = 1.525v


----------



## tolis626

xR00Tx said:


> That's correct, they are Samsung B-die (G.Skill Trident Z F4-3200C14-16GTZR).
> DRAM voltage = 1.525v


Nice, thanks. And how are they holding up at that voltage? I have the 2x8GB TridentZ Neo B-die at 3600MHz 16-16-16-42. I did overclock them hastily to 3800MHz with timings I got from Ryzen DRAM calculator (couldn't be arsed to do it manually) but I got errors and backed down to DOCP. I need to push mine higher, I'm probably leaving quite a bit of performance on the table with these. Although my IMC doesn't seem like the greatest, 2000MHz FCLK doesn't seem to want to work. I get a boot but everything goes down the gutter after that.
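For context on the FCLK remark above: Ryzen 5000 generally performs best with FCLK, UCLK, and MCLK coupled 1:1:1, and because DDR4 is double data rate, the memory clock is half the transfer rate. A minimal sketch of that relationship:

```python
# Zen 3 memory tuning rule of thumb: run FCLK : UCLK : MCLK at 1:1:1.
# DDR4 transfers twice per clock, so the memory clock is half the rated speed.

def fclk_for_1to1(ddr_rate: int) -> int:
    """FCLK in MHz needed for 1:1:1 operation at a given DDR4 transfer rate."""
    return ddr_rate // 2

print(fclk_for_1to1(3600))  # 1800 MHz
print(fclk_for_1to1(3800))  # 1900 MHz
print(fclk_for_1to1(4000))  # 2000 MHz, the FCLK the poster's IMC won't stabilize
```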


----------



## ptt1982

jonRock1992 said:


> I'm so sorry to hear about the struggles you've had with that GPU. I was really upset too when I first got the the red devil ultimate. The extremely high temps were just unacceptable for a $2300 GPU. I finally got mine under control with a waterblock and liquid metal. I really wish I could help you figure out what would be causing your temps to creep up over time.
> 
> Maybe it has something to do with how it's mounted in your case? Sometimes the slots in a PC case don't line up well with the pci-e slots on the motherboard. This could cause the pcb of the GPU to warp over time and possibly cause mounting issues. It's a stretch, but it's all I can think of atm.


Could be! I'm thinking this might be the case, because in the beginning the temps were brilliant (edge/junction delta was 25C at worst). I have a cheap GPU support bracket that holds the waterblock from one side (near where the power cables are), so this could be the culprit. The holder might warp the waterblock in a way that relieves the block's mounting pressure on the other side. Maybe I should try without it and just let the PCIe slot take the pressure.

Changed my MPT settings to 330W + 15% PL, and the OC to 2650 core / 2110 mem (max rock-solid TS GT2 loop clocks), then stress-tested, seeing the junction go up to 95C while the edge was at around 57C. The GPU peaked at 400W (it seems to go about 20W above the limit), so the highest delta peak I could produce was 38C. That's not unheard of at 400W with budget water cooling, but still 10C too much for my taste. In the normal 4K60 gaming I do with Vsync on, I see at most maybe 87C junction, which is still well within spec (it can go up to 110C, and this card runs hotter than XTXH cards by default, so comparing it with XTXH cards is not fair).
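The power figures in the post above work out as follows (values taken from the post; the ~20W overshoot is an observation about this particular card, not a spec):

```python
# Arithmetic from the post above: MPT base limit, Wattman +15% slider,
# the observed ~20W overshoot, and the edge/junction temperature delta.

def effective_limit(mpt_watts: float, slider_pct: float) -> float:
    """Power target after the Wattman percentage slider is applied."""
    return mpt_watts * (1 + slider_pct / 100)

target = effective_limit(330, 15)   # ~380W target
observed_peak = 400                 # the card overshoots the target by ~20W
hotspot_delta = 95 - 57             # junction minus edge, in C

print(f"target ~{target:.0f}W, peak {observed_peak}W, delta {hotspot_delta}C")
```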


----------



## lawson67

ptt1982 said:


> Could be! I'm thinking this might be the case, because in the beginning the temps were brilliant (edge /junction delta was 25C at worst). I have a cheap GPU support bracket that holds the waterblock from one side (near where the power cables are), so this could be the culprit. The GPU holder might warp the waterblock in a way that lifts the pressure of the block to the other side. Maybe I should try without it and just put the pressure on the PCIe slot.
> 
> Changed my settings on MPT to 330W + 15% PL, and OC to 2650core /2110mem (max rock solid TS GT2 loop clocks), and did stresstesting seeing junction up to 95C whereas the edge was at around 57C. The GPU peaked at 400W (it goes above the limit 20W it seems), so the highest delta peak I could produce was 38C. It's not unheard of at 400W with budget watercooling, but still 10C too much for my taste. In normal 4K60 gaming that I do with Vsync on, I see highest up to maybe 87C in junction, which is still well within the specs (this can go up to 110C and is hotter than XTXH cards by default, so comparing this with XTXH cards is not fair.)


I find it hard to believe the PCB could warp on the PowerColor Red Devil, as the whole aluminium backplate on that card is there precisely to stop any warping of the PCB, with at least 9 screws bolted through the PCB into the backplate. I believe you were having hotspot problems before you put it on the water block, so it can't be the water block that warped the PCB either. The only thing I can think of is that, if you are using Kryonaut thermal paste, you are getting pump-out above 80C, as Kryonaut is only rated to 80C, and that would explain exactly why your temps rise over time. I would use LM, or if you are too worried to use LM, Gelid GC Extreme.


----------



## ptt1982

Question: Has anyone tried external mount tighteners for GPUs? 

I'm talking about hardcore, pressure tightening tools. I think I saw some of these when Jayztwocents was doing his RTX3090 Port Royal Kingpin LN2 OC runs. I wonder if these would damage the GPU block/PCB in long-term use. I don't care about aesthetics at all, I'm 100% performance only (my PC is in another room in a dark well ventilated case), so the tricky part would be to fit such a tool in the case in a way it doesn't add too much stress on the MB's PCIe slot.


----------



## CS9K

ptt1982 said:


> Question: Has anyone tried external mount tighteners for GPUs?
> 
> I'm talking about hardcore, pressure tightening tools. I think I saw some of these when Jayztwocents was doing his RTX3090 Port Royal Kingpin LN2 OC runs. I wonder if these would damage the GPU block/PCB in long-term use. I don't care about aesthetics at all, I'm 100% performance only (my PC is in another room in a dark well ventilated case), so the tricky part would be to fit such a tool in the case in a way it doesn't add too much stress on the MB's PCIe slot.


I'm not aware of anyone trying this on a daily-driver GPU, but to be honest, one shouldn't _need_ to even consider using one for a water block application.

I'm still convinced that either the thermal pads are too thick, or something is physically blocking the die from making complete contact with your GPU block, be it a design defect, something with the thermal pads, something else getting in the way.


----------



## Maracus

lestatdk said:


> Yes, I have the Gaming X Trio as well. These are the temps with MPT:
> 
> View attachment 2520338
> 
> 
> Too bad there's no waterblock for these cards. It's not as if the stock cooler isn't big and bulky, but either the paste is dried out or it's just a bad design.


Yeah, that's rough. Although after some research I think it's a bad design, I will still try to repaste it and see if I can shave at least 10C off the hotspot. Just wish there was a water block for these cards.


----------



## weleh

Holy **** 50C delta ***...


----------



## tolis626

Aha! I found the article I had read! Sapphire Radeon RX 6900 XT NITRO+ Special Edition features TOXIC Extreme PCB - VideoCardz.com

I don't know why I took an article from Videocardz at face value. I don't doubt that the Nitro+ SE features the Toxic PCB, but I have yet to see evidence of it using the XTXH die. Would be sweet if it does, but I'm not holding my breath.


----------



## LtMatt

21.8.1 is out, who is going to be the guinea pig? I nominate @weleh, @jonRock1992 and @J7SC as testers. 

AMD Radeon™ RX 6900 XT Drivers & Support | AMD


----------



## jonRock1992

LtMatt said:


> 21.8.1 is out, who is going to be the guinea pig? I nominate @weleh, @jonRock1992 and @J7SC as testers.
> 
> AMD Radeon™ RX 6900 XT Drivers & Support | AMD


Lol I'll definitely try it out as soon as I can. I'm at work until 7pm tonight, and then I'm having some people over. So I probably won't get to it until around midnight lol. But fingers crossed we get another performance uplift in Timespy.


----------



## cfranko

LtMatt said:


> 21.8.1 is out, who is going to be the guinea pig? I nominate @weleh, @jonRock1992 and @J7SC as testers.
> 
> AMD Radeon™ RX 6900 XT Drivers & Support | AMD


I am trying 21.8.1 now, will let you know what happens.


----------



## cfranko

My card decided to thermal-shutdown at a 400W PL even though it was fine yesterday on 21.7.2, so I won't be able to test 21.8.1.


----------



## weleh

LtMatt said:


> 21.8.1 is out, who is going to be the guinea pig? I nominate @weleh, @jonRock1992 and @J7SC as testers.
> 
> AMD Radeon™ RX 6900 XT Drivers & Support | AMD


----------



## weleh

Thermal shutdown is a good sign, probably more performance.


----------



## jonRock1992

I'm sooo close to 25k Timespy GPU score with 21.7.2. I only need 182 more points to break 25k. I'm really hoping the newest driver will help with that, but I'm not getting my hopes up lol.


----------



## LtMatt

I had a quick test of 21.7.2 vs 21.8.1 and it was margin-of-error stuff; I actually scored around 10-20 fewer graphics points on 21.8.1. Not sure anything has changed, so it will be interesting to see what others find.


----------



## cfranko

jonRock1992 said:


> but I'm not getting my hopes up lol.


The new driver was released to support the 6600 XT; I don't think they did anything to improve performance, since the whole point was adding 6600 XT support.


----------



## jonRock1992

LtMatt said:


> I had quick test of 21.7.2 vs 21.8.1 and it was margin of error stuff, actually scored around 10-20 less graphics score points on 21.8.1. Not sure anything has changed, so will be interesting to see what others find.


Ah well that's a bummer. Thanks for testing 🤘


----------



## D-EJ915

Looks like prices are coming down some more; the air-cooled PowerColor Ultimate is now $2k on Newegg. I wonder how low prices will drop, or if this is about as low as they will go. I picked up one of those 6800 XT Strix LC cards a few weeks ago; wonder if I should sell it and get a 6900 instead.


----------



## J7SC

LtMatt said:


> 21.8.1 is out, who is going to be the guinea pig? I nominate @weleh, @jonRock1992 and @J7SC as testers.
> 
> AMD Radeon™ RX 6900 XT Drivers & Support | AMD


...love to, but going to be late to the party...my 6900XT is apart (finally), as is the 3090, re dual D5 pump pressure / leak testing...I don't think I'll need a flow meter


----------



## xR00Tx

I've just run a few more rounds of Time Spy and Time Spy Extreme and improved my graphics results:

Time Spy - 25,318 points













Time Spy Extreme - 12,134 points













Later I will try to run it again with the new driver 21.8.1 to see the results.


----------



## xR00Tx

I had to lower my clock by 10MHz with driver 21.8.1. 
Don't know if it was a temperature issue or not. Will give it another try when it gets colder.

Meanwhile, I went back to 21.7.2 and got 1 more graphics point in TS...


----------



## 99belle99

lawson67 said:


> GPU-Z should tell you if you have an XTXH GPU at least it does on my Powercolor
> 
> View attachment 2520387


GPU-Z says mine is an XTX when it is a reference model, so what's up with that?


----------



## HeLeX63

99belle99 said:


> Mine in GPU-z says it is a XTX when it is a reference model so what's up with that?


That's exactly right. It's an XTX, which is the standard die. It's not the XTXH.


----------



## LtMatt

xR00Tx said:


> I had to lower my clock by 10mhz with driver 21.8.1.
> Don't know if it was a temperature issue or not. Will give it another try when it gets colder.
> 
> Meanwhile, I went back to 21.7.2 and got 1 more graphic point on TS...
> 
> View attachment 2520626


Great score. Are you using any MPT tweaks other than power limits? Also is this an XTXH or XTX 6900 XT?


----------



## lestatdk

Got my first GPU score above 23k with the new driver. The total GPU score increase over the last 3 driver updates is more than 1k points 

Also, I have ordered a water setup for my card from Alphacool. Let's see how it goes. It was either that or sell it at a loss, so I might as well. It will be my first water build.


----------



## xR00Tx

LtMatt said:


> Great score. Are you using any MPT tweaks other than power limits? Also is this an XTXH or XTX 6900 XT?


MPT Settings:
PL = 450w
Power Feature controls = disabled
Vclk = 1497mhz (max)
Dclk = 1286mhz (max)
Fclk = 2177mhz (min/max/boost)


Wattman settings (for TS):
2770/2870mhz - 1060mv
2150mhz - Fast Timing


It's an "XTX" 6900 XT Sapphire Nitro+ (non-SE). I got lucky with this chip!


----------



## LtMatt

xR00Tx said:


> MPT Settings:
> PL = 450w
> Power Feature controls = disabled
> Vclk = 1497mhz (max)
> Dclk = 1286mhz (max)
> Fclk = 2177mhz (min/max/boost)
> 
> 
> Wattman settings (for TS):
> 2770/2870mhz - 1060mv
> 2150mhz - Fast Timing
> 
> 
> It's an "XTX" 6900 XT Saphire Nitro+ (non-SE). I got lucky with this chip!
> 
> View attachment 2520699


Amazing, has to be one of the best XTX samples ever. Well done.


----------



## tootall123

Just decided to join this thread as it seems to be a good source of info. 

Currently I have an Aorus Xtreme Waterforce and it is a very good example.

This is my best score so far with 2890/2770 and 2150 memory. 









I scored 23 628 in Time Spy
AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10





My CPU score is now lagging behind a few of the more recent posts.


----------



## tolis626

IT'S HERE BOYS!

Ok, first impressions. The card is bigger than I thought. Not because I thought it'd be small, but because in pictures it looked similar to the 5700XT Nitro+. Nope, it's a girthy boy. It's also really quiet, thankfully with no coil whine. And at stock it's easily double the performance of my overclocked 5700XT. Nice.

Now, temps aren't the best, but that can easily be attributed to my dust clogged fan filters and generally high ambient temperatures. I mean, it's summer in Greece, my AC is crappy and can't keep up. But at +15% PL (no MPT yet, so like 330W) the hotspot did go to 100-105C. Yikes.

I have two questions for you guys. Firstly, what version of MPT are you using? I'm using 1.3.5 and I can't find any FCLK etc adjustments. And second, is Afterburner usable with RDNA2? I have been using it for like a decade now and I'm so used to it that everything else feels wrong. Especially Wattman.


----------



## lestatdk

tolis626 said:


> IT'S HERE BOYS!
> 
> Ok, first impressions. The card is bigger than I thought. Not because I thought it'd be small, but because in pictures it looked similar to the 5700XT Nitro+. Nope, it's a girthy boy. It's also really quiet, thankfully with no coil whine. And at stock it's easily double the performance of my overclocked 5700XT. Nice.
> 
> Now, temps aren't the best, but that can easily be attributed to my dust clogged fan filters and generally high ambient temperatures. I mean, it's summer in Greece, my AC is crappy and can't keep up. But at +15% PL (no MPT yet, so like 330W) the hotspot did go to 100-105C. Yikes.
> 
> I have two questions for you guys. Firstly, what version of MPT are you using? I'm using 1.3.5 and I can't find any FCLK etc adjustments. And second, is Afterburner usable with RDNA2? I have been using it for like a decade now and I'm so used to it that everything else feels wrong. Especially Wattman.


I use MPT 1.3.7 beta 6.










You can use MSI AB if you want; it works fine with RDNA2. Good luck with your card. I'm sure it performs better than mine with regard to the hotspot, so I've ordered a waterblock now.


----------



## tolis626

lestatdk said:


> I use MPT 1.3.7 beta 6.
> 
> View attachment 2520717
> 
> 
> You can use MSI AB if you want it works fine withRDNA2 . Good luck with your card, I'm sure it performs better than mine with regards to hotspot, so I've ordered a waterblock now


Aha! I need to find the beta then!

Not home atm, but I will try overclocking later in the day when temperatures drop a bit. I will also try AB and see how it goes.

You're the guy with the MSI card, right? If so, man, that's a bad delta you've got there. Hope you get it sorted out under water. I think it'll be fine!

Thanks for your help mate! Will be back later! Cheers!


----------



## lestatdk

tolis626 said:


> Aha! I need to find the beta then!
> 
> Not home atm, but I will try overclocking later in the day when temperatures drop a bit. I will also try AB and see how it goes.
> 
> You're the guy with the MSI card, right? If so, man, that's a bad delta you got there. Hope you get is sorted out under water. I think it'll be fine!
> 
> Thanks for your help mate! Will be back later! Cheers!


Yup that's me  Probably have a new record in delta 

If you want to run pure AB, I suggest you do the "driver only" install of the Radeon drivers. Then let AB handle the rest.

You can download the MPT beta here:









MorePowerTool (MPT) and Red BIOS Editor (RBE) Beta Program - MPT 1.3.8 Beta 1 (Debug Overrides and Throttler Control) | Page 2 | igor'sLAB








----------



## LtMatt

tolis626 said:


> IT'S HERE BOYS!
> 
> Ok, first impressions. The card is bigger than I thought. Not because I thought it'd be small, but because in pictures it looked similar to the 5700XT Nitro+. Nope, it's a girthy boy. It's also really quiet, thankfully with no coil whine. And at stock it's easily double the performance of my overclocked 5700XT. Nice.
> 
> Now, temps aren't the best, but that can easily be attributed to my dust clogged fan filters and generally high ambient temperatures. I mean, it's summer in Greece, my AC is crappy and can't keep up. But at +15% PL (no MPT yet, so like 330W) the hotspot did go to 100-105C. Yikes.
> 
> I have two questions for you guys. Firstly, what version of MPT are you using? I'm using 1.3.5 and I can't find any FCLK etc adjustments. And second, is Afterburner usable with RDNA2? I have been using it for like a decade now and I'm so used to it that everything else feels wrong. Especially Wattman.


No need for Afterburner, just use GPU Tuning in Radeon Software.


----------



## weleh

AB doesn't play well if you install the full package driver.

Also, with AB you can't set fast memory timings, so performance is lost there too.


----------



## lestatdk

weleh said:


> AB doesn't play well if you install the full package driver.
> 
> Also with AB you can't set fast timings so performance lost there too.


Yeah, that's the one thing missing. Wish they would just make it possible to set it as a registry parameter.


----------



## xR00Tx

I was about to say just that... as far as I know, AB doesn't allow you to set Fast Timings.


----------



## lestatdk

LtMatt said:


> No need for Afterburner, just use GPU Tuning in Radeon Software.


Tbf, AB does give you much more control than Wattman. I've stopped counting the times Wattman has decided to revert my settings to default, and not because there was a crash or anything. Annoying.


----------



## jonRock1992

tootall123 said:


> Just decided to join this thread as it seems to be a good source of info.
> 
> Currently have an Aorus Xtreme Waterforce and is a very good example.
> 
> This is my best score so far with 2890/2770 and 2150 memory.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 23 628 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> My cpu score is now lacking behind a few of the more recent posts.


Wow. I kind of want to sell my Red Devil Ultimate for one of those. Seems like they are all binned very well. I've never hit the silicon lottery with a GPU, which is why I went with XTXH. However, my freaking XTXH isn't even binned that well lol.


----------



## lestatdk

jonRock1992 said:


> Wow. I kind of want to sell my red devil ultimate for one of those. Seems like they are all binned very well. I've never hit the silicon lottery with a GPU, which is why I went with XTXH; However my freaking XTXH isn't even binned that well lol.


I managed to miss the lottery with both my GPU and my CPU this time


----------



## LtMatt

lestatdk said:


> Tbf , AB does give you much more control of things than Wattman. I stopped counting the times Wattman has decided to revert my settings to default, and not because there was a crash or anything. Annoying


It causes more problems than it's worth, as it conflicts with the driver. In the past, sure; nowadays it's better to use GPU Tuning. 

WattMan is no longer a thing btw.


----------



## lestatdk

LtMatt said:


> It causes more problems than its worth as it conflicts. In the past sure, nowadays its better to use GPU Tuning.
> 
> WattMan is no longer a thing btw.


I know it's not called Wattman anymore. Still, I have those issues with it resetting out of the blue. AB will for sure be more reliable. Also, no problems if you install only the driver portion of the Radeon software.


----------



## jonRock1992

Have you guys seen a gigabyte 6900 XT waterforce Xtreme not do at least 2800 MHz in Timespy? I'm seriously thinking about getting one lol. They are $1900 on Amazon right now.


----------



## LtMatt

jonRock1992 said:


> Have you guys seen a gigabyte 6900 XT waterforce Xtreme not do at least 2800 MHz in Timespy? I'm seriously thinking about getting one lol. They are $1900 on Amazon right now.


It's all luck of the draw. That said, at least if you order off Amazon you can return it and get your money back if it's not a good sample.


----------



## gtz

jonRock1992 said:


> Have you guys seen a gigabyte 6900 XT waterforce Xtreme not do at least 2800 MHz in Timespy? I'm seriously thinking about getting one lol. They are $1900 on Amazon right now.


If you want to save 50 bucks you can get it from newegg.









GIGABYTE AORUS Radeon RX 6900 XT Video Card GV-R69XTAORUSX WB-16GD - Newegg.com







But like mentioned Amazon's return policy can't be beat.


----------



## J7SC

jonRock1992 said:


> Have you guys seen a gigabyte 6900 XT waterforce Xtreme not do at least 2800 MHz in Timespy? I'm seriously thinking about getting one lol. They are $1900 on Amazon right now.


That's very tempting for that card, which I like a lot... I would go for it if I didn't already have a 'full house' of late-gen GPUs.


----------



## jonRock1992

Where would be the best place to sell my old GPU if I were to get the Gigabyte? I'd usually go to eBay, but they want 10% of the sale price. And I don't plan on giving eBay that much money lol.


----------



## gtz

What card are you selling? Might be interested. Do you have it on eBay or other forums?


----------



## jonRock1992

gtz said:


> What type are you selling? Might be interested. Do have on ebay or other forums?


Haven't decided if I want to sell it yet. It's a 6900 XT Red Devil Ultimate. There is a Bykski waterblock installed on it right now with Liquid Metal. I'm probably not going to put the air cooler back on it, but I'd include it if I decided to sell it.


----------



## amigafan2003

jonRock1992 said:


> Have you guys seen a gigabyte 6900 XT waterforce Xtreme not do at least 2800 MHz in Timespy? I'm seriously thinking about getting one lol. They are $1900 on Amazon right now.


Mine doesn't do 2800. I can set 2600 min / 2885 max but in Timespy it only reaches 2736.

Also not that impressed with the pasting - hits 95c hotspot and thus throttles back.


----------



## tolis626

Well, my Nitro+ doesn't seem to be the greatest at first glance. I can do a 2700 target at 1150mV, but that's about it. It's kind of weird, though: I've set up to 325W PL in MPT (340A TDC, 60A SOC) and given it +15% in Radeon Software and Afterburner, but it seems to be doing its own thing and will mostly stay at 305W, with some peaks towards 330W but nothing above that. At least in Superposition.

Even without raising power limits, temps aren't the greatest. I'm hitting 75C edge and 105C (peak) on the hotspot, which mostly hovers around 98C. If the card weren't so expensive, I'd be more keen to take it apart and repaste it with Kryonaut or Gelid GC Extreme. For now, I'm keeping it at 2650MHz/1.15V to enjoy gaming on the thing without problems, but I'd appreciate some pointers as to where I could find improvements.

Still, over double the performance of my 5700XT. The max I could get out of that thing in Superposition was in the 5600 range, and that was with the GPU holding on for dear life. The 6900XT broke 12300 with a quick n' dirty OC. I mean... Damn.

PS : Unless I put my ear literally on my case, I can hear no coil whine, and even then it's a low frequency hum instead of the high pitched annoyance I was used to with older GPUs. Me likey.


----------



## LtMatt

jonRock1992 said:


> Where would be the best place to sell my old GPU if I were to get the gigabyte. I'd usually go to eBay, but they want 10% of the profit. And I don't plan on giving eBay that much money lol.


You could wait for an eBay selling offer, so that the fee for selling is only a few bucks. They tend to come along every few weeks for me.

I sold my Merc there a month or so ago and only paid a few pounds in fees; it sold for £1200.

The guy I sold it to was nice as well, so I shared my 24/7 undervolt profile with him, which was nice and quiet and used up to 0.975v for stock clock operation.


----------



## tootall123

jonRock1992 said:


> Haven't decided if I want to sell it yet. It's a 6900 XT Red Devil Ultimate. There is a Bykski waterblock installed on it right now with Liquid Metal. I'm probably not going to put the air cooler back on it, but I'd include it if I decided to sell it.


I have an Alphacool block left over from a previous 6900 Red Devil Ultimate card. 

I keep meaning to put that on eBay.


----------



## jonRock1992

LtMatt said:


> You could wait for an Ebay offer on selling so that the fee for selling is only a few bucks. They tend to come along every few weeks for me.
> 
> I sold my Merc there a month or so ago and only paid a few pound in fees, sold for £1200.
> 
> The guy i sold it to was nice as well, so I shared my 24/7 undervolt profile with him which was nice and quiet and use up to 0.975v for stock clock operation.


I'm not sure what you mean. Can you elaborate please? Are you saying you can bypass the selling fees? I have that gigabyte card in my cart lol. I'm hoping someone will talk me out of it 😅. It just pisses me off that my card can only do 2775MHz max in Timespy, but in games I set 2850 MHz max. It's just that second graphics test that always trips up my GPU.


----------



## gtz

jonRock1992 said:


> I'm not sure what you mean. Can you elaborate please? Are you saying you can bypass the selling fees? I have that gigabyte card in my cart lol. I'm hoping someone will talk me out of it 😅. It just pisses me off that my card can only do 2775MHz max in Timespy, but in games I set 2850 MHz max. It's just that second graphics test that always trips up my GPU.


eBay likes sending coupons. Every once in a while they send coupons for posting items for sale: like a maximum selling fee of 10 bucks regardless of how much it sold for, or a coupon where they only charge 2 percent of the total sale instead of the usual 10-17%.

Look for them in your eBay messages.

They don't send those as frequently as they used to.
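For anyone weighing up whether to wait for one of those coupons, the difference is easy to sketch. A minimal example, using the rates mentioned above; the 13% "standard" rate is just an illustrative midpoint of the 10-17% range, and actual eBay fees vary by category and change over time:

```python
# Rough comparison of final-value fees under different promos.
# Rates here are illustrative (10-17% standard band, 2% coupon,
# "$10 max" cap promo), not official eBay numbers.

def fee(sale_price, rate=0.13, cap=None):
    """Fee at a given percentage rate, optionally capped at a flat amount."""
    f = sale_price * rate
    return min(f, cap) if cap is not None else f

price = 1200.0  # e.g. a 6900 XT sale
standard = fee(price)                      # ~13% standard fee -> $156
coupon_2pct = fee(price, rate=0.02)        # 2% promo -> $24
capped_10 = fee(price, rate=0.13, cap=10)  # "$10 max" promo -> $10

print(f"standard: ${standard:.2f}, 2% promo: ${coupon_2pct:.2f}, $10 cap: ${capped_10:.2f}")
```

On a £1200-class card either promo saves well over a hundred in fees, which is why waiting for the coupon is worth it.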


----------



## LtMatt

jonRock1992 said:


> I'm not sure what you mean. Can you elaborate please? Are you saying you can bypass the selling fees? I have that gigabyte card in my cart lol. I'm hoping someone will talk me out of it 😅. It just pisses me off that my card can only do 2775MHz max in Timespy, but in games I set 2850 MHz max. It's just that second graphics test that always trips up my GPU.


Post below you explained it well.


----------



## cfranko

jonRock1992 said:


> I'm not sure what you mean. Can you elaborate please? Are you saying you can bypass the selling fees? I have that gigabyte card in my cart lol. I'm hoping someone will talk me out of it 😅. It just pisses me off that my card can only do 2775MHz max in Timespy, but in games I set 2850 MHz max. It's just that second graphics test that always trips up my GPU.


Yeah, in the second graphics test, the part where it shows the books on the shelf inside the library always crashes my GPU as well.


----------



## LtMatt

Other than my CPU score being 200 points lower than my best, I have maxed out my Timespy score on my XTXH Toxic.

Can't score any higher on graphics points now without further driver improvements or mods.

3DMark Time Spy Hall Of Fame

*SCORE 24 091 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X*
Graphics Score 25 291
CPU Score 18 987

I also managed 1 extra graphics point to draw level with the legendary Kingjohn (Hall of Fame Timespy Graphics) in an overall worse run. 
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Think I can perhaps add a few points onto my previous best Timespy Extreme graphics score, so that is my next target.


----------



## cfranko

LtMatt said:


> Other than my CPU score being 200 points lower than my best, I have maxed out my Timespy score on my XTXH Toxic.
> 
> Can't score any higher on graphics points now without further driver improvements or mods.
> 
> 3DMark Time Spy Hall Of Fame
> 
> *SCORE 24 091 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X*
> Graphics Score 25 291
> CPU Score 18 987
> 
> I also managed 1 extra graphics point to draw level with the legendary Kingjohn (Hall of Fame Timespy Graphics) in an overall worse run.
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Think I can perhaps add a few points onto my previous best Timespy Extreme graphics score, so that is my next target.


I scored a 22,800 graphics score on air at 400 watts in Timespy. I'm doing a custom loop right now; do you think I could get on the hall of fame with the loop?


----------



## LtMatt

cfranko said:


> I scored a 22,800 graphics score on air at 400 watts in Timespy. I'm doing a custom loop right now; do you think I could get on the hall of fame with the loop?


I am sure you could. How high depends purely on your GPU and CPU silicon quality. My sample will only do 2825Mhz in Timespy, and it does not pass every time at this clock.

I have squeezed every last drop out of it to score what I do. I suspect the people around me are clocked higher on the GPU.


----------



## cfranko

LtMatt said:


> I am sure you could.


I set the maximum frequency to 2600 MHz; even 2605 MHz crashes in Timespy, so the hard limit is 2600 for me. I don't really think water would benefit me, but it may boost higher, so it may also help. I'll just see what happens when the loop is finished; I hope I'll get in the hall of fame.


----------



## LtMatt

cfranko said:


> I set the maximum frequency to 2600 MHz; even 2605 MHz crashes in Timespy, so the hard limit is 2600 for me. I don't really think water would benefit me, but it may boost higher, so it may also help. I'll just see when the loop is finished.


Water will definitely help improve your score and clocks, but by how much is difficult to say. For me, lower temps add another 25MHz+ onto what is normally stable at 25C higher temps.


----------



## lawson67

So this will be my first ever water loop build, and I have just received everything from Alphacool that I bought in a bundle from them. I noticed they sent me two slim 360 radiators (pictures below). I'm not too sure why they sent me these, or why anyone would want radiators with cores that don't fill the height of the radiator frames, which appear to be 30mm. That's the perfect height for my PC case; I just would have liked radiators whose cores use all of the 30mm height. Surely these will not be great at dissipating heat? Can anyone who's been building loops for years suggest two better radiators, or give their opinion on these? One is a cross-flow that I want to use in the top of my PC, and one has the holes next to each other at the bottom of the radiator. Both models are linked below. Are these good enough radiators, or can anyone suggest two better ones? Thanks.
Alphacool NexXxoS ST30 Full Copper 360mm radiator
Alphacool NexXxoS ST30 Full Copper X-Flow 360mm radiator


----------



## J7SC

lawson67 said:


> So this will be my first ever water loop build, and I have just received everything from Alphacool that I bought in a bundle from them. I noticed they sent me two slim 360 radiators (pictures below). I'm not too sure why they sent me these, or why anyone would want radiators with cores that don't fill the height of the radiator frames, which appear to be 30mm. That's the perfect height for my PC case; I just would have liked radiators whose cores use all of the 30mm height. Surely these will not be great at dissipating heat? Can anyone who's been building loops for years suggest two better radiators, or give their opinion on these? Both models are linked below. Thanks.
> Alphacool NexXxoS ST30 Full Copper 360mm radiator
> Alphacool NexXxoS ST30 Full Copper X-Flow 360mm radiator
> 
> View attachment 2520892
> View attachment 2520893


...the extra spacing is normal - every rad I've had for the last decade has a similar differential; it is there so that fan screws can be properly mounted w/o piercing the actual core, though even with the extra spacing, it pays to make sure that your fan / rad /screw combo can be tightened without reaching the core.


----------



## lestatdk

It's also better for the flow to not have the fan jammed up against the radiator fins . You can even buy a small spacer at Alphacool to add some additional space in between


----------



## lawson67

Thanks for the advice guys, I was getting a bit worried there. The only radiator I have seen is the 360mm on my Corsair 150 iCUE AIO, which has screws that go through the fins, so I thought these rads might not be up to scratch, as I wanna be whacking lots of power through my card. Hoping that these two 360mm rads with a D5 pump will keep it all cool enough. Anyhow, I'll get on with building it tomorrow then.
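For a sanity check on whether two 360s are enough: a common community rule of thumb is roughly 100 W of heat dissipated per 120 mm fan section at moderate fan speeds. A quick back-of-the-envelope sketch (the rule of thumb is an assumption; the real figure depends on fin density, fan speed, and the water/air delta you will accept):

```python
# Back-of-the-envelope radiator sizing using the rough community rule of
# thumb of ~100 W of heat per 120 mm fan section at moderate fan speeds.
# This is an estimate only; fin density, fan speed and acceptable
# coolant temperature all shift the real number.

def rad_capacity_watts(total_rad_mm, watts_per_120mm=100):
    """Estimate heat a set of radiators can handle from total length in mm."""
    return (total_rad_mm / 120) * watts_per_120mm

# Two 360 mm rads, as in the build above:
capacity = rad_capacity_watts(360 + 360)
gpu_power = 420  # W, the kind of power limit people run in this thread
print(f"~{capacity:.0f} W estimated capacity vs ~{gpu_power} W GPU load")
```

By that estimate, two 360s (~600 W) comfortably cover a 420 W GPU plus a CPU, which matches the good hotspot numbers reported later in the thread.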


----------



## lestatdk

lawson67 said:


> Thanks for the advice guys, I was getting a bit worried there. The only radiator I have seen is the 360mm on my Corsair 150 iCUE AIO, which has screws that go through the fins, so I thought these rads might not be up to scratch, as I wanna be whacking lots of power through my card. Hoping that these two 360mm rads with a D5 pump will keep it all cool enough. Anyhow, I'll get on with building it tomorrow then.


I'll be getting my Alphacool parts soon as well. I have opted for a single 280mm rad, 45mm thick. It will be for my GPU only, as I have an AIO on my CPU. Didn't want to overcomplicate my first custom water build. Will be interesting to see if it's enough cooling; I sure hope so. 
Anyway, it can't possibly be worse than the current stock cooler.


----------



## tootall123

lawson67 said:


> Thanks for the advice guys, I was getting a bit worried there. The only radiator I have seen is the 360mm on my Corsair 150 iCUE AIO, which has screws that go through the fins, so I thought these rads might not be up to scratch, as I wanna be whacking lots of power through my card. Hoping that these two 360mm rads with a D5 pump will keep it all cool enough. Anyhow, I'll get on with building it tomorrow then.


I'd have a look through this article and others on this site:

Radiator Review Round Up 2016 - ExtremeRigs.net
360 Radiator Review - hardware labs, koolance, ek, xspc, aquacomputer, watercool, water cooling, black ice, liquid cooling, phobya, alphacool, coolgate
www.xtremerigs.net

It's the most comprehensive comparison of radiators available, to the best of my knowledge.


----------



## J7SC

tootall123 said:


> I'd have a look through this article and others on this site:
> 
> Radiator Review Round Up 2016 - ExtremeRigs.net
> 360 Radiator Review - hardware labs, koolance, ek, xspc, aquacomputer, watercool, water cooling, black ice, liquid cooling, phobya, alphacool, coolgate
> www.xtremerigs.net
> 
> It's the most comprehensive comparison of radiators available, to the best of my knowledge.


 ...I loved xtremerigs.net, but it is no more afaik - the radiator roundup is from 2016 and is missing all kinds of newer rads and updates on existing ones.


----------



## weleh

Custom run I did today while doing some voltage scaling testing.

3DMark is bugged AF, which makes me wonder how many of the scores are actually legitimate.

25900 Graphics Score

I scored 0 in Time Spy Custom (AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10)
www.3dmark.com


----------



## jonRock1992

weleh said:


> Custom run I did today while doing some voltage scaling testing.
> 
> 3DMark is bugged AF, which makes me wonder how many of the scores are actually legitimate.
> 
> 25900 Graphics Score
> 
> I scored 0 in Time Spy Custom (AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10)
> www.3dmark.com


I'm testing out the new driver with 3DMark right now. There was also an update for 3DMark. Maybe it messed something up?


----------



## jonRock1992

I was able to increase my min/max GPU core clocks by 20 MHz with the latest driver in Timespy. I was able to do 2690 MHz min / 2790 MHz max. Really wasn't expecting that at all. AMD is killing it with driver updates. Got 25 more points for my graphics score.

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)

UPDATE:
I can run Timespy Extreme at 2750 MHz min / 2850 MHz max. Broke 12K graphics score.

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## HeLeX63

lawson67 said:


> So this will be my first ever water loop build and i have just received everything from Alphacool that i bought in a bundle off them and i noticed they sent me 2 slim 360 radiators (picture's below) i am not to sure why they sent me these or why anyone would want radiator's that have cores that don't fill the height of the radiator frames which appear to be 30mm in height which is the perfect height for my pc case i just would of liked radiators that have cores that use all of the 30mm height of the radiators, surly these will not be great at dissipating heat?, can anyone who's been build loops for years suggest 2 better radiators or give there opinion on these radiators?, one is a cross flow that i want to use in the top of my PC and one has the holes next to each other at the bottom of the radiator, both models are linked below, are these good enough radiator's or can anyone suggest 2 better radiators thanks
> Alphacool NexXxoS ST30 Full Copper 360mm radiator
> Alphacool NexXxoS ST30 Full Copper X-Flow 360mm radiator
> 
> View attachment 2520892
> View attachment 2520893


Nice. I have the ST30 560 from Alphacool. Compared to my thick EK 480, the tubes are much wider and the fin density is really loose. Great for flow of water and air, at the expense of a slight loss in cooling efficiency.


----------



## LtMatt

weleh said:


> Custom run I did today while doing some voltage scaling testing.
> 
> 3DMark is bugged AF, which makes me wonder how many of the scores are actually legitimate.
> 
> 25900 Graphics Score
> 
> I scored 0 in Time Spy Custom (AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10)
> www.3dmark.com


I'm able to score much higher in custom runs also. It's weird as hell, because that never carries over into legitimate runs.


----------



## LtMatt

jonRock1992 said:


> I was able to increase my min/max GPU core clocks by 20 MHz with the latest driver in Timespy. I was able to do 2690 MHz min / 2790 MHz max. Really wasn't expecting that at all. AMD is killing it with driver updates. Got 25 more points for my graphics score.
> View attachment 2520942
> 
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> 
> UPDATE:
> I can run Timespy Extreme at 2750 MHz min / 2850 MHz max. Broke 12K graphics score.
> View attachment 2520957
> 
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


Very nice Jon! I am also able to run Timespy Extreme at higher clocks, but not quite as high as 2850. I think I managed 2840 last time I ran it on 21.7.2.


----------



## tolis626

Well, damn. No matter what I do, it seems I'm getting high hotspot temps. When I have time I'll see about first making sure all screws are tightened properly, hoping that that fixes it somewhat. But I won't be doing much overclocking with hotspot reaching 105-109C at stock (granted, fans don't ramp up a lot at stock, but still).


----------



## majestynl

jonRock1992 said:


> Wow. I kind of want to sell my red devil ultimate for one of those. Seems like they are all binned very well. I've never hit the silicon lottery with a GPU, which is why I went with XTXH; However, my freaking XTXH isn't even binned that well lol.





jonRock1992 said:


> Have you guys seen a gigabyte 6900 XT waterforce Xtreme not do at least 2800 MHz in Timespy? I'm seriously thinking about getting one lol. They are $1900 on Amazon right now.


I had one and it was terrible! So not all perfect!


----------



## jonRock1992

majestynl said:


> I had one and it was terrible! So not all perfect!


Thank you for the info! I've decided to keep my Red Devil Ultimate. It's only 10MHz shy of 2800MHz max in Timespy, and it can do 2850MHz max in Timespy Extreme. I think I'm satisfied with that. With my luck, I'd get a different XTXH GPU and it would be worse. I think with an EVC2SX I'd be able to break 25k in Timespy, but I'm not going to do that just yet.


----------



## lestatdk

My package just arrived from Alphacool 

By this time tomorrow if everything goes well, this sucker is on water and hopefully doing better than with the stock cooler ( can't imagine it's possible to do worse ) .

Friend of mine coming over to machine a bit off the block to match the PCB as this was for the 6800xt model. Still no model for my specific card. Fingers crossed


----------



## The EX1

EK summer sale has the copper block for 6800/6900 reference available for $75.

EK-Quantum Vector RX 6800/6900 - Copper + Plexi
EK-Quantum Vector RX 6800/6900 - Copper + Acetal

EK-Quantum Vector RX 6800/6900 is a 2nd generation Vector GPU water block from the EK® Quantum Line. It is made for graphics cards based on the latest AMD® RDNA2™ architecture. This water block fits most reference PCB designs of the Radeon RX 6800, RX 6800XT, and RX 6900 GPUs.

www.ekwb.com


----------



## Stopthewar

Can someone link and recommend a tutorial on how to get the most out of an air-cooled GPU?


----------



## lestatdk

Check this series out by TheGrayingTech. It's really good


----------



## jonRock1992

Stopthewar said:


> Can someone link and recommend a tutorial on how to get the most out of an air-cooled GPU?


I mean, if you really want to get the absolute most out of an air-cooled GPU, you'll first need to repaste with liquid metal (if the base of your heatsink is copper or nickel-plated copper). After that, use MPT to unlock your power limit. Then just dial in your OC with whatever software you choose to use for stability testing. There are also various other tweaks you can do in MPT, but they only make a very minor difference.


----------



## lawson67

lestatdk said:


> My package just arrived from Alphacool
> 
> By this time tomorrow if everything goes well, this sucker is on water and hopefully doing better than with the stock cooler ( can't imagine it's possible to do worse ) .
> 
> Friend of mine coming over to machine a bit off the block to match the PCB as this was for the 6800xt model. Still no model for my specific card. Fingers crossed


Got my card in the Alphacool water block now with the die coated in LM. All rads and fans in, pump in; CPU block in tomorrow, and finish it all off.


----------



## Thanh Nguyen

Hi guys, I just bought a 6900 XT Red Devil Ultimate. Does it have the same PCB as the Red Devil? I can't find a waterblock for the Ultimate; I just see a Red Devil waterblock. Which brand of waterblock should I go with? Thanks.


----------



## L!ME

Yes, it has the same PCB.

I managed to get some more points with my 5950X:

I scored 24 445 in Time Spy (AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10)
www.3dmark.com


----------



## tootall123

Thanh Nguyen said:


> Hi guys, I just bought a 6900 XT Red Devil Ultimate. Does it have the same PCB as the Red Devil? I can't find a waterblock for the Ultimate; I just see a Red Devil waterblock. Which brand of waterblock should I go with? Thanks.


The pcb is the same. 

I had an Alphacool block and can confirm it fits.


----------



## lawson67

All finished and running super cool: hotspot 64C at 420W. Very pleased indeed; that Alphacool water block with LM is amazing!


----------



## LtMatt

lawson67 said:


> All finished and running super cool: hotspot 64C at 420W. Very pleased indeed; that Alphacool water block with LM is amazing!
> 
> View attachment 2521125


Looks nice that, can’t wait to see what you can do with that great sample you have.

About to validate my new best Timespy and Timespy Extreme scores.


----------



## jonRock1992

lawson67 said:


> All finished and running super cool: hotspot 64C at 420W. Very pleased indeed; that Alphacool water block with LM is amazing!
> 
> View attachment 2521125


Looks nice! Excited for you! Looking forward to seeing how it performs in Timespy.

I'm getting the same hotspot temp with my bykski waterblock and liquid metal. That's also with a 420W PL. In games though it's in the 50's. Liquid metal is the way to go with XTXH.


----------



## airisom2

Can confirm that Bykski has an ASRock 6900 XT OC Formula PCB in house and is currently developing a block for it. Should be a few more months until release.


----------



## LtMatt

jonRock1992 said:


> Looks nice! Excited for you! Looking forward to seeing how it performs in Timespy.
> 
> I'm getting the same hotspot temp with my bykski waterblock and liquid metal. That's also with a 420W PL. In games though it's in the 50's. Liquid metal is the way to go with XTXH.


Wish I had used that now on my Toxic. Don’t want to risk the lottery of taking it apart again now though after the mounting problems I had previously.


----------



## kairi_zeroblade

lawson67 said:


> All finished and running super cool: hotspot 64C at 420W. Very pleased indeed; that Alphacool water block with LM is amazing!
> 
> View attachment 2521125


OMG!!! RGB!!! MY EYES!!!!!!!!!!!!!!!!!!!!!!!!!!


----------



## LtMatt

Stand by gentlemen, results incoming from the Toxic.


----------



## LtMatt

LtMatt said:


> Stand by gentlemen, results incoming from the Toxic.


Put in a session today, delighted with the results.
*Timespy*
5950X @ 4.825Ghz
6900 XT Toxic @ 2825Mhz/2162Mhz
21.7.2
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

SCORE 24 229 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
Graphics Score
25 501
CPU Score
18 892


*Timespy Extreme*
5950X @ 4.825Ghz
6900 XT Toxic @ 2860Mhz/2162Mhz
21.7.2
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

SCORE 12 185 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
Graphics Score
12 309
CPU Score
11 532


*Firestrike Extreme*
5950X @ 4.825Ghz
6900 XT Toxic @ 2845Mhz/2162Mhz
21.7.2
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

SCORE 31 667 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
Graphics Score
33 735
Physics Score
43 939
Combined Score
16 856



Ran out of time for Firestrike Ultra, but can improve that score too.

3rd in the world for Firestrike Extreme, 7th for Timespy, 26th for Timespy Extreme. Who needs hardware mods, LN2/Chiller or (true) water cooling?


----------



## xR00Tx

LtMatt said:


> *Timespy*
> 5950X @ 4.825Ghz
> 6900 XT Toxic @ 2825Mhz/2162Mhz
> 21.7.2
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> SCORE 24 229 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
> Graphics Score
> 25 501
> CPU Score
> 18 892
> 
> 3rd in the world for Firestrike Extreme, 7th for Timespy, 26th for Timespy Extreme. Who needs hardware mods, LN2/Chiller or (true) water cooling?


Congratulations man!!! You got extraordinary results!!!

Don't even know what else I could try to do to get closer to your score! lol 👍👍👍


----------



## Petet1990

Wrong forum, I know, but does anyone know of a waterblock for the Red Devil 6700 XT?


----------



## lestatdk

It's done! My first water build. Glad I had a friend over to help; there were many obstacles on the way.

Setup is (all parts are Alphacool) : waterblock (modified 6800XT block) , 250mm reservoir, VPP755 pump, 280mm crossflow radiator 45mm thick and TPV tubes. Red coolant.

Unfortunately not easy to take a good picture in the dark, but I just love the look of it 

My hotspot has dropped 30-40 degrees now at a raised power limit. Not sure I want to push it much further; instead I have plenty of headroom and don't have to worry about thermal shutdown anymore.


----------



## J7SC

...just hooked up an LG C1 48 to the 6900XT on one HDMI 2.1 channel, w/ the 3090 on another - home-office build for productivity and entertainment... 4K 120 HDR OLED is simply jaw-dropping.


----------



## tolis626

After having somewhat won the silicon lottery with my 5700XT (could do 2150MHz at 1.14V), I am disappointed to say that my 6900XT doesn't seem to be much of a winner. No matter what I seem to do, it doesn't want to do 2700MHz in Timespy, it crashes in the demo. I tried raising power limits, lowering them, undervolting, cranking the fans, nothing seems to help, I crash a few seconds in the demo. My only "hope" at this point is that it's unstable because of high hotspot temps (>105C with raised power limits), but I ain't holding my breath. Might end up taking it apart and putting some Kryonaut on it, or maybe even LM, although I'm still kind of afraid of the latter, I've never used it.

Any tips on MPT tweaks that could help? I haven't seen anyone here trying to tame the beast, most of you guys seem to be going balls to the wall, but my card can't keep up with silicon lottery winners and XTXH's. I'm thinking about lowering the SoC voltage to see if that helps bring the temps down, but I haven't found any data about it. I've also stupidly not tried with the memory at stock clocks, I immediately went to 2150MHz with FT and left it there, although it doesn't seem to be causing any issues, it works fine on lower core clocks, at least in gaming.

TL;DR : Please drop some wisdom for a poor silicon lottery loser.


----------



## ZealotKi11er

tolis626 said:


> After having somewhat won the silicon lottery with my 5700XT (could do 2150MHz at 1.14V), I am disappointed to say that my 6900XT doesn't seem to be much of a winner. No matter what I seem to do, it doesn't want to do 2700MHz in Timespy, it crashes in the demo. I tried raising power limits, lowering them, undervolting, cranking the fans, nothing seems to help, I crash a few seconds in the demo. My only "hope" at this point is that it's unstable because of high hotspot temps (>105C with raised power limits), but I ain't holding my breath. Might end up taking it apart and putting some Kryonaut on it, or maybe even LM, although I'm still kind of afraid of the latter, I've never used it.
> 
> Any tips on MPT tweaks that could help? I haven't seen anyone here trying to tame the beast, most of you guys seem to be going balls to the wall, but my card can't keep up with silicon lottery winners and XTXH's. I'm thinking about lowering the SoC voltage to see if that helps bring the temps down, but I haven't found any data about it. I've also stupidly not tried with the memory at stock clocks, I immediately went to 2150MHz with FT and left it there, although it doesn't seem to be causing any issues, it works fine on lower core clocks, at least in gaming.
> 
> TL;DR : Please drop some wisdom for a poor silicon lottery loser.


If you only have an XTX, don't expect anything more than 2600MHz in TS.


----------



## LtMatt

tolis626 said:


> After having somewhat won the silicon lottery with my 5700XT (could do 2150MHz at 1.14V), I am disappointed to say that my 6900XT doesn't seem to be much of a winner. No matter what I seem to do, it doesn't want to do 2700MHz in Timespy, it crashes in the demo. I tried raising power limits, lowering them, undervolting, cranking the fans, nothing seems to help, I crash a few seconds in the demo. My only "hope" at this point is that it's unstable because of high hotspot temps (>105C with raised power limits), but I ain't holding my breath. Might end up taking it apart and putting some Kryonaut on it, or maybe even LM, although I'm still kind of afraid of the latter, I've never used it.
> 
> Any tips on MPT tweaks that could help? I haven't seen anyone here trying to tame the beast, most of you guys seem to be going balls to the wall, but my card can't keep up with silicon lottery winners and XTXH's. I'm thinking about lowering the SoC voltage to see if that helps bring the temps down, but I haven't found any data about it. I've also stupidly not tried with the memory at stock clocks, I immediately went to 2150MHz with FT and left it there, although it doesn't seem to be causing any issues, it works fine on lower core clocks, at least in gaming.
> 
> TL;DR : Please drop some wisdom for a poor silicon lottery loser.


Get an XTXH if you want the best chance of achieving 2700MHz in Timespy, but even then it's not completely guaranteed.

Unless you have a legendary XTX sample (like xROOTx) you won't be able to keep up with XTXHs, due to their better silicon quality and extra voltage headroom.

If you're not going water cooling (in which case pick any XTXH with support for a block), you should get either a Strix or Toxic 6900 XTXH AIO; boost clock is listed at 2525MHz for those models.

Hope you have a few quid.


----------



## Petet1990

Anyone here know if there is a waterblock for the Red Devil 6700 XT?


----------



## LtMatt

@lawson67 - How you getting on with your new setup?


----------



## ptt1982

Quick positive 21.8.1 driver update on my Red Devil 6900XT (non-ultimate):

-Timespy went through at 2705mhz, which is +55mhz and a new record! This is with 330W MPT limit, going to test with more power now (edits incoming)
-Further testing: at MPT 360W/340W TS passed at 2680mhz (+30mhz from previous drivers). Graphics score slightly higher (+70 points from MPT), scaling for the card ends here or is completely marginal after
-Junction temp dropped by 5C, Edge stays mostly under 52C even in TS GT2, saw junction spikes to 86C max, mostly 70-80C (MPT 360W +15%, power goes to 420W).
-Edge/Junction delta dropped 5C to 30-35C mostly, spiking to 38C sometimes (with extensive use, 45C is probably the absolute max peak), but in 4K60 Vsynced gameplay it spikes to only 30C even when power consumption goes past 370W, mostly staying at 20-25C
-Disco Elysium shadow bug is FINALLY fixed, going to dig into the game at last
-Driver performance boost around 1.2-1.5% depending on settings (based on TS score)
-Additionally, game temperatures have dropped 5-10C with the new drivers and by keeping TDP at stock (320) and only increasing the wattage on MPT (from 281 to 340), the temps stay much better in control. For example FFXIV 4K60 (Vsynced) all maxed out in expansion dungeons spike max to 61C junction, Edge stays at under 46C 

I'm a bit happier again, because the temps are not going higher over time as they did last time, and the driver brought them down. Probably the mount could be better, but maybe this is simply my cooling performance with the current parts. After all, it's really hot in Tokyo right now, my parts are budget watercooling parts, and the GPU runs 5-10C hotter than the XTXH variants. Junction only spirals out of control after the 380W mark.

Been gaming happily for the last few weeks while checking temps, and so far no strange behavior! And now again another good driver update with slightly lower temps and more performance.

Interestingly, the driver updates have given me a 60-85MHz boost (depending on settings) from the previous 2620MHz max I had five months ago, plus a nice extra 3100 points in TS. I have to give it to AMD. Now, if they could somehow do the same trick with ray tracing optimization (I mean, look at Ratchet and Clank and Spider-Man on PS5, on a GPU equivalent to a 6600XT)...
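For anyone wanting to express their own driver-to-driver gains the same way, the percentage figures above are just simple score ratios. A tiny sketch (the scores used here are hypothetical examples, not my actual results):

```python
# Percent gain between two Timespy graphics scores.
# The example scores below are hypothetical, purely to show the math.

def pct_gain(old_score, new_score):
    """Relative improvement of new_score over old_score, in percent."""
    return (new_score - old_score) / old_score * 100

# e.g. a 300-point uplift on a 20,000 baseline is a 1.5% driver gain
print(f"{pct_gain(20000, 20300):.2f}%")
```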


----------



## xR00Tx

tolis626 said:


> After having somewhat won the silicon lottery with my 5700XT (could do 2150MHz at 1.14V), I am disappointed to say that my 6900XT doesn't seem to be much of a winner. No matter what I seem to do, it doesn't want to do 2700MHz in Timespy, it crashes in the demo. I tried raising power limits, lowering them, undervolting, cranking the fans, nothing seems to help, I crash a few seconds in the demo. My only "hope" at this point is that it's unstable because of high hotspot temps (>105C with raised power limits), but I ain't holding my breath. Might end up taking it apart and putting some Kryonaut on it, or maybe even LM, although I'm still kind of afraid of the latter, I've never used it.
> 
> Any tips on MPT tweaks that could help? I haven't seen anyone here trying to tame the beast, most of you guys seem to be going balls to the wall, but my card can't keep up with silicon lottery winners and XTXH's. I'm thinking about lowering the SoC voltage to see if that helps bring the temps down, but I haven't found any data about it. I've also stupidly not tried with the memory at stock clocks, I immediately went to 2150MHz with FT and left it there, although it doesn't seem to be causing any issues, it works fine on lower core clocks, at least in gaming.
> 
> TL;DR : Please drop some wisdom for a poor silicon lottery loser.


I have an XTX 6900 XT Sapphire Nitro+ and initially the hotspot temperature was also going over 100c. 
Right away, I took it apart and replaced the original thermal paste with Kryonaut. Hotspot temperature dropped about 30 degrees Celsius, and the delta between die and hotspot temperature was 20-25C maximum.

Now I use a waterblock and liquid metal and the delta between the temperatures is about 10c @ 360w.

I recommend that you replace your board's thermal paste!

I don't remember what was the maximum core clock on Time Spy before replacing the thermal compound, but today I can get 2860mhz (2870 on colder days) with 21.7.2 driver.
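For the Linux folks keeping an eye on that die-to-hotspot delta after a repaste: the amdgpu driver exposes those sensors through hwmon, so a small Python sketch can read them while you bench. Just a rough sketch — the label names (`edge`, `junction`, `mem`) are what recent kernels report, but they aren't guaranteed on every setup:

```python
import glob
import os

def read_amdgpu_temps(hwmon_root="/sys/class/hwmon"):
    """Return {label: degrees C} for the first amdgpu hwmon device found.

    RDNA2 cards typically expose 'edge' (die), 'junction' (hotspot)
    and 'mem' sensors, but label names can vary by kernel version.
    """
    for dev in sorted(glob.glob(os.path.join(hwmon_root, "hwmon*"))):
        try:
            with open(os.path.join(dev, "name")) as f:
                if f.read().strip() != "amdgpu":
                    continue
        except OSError:
            continue
        temps = {}
        for label_path in glob.glob(os.path.join(dev, "temp*_label")):
            try:
                with open(label_path) as f:
                    label = f.read().strip()
                # matching temp*_input holds the value in millidegrees C
                with open(label_path.replace("_label", "_input")) as f:
                    temps[label] = int(f.read().strip()) / 1000.0
            except OSError:
                continue
        return temps
    return {}

def die_to_hotspot_delta(temps):
    """Junction (hotspot) minus edge (die) temperature, or None if missing."""
    if "junction" in temps and "edge" in temps:
        return temps["junction"] - temps["edge"]
    return None
```

Poll `die_to_hotspot_delta(read_amdgpu_temps())` in a loop during a Time Spy run and you can watch the delta live as the paste or LM settles in.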


----------



## Henry Owens

xR00Tx said:


> I have an XTX 6900 XT Sapphire Nitro+ and initially the hotspot temperature was also going over 100c.
> Right away, I took it apart and replaced the original thermal paste with Kryonaut. Hotspot temperature dropped about 30 degrees Celsius, and the delta between die and hotspot temperature was 20-25C maximum.
> 
> Now I use a waterblock and liquid metal and the delta between the temperatures is about 10c @ 360w.
> 
> I recommend that you replace your board's thermal paste!
> 
> I don't remember what was the maximum core clock on Time Spy before replacing the thermal compound, but today I can get 2860mhz (2870 on colder days) with 21.7.2 driver.


Liquid metal, are you worried about corroding the block or your die?


----------



## Henry Owens

lawson67 said:


> Got my card in the Alphacool water block now with die coated in LM, all Rads and fans in , pump in, CPU block in tomorrow and finish it all off
> 
> 
> View attachment 2521042
> View attachment 2521043


So what do we have here, top/side exhaust and front intake, with no radiator?


----------



## Henry Owens

gtz said:


> If you want to save 50 bucks you can get it from newegg.
> 
> GIGABYTE AORUS Radeon RX 6900 XT Video Card GV-R69XTAORUSX WB-16GD - Newegg.com
> 
> But like mentioned Amazon's return policy can't be beat.


Yes it is. I returned a 5800X which, unknown to me at the time, was quite the golden sample.


----------



## cfranko

I have a Bykski 6900 xt waterblock but I couldn’t see which hole is the inlet and which hole is the outlet in the manual. Can someone who uses a Bykski block help?


----------



## MietiOC

Ok, cracked a 23k graphics score with my ASRock Phantom D on air in my mITX without any additional cooling, at 36°C ambient. I think this is it for now, unless I get a different cooling solution. Fastest 6900 XT in Italy and fastest 3950X + 6900 XT worldwide. I think I can call it a day. Should I call it a day?


----------



## xR00Tx

Henry Owens said:


> Liquid metal, are you worried about corroding the block or your die?


My old 2080Ti had liquid metal for almost 2 years and I didn't have any problems at all. 
Both the gpu die and the waterblock base (nickel) were as new after removing the liquid metal... So I'm not worried about corrosion on the 6900xt waterblock as it's also nickel plated.


----------



## lestatdk

cfranko said:


> I have a Bykski 6900 xt waterblock but I couldn’t see which hole is the inlet and which hole is the outlet in the manual. Can someone who uses a Bykski block help?


It should not matter which you use as in/out. So use them in whichever way is the most convenient for your build


----------



## Henry Owens

xR00Tx said:


> My old 2080Ti had liquid metal for almost 2 years and I didn't have any problems at all.
> Both the gpu die and the waterblock base (nickel) were as new after removing the liquid metal... So I'm not worried about corrosion on the 6900xt waterblock as it's also nickel plated.


Maybe I'll get into that one day, does it hurt resale?


----------



## Henry Owens

cfranko said:


> I have a Bykski 6900 xt waterblock but I couldn’t see which hole is the inlet and which hole is the outlet in the manual. Can someone who uses a Bykski block help?


Just follow the path to whichever route goes to the top of the GPU, if it is an acrylic top.


----------



## Henry Owens

1. Does minimum clock have any effect on overclocking/ stability? I'm currently leaving mine on 500.
2. Time spy has hit my 400w power limit, should I increase this?


----------



## lestatdk

Henry Owens said:


> 1. Does minimum clock have any effect on overclocking/ stability? I'm currently leaving mine on 500.
> 2. Time spy has hit my 400w power limit, should I increase this?


In Timespy I can score slightly higher with the minimum at 500 plus increasing the max a little as well, but with increased instability. So I usually leave the minimum 100 MHz below the maximum. This is more stable for my setup.
Have not yet done a test in Firestrike to determine which way works best


----------



## Henry Owens

lestatdk said:


> In Timespy I can score slightly higher with minimum at 500 + increasing max a little as well, but with increased instability. So I usually leave minimum 100 MHz below the maximum. This is more stable for my setup.
> Have not yet done a test in Firestrike to determine which way works the best


Timespy non-extreme is a tough one, but I'm using that for my stability testing


----------



## xR00Tx

Henry Owens said:


> Maybe I'll get into that one day, does it hurt resale?


When I sold the 2080Ti, I went back to the original air block and left it with kryonaut. And I didn't even mention that I had used liquid metal.


----------



## lestatdk

Henry Owens said:


> Timespy non extreme is a tough one but I'm using that for my stability


Yes, it's tough. I can do higher frequencies in FS. TS is a good indication of stability for sure


----------



## LtMatt

Henry Owens said:


> Timespy non extreme is a tough one but I'm using that for my stability


Yes, Timespy insists on the lowest GPU clocks for me out of all the 3Dmark benches. I think it's because of the high FPS.


----------



## Henry Owens

Here she is: 2675 / 1130mV
400/400 MPT
Out of all my runs, max GPU temp 47, max hotspot 65
Update: 2675 wasn't stable, but it passed 20 runs of the Timespy stress test at 2670


----------



## jonRock1992

cfranko said:


> I have a Bykski 6900 xt waterblock but I couldn’t see which hole is the inlet and which hole is the outlet in the manual. Can someone who uses a Bykski block help?


So, for my bykski wb, the left one is the inlet and the right is the outlet.


----------



## xR00Tx

cfranko said:


> I have a Bykski 6900 xt waterblock but I couldn’t see which hole is the inlet and which hole is the outlet in the manual. Can someone who uses a Bykski block help?





jonRock1992 said:


> So, for my bykski wb, the left one is the inlet and the right is the outlet.


Same here. The inlet is on the left (waterblock seen from the front).


----------



## cfranko

jonRock1992 said:


> So, for my bykski wb, the left one is the inlet and the right is the outlet.





xR00Tx said:


> Same here. The inlet is on the left (waterblock seen from the front).


Also, the stock backplate of my GPU has thermal pads on it. However, the Bykski block manual doesn't show that I need to put thermal pads on the backplate; it only shows that I need thermal pads on the memory and other stuff on the PCB that I don't know what they are. When installing the waterblock backplate, do I put thermal pads on the backplate or not?


----------



## lawson67

LtMatt said:


> @lawson67 - How you getting on with your new setup?


Mate, I had a nightmare, lol. I was trying to get the air out of the loop, so I tipped the PC up on one end to move the air through the loop, but I forgot I still had the fill port open on the reservoir. Water everywhere. The whole lot shut off straight away with QR code 00, water all over the RAM and board. Thought I had blown the lot up, but I cleaned up with kitchen roll, pulled the RAM out and dried that up too, then got the hair dryer out. Three hours later, up and running again


----------



## lawson67

Henry Owens said:


> So what do we have here top/side exhaust and front Intake with no radiator?


Two 360mm rads, one on top and one on the back side wall; intake on the side wall, exhaust through the roof in this rad setup


----------



## lestatdk

cfranko said:


> Also, the stock backplate of my gpu has thermal pads on them. However in the bykski block manual it doesn’t show that I need to put thermal pads on the backplate, it only shows that I need thermal pads on the memory and other stuff in the PCB that I don’t know what they are. When installing the waterblock backplate do I put thermal pads on the backplate or not?


My Alphacool block came with thermal pads for the backplate. I'd highly recommend it for the added cooling.


----------



## lestatdk

lawson67 said:


> Mate, I had a nightmare, lol. I was trying to get the air out of the loop, so I tipped the PC up on one end to move the air through the loop, but I forgot I still had the fill port open on the reservoir. Water everywhere. The whole lot shut off straight away with QR code 00, water all over the RAM and board. Thought I had blown the lot up, but I cleaned up with kitchen roll, pulled the RAM out and dried that up too, then got the hair dryer out. Three hours later, up and running again


OMG 

I was afraid of something similar yesterday when we assembled the loop, so I was always double and triple checking everything . Glad you're OK and running again


----------



## cfranko

lestatdk said:


> My Alphacool block came with thermal pads for the backplate. I'd highly recommend it for the added cooling.


Afaik there isn’t any VRAM on the back of these cards. What purpose do thermal pads on the backplate serve?


----------



## LtMatt

lawson67 said:


> Mate, I had a nightmare, lol. I was trying to get the air out of the loop, so I tipped the PC up on one end to move the air through the loop, but I forgot I still had the fill port open on the reservoir. Water everywhere. The whole lot shut off straight away with QR code 00, water all over the RAM and board. Thought I had blown the lot up, but I cleaned up with kitchen roll, pulled the RAM out and dried that up too, then got the hair dryer out. Three hours later, up and running again


Jesus H Christ, Lol. Thought you were quiet. Glad to hear all is okay, you lucky swine.


----------



## lestatdk

cfranko said:


> afaik there isn’t any vram on the back of these cards. What purpose does thermal pads on the backplate serve?


The pads cool the backside of the PCB. This way the GPU and memory get additional cooling. I noticed the pads were set in the exact same locations, just on the backside of the PCB. 
You can find YT videos showing that either removing the backplate altogether or giving it thermal pads will improve cooling and thus performance.
There's a reason they are there on the stock backplates as well


----------



## lestatdk

LtMatt said:


> Jesus H Christ, Lol. Thought you were quiet. Glad to hear all is okay, you lucky swine.


Not only does he win the GPU lottery every single time, but now when he tries to kill his system he's lucky not to kill it as well. Man, I wish I was half as lucky 😅


----------



## tolis626

ZealotKi11er said:


> If you have only XTX, dont expect anything more than 2600MHz in TS.


I knew that it was mostly a case of unrealistic expectations on my part, but, well, I was expecting more. Thanks for the truth bomb. 


LtMatt said:


> Get an XTXH if you want the best chance to achieve 2700Mhz in Timespy, but even then it's not completely guaranteed.
> 
> Unless you have a legendary XTX sample (like xROOTx  ) then you won't be able to keep up with XTXH's due to their better silicon quality and extra voltage headroom.
> 
> Unless you are going water cooling (in which case pick any XTXH with support for a block) then you should get either a Strix or Toxic 6900 XTXH AIO, boost clock listed at 2525Mhz for those models.
> 
> Hope you have a few quid.


That's the thing, getting a 6900XT instead of a 6800XT already meant that I was going ever so slightly over budget, I just thought that it was worth the few extra euros I paid (and it is). An XTXH was and remains out of reach at the moment for me. I would've loved to get one, but I'd have to splurge like 25% more cash. 1300€ for a 6900XT Nitro+ SE isn't terrible in the current market, but to get a Toxic I'd have to go to about 2000€, which is definitely not worth it. Thanks for the info, though!

As I said above, it's not that my card is bad (although hotspot temps tend to be a bit ridiculous), I was just a bit disappointed because I expected more for no reason. It's not that the performance isn't there, I have so much of it that it's not even funny on my 1440p 144Hz panel, which is in dire need of an upgrade.


xR00Tx said:


> I have an XTX 6900 XT Sapphire Nitro+ and initially the hotspot temperature was also going over 100c.
> Right away, I took it apart and replaced the original thermal paste with Kryonaut. Hotspot temperature dropped about 30 degrees Celsius, and the delta between die and hotspot temperature was 20-25C maximum.
> 
> Now I use a waterblock and liquid metal and the delta between the temperatures is about 10c @ 360w.
> 
> I recommend that you replace your board's thermal paste!
> 
> I don't remember what was the maximum core clock on Time Spy before replacing the thermal compound, but today I can get 2860mhz (2870 on colder days) with 21.7.2 driver.


You shut up. You won the lottery. Don't talk to me! 

In all seriousness though, thanks for that, that was exactly the info I needed. Ok, sure, my card might not be a golden sample, but with hotspot temps reaching over 105C at stock clocks, I can't imagine even a winner like your card clocking very high. Since repasting with a good TIM helps, that's what I need to do. I'll keep using the card as is for now, make sure nothing is defective or anything (don't wanna get in trouble having to RMA a tinkered-with card) and I'll eventually get to it. I don't think I'll go LM at this stage, I'm already uncomfortable enough having to tear apart such an expensive GPU, but I do have both Kryonaut and Gelid GC Extreme, so one of these will have to do for the time being.

Thanks a bunch mate!


----------



## lestatdk

tolis626 said:


> I knew that it was mostly a case of unrealistic expectations on my part, but, well, I was expecting more. Thanks for the truth bomb.
> 
> That's the thing, getting a 6900XT instead of a 6800XT already meant that I was going ever so slightly over budget, I just thought that it was worth the few extra euros I paid (and it is). An XTXH was and remains out of reach at the moment for me. I would've loved to get one, but I'd have to splurge like 25% more cash. 1300€ for a 6900XT Nitro+ SE isn't terrible in the current market, but to get a Toxic I'd have to go to about 2000€, which is definitely not worth it. Thanks for the info, though!
> 
> As I said above, it's not that my card is bad (although hotspot temps tend to be a bit ridiculous), I was just a bit disappointed because I expected more for no reason. It's not that the performance isn't there, I have so much of it that it's not even funny on my 1440p 144Hz panel, which is in dire need of an upgrade.
> 
> You shut up. You won the lottery. Don't talk to me!
> 
> In all seriousness though, thanks for that, that was exactly the info I needed. Ok, sure, my card might not be a golden sample, but with hotspot temps reaching over 105C at stock clocks, I can't imagine even a winner like your card clocking very high. Since repasting with a good TIM helps, that's what I need to do. I'll keep using the card as is for now, make sure nothing is defective or anything (don't wanna get in trouble having to RMA a tinkered-with card) and I'll eventually get to it. I don't think I'll go LM at this stage, I'm already uncomfortable enough having to tear apart such an expensive GPU, but I do have both Kryonaut and Gelid GC Extreme, so one of these will have to do for the time being.
> 
> Thanks a bunch mate!


Time to save up for a watercooling setup 

Just got my Alphacool system up and running.


Here's a block for the 6900XT Nitro+


Alphacool Wasserkühler für Sapphire AMD Radeon RX 6800XT Nitro+ | AMD Fullsize | GPU Water Cooler | Shop | Alphacool - the cooling company


----------



## tolis626

lestatdk said:


> Time to save up for a watercooling setup
> 
> Just got my Alphacool system up and running.
> 
> 
> Here's a block for the 6900XT Nitro+
> 
> 
> Alphacool Wasserkühler für Sapphire AMD Radeon RX 6800XT Nitro+ | AMD Fullsize | GPU Water Cooler | Shop | Alphacool - the cooling company


Well, if all goes well, come November I'll be working in Germany. With a (medical) doctor's salary in Germany, I can and will do it. As long as I'm in Greece though, nah. 

Also, I probably need a block for the Toxic, not the Nitro+. The Nitro+ SE has the Toxic PCB, just not an XTXH die, sadly.


----------



## jonRock1992

cfranko said:


> Also, the stock backplate of my gpu has thermal pads on them. However in the bykski block manual it doesn’t show that I need to put thermal pads on the backplate, it only shows that I need thermal pads on the memory and other stuff in the PCB that I don’t know what they are. When installing the waterblock backplate do I put thermal pads on the backplate or not?


My Bykski waterblock didn't come with pads for the backplate, but neither did the stock backplate for my Red Devil Ultimate. I've always wondered if it would help with performance, but I have no idea what thickness of pads to get. So I'm not using any pads for the backplate right now.


----------



## cfranko

jonRock1992 said:


> My Bykski waterblock didn't come with pads for the backplate, but neither did the stock backplate for my Red Devil Ultimate. I've always wondered if it would help with performance, but I have no idea what thickness of pads to get. So I'm not using any pads for the backplate right now.


I guess I also won’t use thermal pads then. Both the GPU and Memory should be really cool anyway without the pads on the backplate due to the waterblock itself.


----------



## jonRock1992

Have you guys tried out Superposition? I think I saw @xR00Tx and @L!ME on the 1080p Extreme leaderboards. Performance is a little underwhelming compared to what the 3090's are doing with this bench, but I decided to do a run for the hell of it. I was at 2750/2850 and 2150 fast timings for this run.


----------



## majestynl

Yeah..the EK block for the Red Devil Ultimate is arriving tomorrow. Will report some tests...


----------



## lawson67

Had a run tonight with TS, but it's been a busy day, what with throwing water all over my RAM and motherboard. I should do better than this when I can get all the air out of my loop, but still a nice score. 
Graphics Score 24719 

I scored 21 600 in Time Spy


----------



## lestatdk

My highest combined and GPU score: 2650 MHz. But it's clearly diminishing returns; as the heat goes up, the gains aren't going up much.









Still, I'm number one in Denmark in Timespy with GPU score. And I have first place in Denmark for Firestrike combined and GPU score. 

Will try and push a bit more ( can't help it) , but not expecting much from here on out.

In daily gaming use I now have a junction temp in mid 60s compared to 110 or so. Glad I finally got this on water. Maybe next time I'll be just 20% as lucky as Lawson and get a more decent card 😅


----------



## jonRock1992

I've been benching off and on today and I was able to improve my graphics score a little in Time Spy. Was able to make it through some runs at 2695 MHz min / 2795 MHz max core clocks with a 430W power limit. This was the best GPU score that I could achieve:










AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## LtMatt

jonRock1992 said:


> I've been benching off and on today and I was able to improve my graphics score a little in Time Spy. Was able to make it through some runs at 2695 MHz min / 2795 MHz max core clocks with a 430W power limit. This was the best GPU score that I could achieve:
> 
> View attachment 2521292
> 
> 
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


25K so close you can almost touch it.



jonRock1992 said:


> Have you guys tried out Superposition? I think I saw @xR00Tx and @L!ME on the 1080p Extreme leaderboards. Performance is a little underwhelming compared to what the 3090's are doing with this bench, but I decided to do a run for the hell of it. I was at 2750/2850 and 2150 fast timings for this run.
> View attachment 2521261


Don't think these Unigine benches get any optimisation as no one really uses them in the tech press, not mainstream like the 3dMark suite.


----------



## jonRock1992

I just tried out Firestrike Ultra. This bench was weird, as it required really low min GPU clock values. Eventually I just left the min value at the default of 500 MHz for this one. I managed to get the number one spot for GPU score for 6900 XT + 5800X systems, though.










AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## LtMatt

jonRock1992 said:


> I just tried out Firestrike Ultra. This bench was weird as it required really low min GPU clock values. Eventually just left the min value at the default of 500 MHz for this one. I managed to get the number one spot for GPU score for 6900 XT + 5800X systems though.
> 
> View attachment 2521302
> 
> 
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


Firestrike Ultra is the next most demanding after Timespy for me.


----------



## ptt1982

Henry Owens said:


> Here she is 2675/ 1130mV
> 400/400 MPT
> Out of all my runs max gpu temp 47 max hotspot 65
> Update: 2675 wasn't stable but passed 20 runs timespy stress test at 2670


Nice temps! What was your rig like again?


----------



## ptt1982

lestatdk said:


> Not only does he win the GPU lottery every single time, but now when he tries to kill his system he's lucky not to kill it as well. Man, I wish I was half as lucky 😅


The man has got to buy some lottery tickets.


----------



## LtMatt

ptt1982 said:


> The man has got to buy some lottery tickets.


An XTX with 1.175v that is better than most XTXHs with 1.2v, damn right!


----------



## ptt1982

LtMatt said:


> An XTX with 1.175v that is better than most XTXHs with 1.2v, damn right!


More temp headroom, and comes with a wobbly mount!


----------



## Henry Owens

ptt1982 said:


> Nice temps! What was your rig like again?











It's a beast


----------



## L!ME

@jonRock1992 this was an old run with an older driver; I think I have to rebench it. The card was running 2950/3050 on water without mods, so now I think the score can be much higher.


----------



## ptt1982

Henry Owens said:


> View attachment 2521322
> 
> It's a beast
> View attachment 2521323


Those are some thick radiators!! I've got 1x360 30mm, and 1x 240 30mm basic corsair rads...


----------



## tootall123

My best run so far, but I accidentally disabled the Futuremark service. 👀


----------



## xR00Tx

cfranko said:


> Also, the stock backplate of my gpu has thermal pads on them. However in the bykski block manual it doesn’t show that I need to put thermal pads on the backplate, it only shows that I need thermal pads on the memory and other stuff in the PCB that I don’t know what they are. When installing the waterblock backplate do I put thermal pads on the backplate or not?


I have thermal pads on the backplate. However, I didn't do tests without to see if they really make a difference or not.


----------



## CantingSoup

MorePowerTool (MPT) and Red BIOS Editor (RBE) Beta Program - MPT 1.3.8 Beta 1 (Debug Overrides and Throttler Control) | Page 2 | igor'sLAB


New MPT


----------



## cfranko

xR00Tx said:


> I have thermal pads on the backplate. However, I didn't do tests without to see if they really make a difference or not.


Which thermal pads did you use? Does the thickness have to be different on the backplate, or does the same thickness as the pads on the actual PCB work?


----------



## tootall123

From experience thermal pads on the rear tend to be around 2-3mm thick.


----------



## lawson67

I am getting there slowly, just trying to work out what voltages etc. my card likes. It does not like any RAM overclocking over 2120, or it loses lots of points in benchmarks. I'll play again later when I have more time.

Graphics Score 24899


----------



## jonRock1992

lawson67 said:


> I am getting there slowly, just trying to work out what voltages etc. my card likes. It does not like any RAM overclocking over 2120, or it loses lots of points in benchmarks. I'll play again later when I have more time.
> 
> Graphics Score 24899
> 
> View attachment 2521338


Try increasing min SOC voltage to 950mV. It may help your score scale with higher ram speeds. Before I did this I could only get to 2130MHz before I started to lose performance.


----------



## lawson67

jonRock1992 said:


> Try increasing min SOC voltage to 950mV. It may help your score scale with higher ram speeds. Before I did this I could only get to 2130MHz before I started to lose performance.


Cheers Jon, I'll try that later


----------



## lawson67

Well, I have broken the 25k barrier, and I am sure it has a lot left in the tank. I've only just started messing with its voltages and frequencies, trying to find out what it likes.

I scored 21 758 in Time Spy


----------



## jonRock1992

Congrats! I'm unable to break 25k with my GPU lol. I've tried everything.


----------



## lawson67

jonRock1992 said:


> Congrats! I'm unable to break 25k with my GPU lol. I've tried everything.


Thanks Jon, I just added more clock speed; that run was min 2760 / max 2860. Got to play later though, as now I'm off down the gym


----------



## jonRock1992

Turns out with the newer drivers I can lower my voltage more and still be stable! I got my best run at 2695 MHz min / 2795 MHz max @ 1165mV and 2150 Mhz fast timings with 2150 MHz GPU FCLK and 430W PL.










AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## lawson67

jonRock1992 said:


> Turns out with the newer drivers I can lower my voltage more and still be stable! I got my best run at 2695 MHz min / 2795 MHz max @ 1165mV and 2150 Mhz fast timings with 2150 MHz GPU FCLK and 430W PL.
> 
> View attachment 2521341
> 
> 
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


I've not tried the new drivers yet; I did my 25k run on the last ones


----------



## xR00Tx

Henry Owens said:


> It's a beast


Your top fans are upside down. You have 9 fans blowing air into the case and only 1 as an exhaust!


----------



## xR00Tx

tootall123 said:


> From experience thermal pads on the rear tend to be around 2-3mm thick.


I'm using 3mm thick thermal pads.


----------



## Henry Owens

xR00Tx said:


> Your top fans are upside down. You have 9 fans blowing air into the case and only 1 as an exhaust!


I have done many tests and found this to give optimal water temps.


----------



## xR00Tx

L!ME said:


> @jonRock1992 this was an old run with an older driver; I think I have to rebench it. The card was running 2950/3050 on water without mods, so now I think the score can be much higher.


Hey @L!ME, which 6900xt do you own? I've just seen you run its memory @ 2410mhz (time spy).


----------



## tolis626

Hey guys, quick question. Have you had any luck/good results undervolting the SoC? I tried doing so and I seem to be getting better hotspot temps at 1.1V, but nothing major. What was major were the crashes I got as long as I had the SoC at 1267MHz and Fclk at 2150MHz. Maybe I went overboard, but I have no idea what to try for these, or if it's even worth the hassle; performance improvements were within margin of error, at least in Superposition. At stock SoC/Fclk, 1.1V works just fine for the SoC. Gonna try lower and see how it goes. Also, if you know of any other MPT tweaks I should try and am missing, please post about them. Thanks!

PS : I thought I'd risk a bit more voltage on the SoC to see what happens. Sure enough, at 1.175V tuning gets messed up and I can only go up to like 650MHz or so on the core. Y u do dis AMD?

PS 2 : I am curious if anyone has managed to circumvent these restrictions AMD has put in place. Or if anyone has tried flashing an XTXH bios on an XTX card.


----------



## L!ME

@xR00Tx it is a Liquid Devil Ultimate with the XTXH-LC BIOS, running fast timings 2. The memory loves being kept under 40 degrees. My card has 150 extra caps to stabilize the voltages. Now I will add active backplate cooling with a Bitspower backside block from a 3090, and change the thermal pads from 3 W/mK to real 14 W/mK ones.
With extra volts I can get 2300+ running at fast timings 3, but I am cooling limited, and I hope I can improve this after the active backplate mod.

@tolis626
You can't flash an XTX to XTXH; you can only flash XTXH to XTXH or XTX to XTX.


----------



## J7SC

...per some earlier posts in this thread, MPT does allow you to boost up the memory beyond 2150 on a XTX - unfortunately, so far at least, your GPU clock will go into safe mode the moment you exceed 2150, so not really anything useful. Still, it bugs me that on my XTX, the VRAM can perform fine at 2360 MHz, but I can't get to it for now...hopefully, a future version of MPT can get past that hurdle because I'm otherwise perfectly happy with my XTX re. clocks, 3x 8 pin PCB etc.


----------



## tolis626

L!ME said:


> @xrootx_ it is a liquid devil ultimate @xtxh-lc BIOS running @ fast Timings 2. Memory loves Heat under 40 degree. My Card has 150 extra Caps For stabilize the voltages. Now i will make an extra active Backplate cooling with an bitspower Backside Block from a 3090. And Change the cooling Pads vom 3 w/MK to real 14 w/MK.
> with extra volts i can get 2300+ running at fast Timings 3 but I am cooling Limited ,and i hope after the active Backplate Mod i can improve this.
> 
> @tolis626
> you cant flash a xtx to xtxh you only can flash xtxh to Xtxh oder xtx to xtx _


Ah, I thought so. Thank you!

You probably already know, but you can use Fujipoly pads. I don't remember how many W/mK they are anymore, but they are the best. I used those pads on my 390X in the past and VRM temps were about 15C lower.

(Sorry everyone else, the original reply was in German, a) out of courtesy and b) because I need the practice.)



J7SC said:


> ...per some earlier posts in this thread, MPT does allow you to boost up the memory beyond 2150 on a XTX - unfortunately, so far at least, your GPU clock will go into safe mode the moment you exceed 2150, so not really anything useful. Still, it bugs me that on my XTX, the VRAM can perform fine at 2360 MHz, but I can't get to it for now...hopefully, a future version of MPT can get past that hurdle because I'm otherwise perfectly happy with my XTX re. clocks, 3x 8 pin PCB etc.


Yeah, that I had read. I just didn't know if the same would apply to the SoC voltage. Seems that the moment you go above any of the preset maximum values for clocks/voltages, core clock takes a dump. Let's hope someone figures out how to bypass that. I am kinda bummed by AMD's move. Artificially limiting the 6800XT is bad enough, but doing so in a 6900XT, their top of the line card, because there's some binned chips that can go higher? That's bs.


----------



## L!ME

I have ordered the Fujipoly, thanks for the recommendation. The fast memory clocks are really crazy; at some points in GT1 and GT2 I get an improvement of over 10 fps versus 2160 FT1 with the stock BIOS.


----------



## tootall123

L!ME said:


> I have ordered the Fujipoly, thanks for the recommendation. The fast memory clocks are really crazy; at some points in GT1 and GT2 I get an improvement of over 10 fps versus 2160 FT1 with the stock BIOS.


Got any pictures of your GPU? really interested to see your setup.


----------



## Henry Owens

ptt1982 said:


> Those are some thick radiators!! I've got 1x360 30mm, and 1x 240 30mm basic corsair rads...


Thanks. My side radiator is the Corsair xr7 55mm, bottom is 80mm and top is 45mm I think.


----------



## Henry Owens

jonRock1992 said:


> Turns out with the newer drivers I can lower my voltage more and still be stable! I got my best run at 2695 MHz min / 2795 MHz max @ 1165mV and 2150 Mhz fast timings with 2150 MHz GPU FCLK and 430W PL.
> 
> View attachment 2521341
> 
> 
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


Can I give my reference card a 430 W power limit? Also, what is your amps limit?


----------



## Henry Owens

My new rmx 1000 PSU smells burnt and gross when running 3dmark or gaming. Thinking of sending it back and getting hx 1200 or EVGA p+ 1300.


----------



## tolis626

Henry Owens said:


> Can I give my reference 430power limit? Also what is your amps limit?


You can set the limits wherever you want in software, at least to my knowledge. Thing is, I don't think it's worth it on an XTX. On an XTXH under water, sure, but hammering your power delivery like that for a minuscule performance gain wouldn't be worth it. If you do decide to go ahead, just use MPT.


Henry Owens said:


> My new rmx 1000 PSU smells burnt and gross when running 3dmark or gaming. Thinking of sending it back and getting hx 1200 or EVGA p+ 1300.


Well that's not good. I haven't owned a Corsair PSU, so I can't comment, but EVGA's PSUs are top notch. Mine is a 850W G3 and it's been rock solid no matter what. When I bought it I had a 3800x and a 5700XT, so it was overkill, but I never expected newer GPUs to consume that much power. Couple the 6900XT with a 5900X (PBO is on, limits are raised, curve optimizer is set, so under full load it consumes like 150-175W according to software readings) and it's getting some stress alright. Hooked up a wattmeter (or whatever it's called) the other day and peak power at the wall was like 700+W. PSU ain't breaking a sweat. If you decide to go platinum, I think 1300W is overkill even for these GPUs, but choice is yours.
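For anyone doing the same napkin math: a plug-in wall meter reads AC power, so the DC load the PSU actually delivers is the wall figure times efficiency. A tiny sketch of the numbers above; the 0.90 efficiency is an assumed ballpark for an 80+ Gold unit at this load, not a measured value:

```python
# Rough PSU headroom check: wall power is AC, so DC load = wall * efficiency.
# The 0.90 efficiency figure is an assumption, not a measurement.

def dc_load_watts(wall_watts: float, efficiency: float = 0.90) -> float:
    """Estimate the DC-side load from AC wall draw."""
    return wall_watts * efficiency

def headroom_pct(dc_load: float, psu_rating_watts: float) -> float:
    """Remaining capacity as a percentage of the PSU rating."""
    return 100.0 * (psu_rating_watts - dc_load) / psu_rating_watts

load = dc_load_watts(700)  # ~700 W at the wall, as measured above
print(f"DC load ~{load:.0f} W, headroom on an 850 W unit: "
      f"{headroom_pct(load, 850):.0f}%")
```

So even at 700 W at the wall, an 850 W unit still has roughly a quarter of its rating in reserve, which matches the "PSU ain't breaking a sweat" observation.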


----------



## jonRock1992

L!ME said:


> @xrootx_ It is a Liquid Devil Ultimate with the XTXH-LC BIOS running at Fast Timings 2. The memory likes some warmth as long as it stays under 40 degrees. My card has 150 extra caps to stabilize the voltages. Next I will add active backplate cooling with a Bitspower backside block from a 3090, and change the thermal pads from 3 W/mK to real 14 W/mK.
> With extra volts I can get 2300+ running at Fast Timings 3, but I am cooling limited, and I hope the active backplate mod will improve this.
> 
> @tolis626
> You can't flash an XTX to XTXH; you can only flash XTXH to XTXH or XTX to XTX.


Interesting. Is it straightforward to flash the LC BIOS to the GPU, or is there something specific you need to do to get it working correctly?

Also, this GPU looks noice.


----------



## L!ME

You only need an external programmer like a CH341A, then you can flash the XTXH-LC BIOS.
At the moment I know of 2 others who are using the LC BIOS without problems: one guy with a Liquid Devil Ultimate and the other with an ASRock OCF.
The card gains a lot from the memory clocks.


----------



## jonRock1992

L!ME said:


> You only need an external programmer like a CH341A, then you can flash the XTXH-LC BIOS.
> 
> At the moment I know of 2 others who are using the LC BIOS without problems: one guy with a Liquid Devil Ultimate and the other with an ASRock OCF.
> 
> The card gains a lot from the memory clocks.



Thanks for the info! Why can't you flash through RBE?


----------



## L!ME

Different device ID; you can't force the programming. Only a Linux version of amdvbflash could do it without an external programmer.
But I think that version is not public any more.
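For the curious, this device-ID check is why a plain flash gets refused: the ID is embedded in the VBIOS image itself, and the flasher compares it against the card's. Below is a sketch of reading it from a ROM dump using the standard PCI expansion-ROM layout (0x55AA signature, pointer to the PCIR block at offset 0x18). The 0x73AF (XTXH) and 0x73BF (XTX) IDs are the commonly reported Navi 21 values, not something verified against AMD documentation:

```python
import struct

# Parse the PCI device ID out of a VBIOS ROM image. Layout per the PCI
# expansion-ROM spec: 0x55AA signature at offset 0, a 16-bit little-endian
# pointer to the "PCIR" data block at offset 0x18, vendor/device IDs at
# PCIR+4 and PCIR+6. Device IDs 0x73BF (XTX) / 0x73AF (XTXH) are the
# commonly reported values, treated here as an assumption.

def rom_device_id(rom: bytes) -> int:
    assert rom[0:2] == b"\x55\xAA", "not a PCI expansion ROM"
    pcir_off = struct.unpack_from("<H", rom, 0x18)[0]
    assert rom[pcir_off:pcir_off + 4] == b"PCIR", "PCIR block missing"
    vendor, device = struct.unpack_from("<HH", rom, pcir_off + 4)
    assert vendor == 0x1002, "not an AMD/ATI ROM"
    return device

# Minimal fake ROM just to demonstrate (real VBIOS dumps are much larger):
fake = bytearray(64)
fake[0:2] = b"\x55\xAA"
struct.pack_into("<H", fake, 0x18, 0x20)             # PCIR block at 0x20
fake[0x20:0x24] = b"PCIR"
struct.pack_into("<HH", fake, 0x24, 0x1002, 0x73AF)  # vendor AMD, device XTXH
print(hex(rom_device_id(bytes(fake))))               # prints 0x73af
```

If the ID in the image doesn't match the card's own, amdvbflash bails out, which is exactly the behavior described above; the external programmer simply never performs that comparison.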


----------



## Henry Owens

tolis626 said:


> You can set the limits wherever you want in software, at least to my knowledge. Thing is, I don't think it's worth it on an XTX. An XTXH under water sure, yeah, but hammering your power delivery like that for a miniscule performance gain wouldn't be worth it. If you do decide to go ahead, just use MPT.
> 
> Well that's not good. I haven't owned a Corsair PSU, so I can't comment, but EVGA's PSUs are top notch. Mine is a 850W G3 and it's been rock solid no matter what. When I bought it I had a 3800x and a 5700XT, so it was overkill, but I never expected newer GPUs to consume that much power. Couple the 6900XT with a 5900X (PBO is on, limits are raised, curve optimizer is set, so under full load it consumes like 150-175W according to software readings) and it's getting some stress alright. Hooked up a wattmeter (or whatever it's called) the other day and peak power at the wall was like 700+W. PSU ain't breaking a sweat. If you decide to go platinum, I think 1300W is overkill even for these GPUs, but choice is yours.


Yeah, that's true. I was pretty sure the 2021 RMx 1000 would be awesome. I will test a little longer before sending it back. My computer uses a fair amount of extra power with two D5 pumps and 13 fans. How much power were you giving your 6900 XT when you tested the wattage?


----------



## xR00Tx

L!ME said:


> @xrootx_ It is a Liquid Devil Ultimate with the XTXH-LC BIOS running at Fast Timings 2. The memory likes some warmth as long as it stays under 40 degrees. My card has 150 extra caps to stabilize the voltages. Next I will add active backplate cooling with a Bitspower backside block from a 3090, and change the thermal pads from 3 W/mK to real 14 W/mK.
> With extra volts I can get 2300+ running at Fast Timings 3, but I am cooling limited, and I hope the active backplate mod will improve this.


Wow! The Liquid Devil Ultimate must be a spectacular video card!!! I'm crazy about one of those! 
Congratulations!!!

It is very unlikely that this gpu will be sold here in Brazil and, if it does, it will be very expensive...


----------



## L!ME

@xROOTx

Not all cards are good; this is a platinum sample like your XTX. With the mods I did it's a really good card, also for 24/7 gaming. And yeah, it was ****ing expensive, but so are the RTX 3080 and RTX 3090...
You can buy the normal Red Devil Ultimate and get a waterblock; that will be cheaper.


----------



## LtMatt

L!ME said:


> @xROOTx
> 
> Not all cards are good; this is a platinum sample like your XTX. With the mods I did it's a really good card, also for 24/7 gaming. And yeah, it was ****ing expensive, but so are the RTX 3080 and RTX 3090...
> You can buy the normal Red Devil Ultimate and get a waterblock; that will be cheaper.


Can you share more details and settings on how you managed to get your memory stable over 2150Mhz? I tried messing with FCLK and Memory voltage a little, but it seems using Fast Timings 2 (never mind 3) brings a green screen.


----------



## lestatdk

LtMatt said:


> Can you share more details and settings on how you managed to get your memory stable over 2150Mhz? I tried messing with FCLK and Memory voltage a little, but it seems using Fast Timings 2 (never mind 3) brings a green screen.


How do you even get memory above 2150 ? Is this something reserved for the XTXH cards only ?


----------



## L!ME

@LtMatt I flashed the XTXH-LC BIOS. With the original BIOS I can't do more than 2170 at Fast Timing 1.


----------



## xR00Tx

L!ME said:


> @LtMatt I flashed the XTXH-LC BIOS. With the original BIOS I can't do more than 2170 at Fast Timing 1.


Do you happen to know if I could flash the XTXH-LC BIOS on my regular XTX card? Would it work?

My sample has a dual BIOS switch in case something goes wrong.


----------



## weleh

Flashing an XTXH BIOS to an XTX will brick the card.
I already posted my testing here somewhere from when I flashed via Linux.

What I'm more interested in is whether adding input filtering on the memory is what gave L1ME the stability for higher timings and clock speeds, rather than the XTXH LC BIOS, because the PCB is exactly the same between the XTXH and XTX Red Devil in terms of memory, so a simple BIOS "flash" shouldn't allow 200 MHz more and even tighter timings.

From what I see, the XTXH LC BIOS only changes the memory controller voltage and memory voltage, and all of these changes can be made on XTX cards too with MPT or EVC. So you can try them yourself.

For me, it makes no difference at all.


----------



## LtMatt

weleh said:


> Flashing XTXH to XTX will brick the card.
> I already posted it here somewhere my testing when I flashed via Linux.
> 
> What I'm more interested is if adding input filtering on memory was what gave L1ME the stability to do higher Timings and Clock Speed rather than the XTXH LC bios because the PCB is exactly the same between XTXH and XTX Red Devil in terms of memory so a simple bios "flash" shouldn't allow you to do 200 Mhz more and even tighter timings.
> 
> From what I see from XTXH LC bios it only changes memory controller voltage and memory voltage, all of these changes can be done on XTX cards too with MPT or EVC. So you can try them yourself.
> 
> For me, it makes no difference at all.


Saw your new graphics score on Timespy, very nice indeed considering your average core clock at only 2730Mhz. 👏


----------



## weleh

I can actually run FT2 in 3DMark at 2000 MHz, but it will eventually crash, with small artifacts starting to show up.

This is at 1V memory controller (850mV stock) and 1.5V vram (1.356V stock).

So this pretty much confirms voltage doesn't do much if you don't have the hardware to back it up.

Pretty much every 6900XT has **** memory input filtering; that's why most XTXH cards at stock can't do any better on memory than an XTX... So a hard mod is the way to go if you want more memory performance. Now I wonder what was done to the reference LC card to hit 18 Gbps stock. Improved PCB? Binned chips? Who knows.

Do we have a bios of this card to test via MPT what's changed?
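Even without documentation of the LC BIOS, one way to answer "what changed?" is to export the soft-power-play table from each BIOS (MPT can save them) and diff the raw bytes, then match differing offsets against the fields MPT displays. A sketch; the file names in the comment are placeholders, not real exports:

```python
# Byte-level diff of two exported power-play tables. Offsets that differ can
# then be matched against the fields MPT shows (voltages, limits, etc.).

def diff_bytes(a: bytes, b: bytes) -> list[tuple[int, int, int]]:
    """Return (offset, byte_in_a, byte_in_b) for every differing byte."""
    if len(a) != len(b):
        print(f"note: sizes differ ({len(a)} vs {len(b)} bytes)")
    return [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]

def diff_files(path_a: str, path_b: str) -> list[tuple[int, int, int]]:
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        return diff_bytes(fa.read(), fb.read())

# e.g. diff_files("xtx_stock.mpt", "xtxh_lc.mpt")  (placeholder names)
# -> a handful of (offset, old, new) tuples to inspect.
```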


----------



## weleh

Also, TSE scales a lot with memory, since the 6900 XT is bandwidth-starved. So if you want to compete on that front, you pretty much need a card like L1ME's, or a reference LC card with an EVC (not sure if the reference LC is an XTXH card or not).


----------



## L!ME

Back cooler is ready to break some new records.


----------



## Bart

That's some proper insanity right there, LOL! I love it!


----------



## jonRock1992

weleh said:


> I can actually run FT2 on 3DMARK at 2000 Mhz but it will eventually crash with small artifacts starting to show up.
> 
> This is at 1V memory controller (850mV stock) and 1.5V vram (1.356V stock).
> 
> So this pretty much confirms voltage doesn't do much if you don't have the hardware to back it up.
> 
> Pretty much every 6900XT has **** memory input filtering that's why most XTXH stock can't do any better on memory than a XTX... So hard mod is the way to go if you want more memory performance. Now I wonder what was done to the Reference LC card at 18Gbps stock. Improved PCB? Binned chips? Who knows.
> 
> Do we have a bios of this card to test via MPT what's changed?


How does FT2 @2000 compare to FT1 at 2150MHz?


----------



## weleh

jonRock1992 said:


> How does FT2 @2000 compare to FT1 at 2150MHz?


No idea; I can't pass GT1, it fails midway. However, from pure observation of the FPS numbers, it seemed pretty much similar.


----------



## jonRock1992

weleh said:


> No idea, I can't pass GT1, it fails mid way however from purely observation of FPS numbers, it seemed to be pretty much similar.


So does increasing memory voltage in MPT actually work?


----------



## weleh

jonRock1992 said:


> So does increasing memory voltage in MPT actually work?


All voltage parameters on MPT work. 

Only Vcore can't be increased otherwise you brick the drivers.


----------



## weleh

Memory VDDCI = Memory Controller's voltage
Memory MVDD = Memory voltage
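To put those two rails in context, here's the mapping alongside the stock and raised values reported earlier in this thread; treat the numbers as thread-reported for one card, not official specs:

```python
# The two memory-related rails MPT exposes, with the stock values and the
# raised values tested earlier in the thread (850 mV -> 1 V on the memory
# controller, 1.356 V -> 1.5 V on the VRAM). Thread-reported, not official.

rails_mv = {
    # rail name:     (stock_mV, tested_mV, meaning)
    "Memory VDDCI": (850, 1000, "memory controller voltage"),
    "Memory MVDD":  (1356, 1500, "memory (VRAM) voltage"),
}

for name, (stock, tested, meaning) in rails_mv.items():
    bump = 100.0 * (tested - stock) / stock
    print(f"{name} ({meaning}): {stock} mV -> {tested} mV (+{bump:.1f}%)")
```

That works out to roughly +18% on VDDCI and +11% on MVDD, which is well out of spec and fits the conclusion above that voltage alone doesn't help without the hardware to back it up.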


----------



## weleh

For instance, I couldn't mine above 2130 MHz without getting a driver crash in Windows, but somehow increasing the min SOC voltage to 950 mV makes it 100% stable.

This doesn't seem to have a positive impact on benchmarks though...


----------



## jonRock1992

weleh said:


> For instance, I couldn't mine above 2130 Mhz otherwise I would get driver crash on Windows but somehow, increasing min SOC voltage to 950mV makes it 100% stable.
> 
> This doesn't seem to have a positive impact on benchmarks though...


Increasing min SOC to 950mV makes mine stable at 2150 MHz fast timings. I confirmed that in the past. I just wasn't sure about the memory voltages. Thanks for the info. I'm going to mess with the memory voltages to see if I can squeeze out some more performance and finally reach that 25k Timespy lol.


----------



## LtMatt

jonRock1992 said:


> Increasing min SOC to 950mV makes mine stable at 2150 MHz fast timings. I confirmed that in the past. I just wasn't sure about the memory voltages. Thanks for the info. I'm going to mess with the memory voltages to see if I can squeeze out some more performance and finally reach that 25k Timespy lol.


We’ll all be relieved when this feat happens!


----------



## tolis626

Talking about memory, have you guys messed with the memory timing option in MPT? I guess that's what FT2 means, but I'm not sure. I remember having this option with my 5700XT but if it did anything, it was negligible.


----------



## jonRock1992

tolis626 said:


> Talking about memory, have you guys messed with the memory timing option in MPT? I guess that's what FT2 means, but I'm not sure. I remember having this option with my 5700XT but if it did anything, it was negligible.


I messed around with it last night. It negatively impacted performance. Even after unlocking the Fast Timing Level2 option and setting it to Level 1 in the driver, it still negatively impacted performance for me. I tried using the Level 2 option, but it resulted in green square artifacts at above 2025 MHz. So I just gave up lol. I'm going to mess around with the memory voltages tomorrow night.


----------



## tolis626

jonRock1992 said:


> I messed around with it last night. It negatively impacted performance. Even after unlocking the Fast Timing Level2 option and setting it to Level 1 in the driver, it still negatively impacted performance for me. I tried using the Level 2 option, but it resulted in green square artifacts at above 2025 MHz. So I just gave up lol. I'm going to mess around with the memory voltages tomorrow night.


Well that was quick. 

I see. I'll give it a go out of curiosity but I'll probably stick to the normal setting. I was mostly searching for a way to set it to fast timings without using the Radeon Settings for overclocking as I still prefer Afterburner. Oh well, I'll see.


----------



## xR00Tx

weleh said:


> No idea, I can't pass GT1, it fails mid way however from purely observation of FPS numbers, it seemed to be pretty much similar.


Yeah... I tried to pass FT2 but I couldn't do it either. I raised the voltages and at least managed to start the benchmarks (TS and Superposition), but a short time later everything crashed. lol


----------



## LtMatt

xR00Tx said:


> Yeah... I tried to pass FT2 but I couldn't do it either. I raised the voltages and at least managed to start the benchmarks (TS and Superposition), but a short time later everything crashed. lol


Yes, no luck here with FT2. I didn't notice any gain from increasing memory voltage either, which I had hoped would allow higher memory clocks.


----------



## tootall123

How do you enable FT2?


----------



## LtMatt

tootall123 said:


> How do you enable FT2?


Change Memory Timing Control from 1 to 2.


----------



## LtMatt

Does anyone know what Power Mode Control does?


----------



## weleh

No idea, but the XTXH STRIX LC has that option at 0.


----------



## weleh

Yesterday and today the weather has been a bit colder so I decided to try and bench again...

I cannot even pass 25000 let alone beat my previous best score. I don't know what's happening but when my driver crashes the clock behaviour on the card changes... Performance is miles below on GT1 at least.

These cards are so strange...It's like they have a will of their own...

I'm selling my whole rig actually because I'm moving countries in September and I want to buy a laptop until I'm settled. Don't think I'll be benching anymore for a while.


----------



## jonRock1992

weleh said:


> Yesterday and today the weather has been a bit colder so I decided to try and bench again...
> 
> I cannot even pass 25000 let alone beat my previous best score. I don't know what's happening but when my driver crashes the clock behaviour on the card changes... Performance is miles below on GT1 at least.
> 
> These cards are so strange...It's like they have a will of their own...
> 
> I'm selling my whole rig actually because I'm moving countries in September and I want to buy a laptop until I'm settled. Don't think I'll be benching anymore for a while.


This happened to me as well after enabling the fast timing level 2 option in mpt. I had to do a clean driver install with DDU to get performance back.


----------



## CHKY-BSTRD

At the moment this is my highest score on air. Still not done tweaking, so I should get more out of it.


----------



## weleh

jonRock1992 said:


> This happened to me as well after enabling the fast timing level 2 option in mpt. I had to do a clean driver install with DDU to get performance back.


I've done that.

No idea what's up. I'm doing 2830 Mhz during GT1 and FPS is lower than usual.


----------



## majestynl

weleh said:


> Yesterday and today the weather has been a bit colder so I decided to try and bench again...
> 
> I cannot even pass 25000 let alone beat my previous best score. I don't know what's happening but when my driver crashes the clock behaviour on the card changes... Performance is miles below on GT1 at least.
> 
> These cards are so strange...It's like they have a will of their own...





jonRock1992 said:


> This happened to me as well after enabling the fast timing level 2 option in mpt. I had to do a clean driver install with DDU to get performance back.


I've known this behavior for a long time on AMD GPUs. What helps for me is:

- Reset Wattman to defaults, then restart the PC / reload the drivers!
Or, sometimes if I'm still not happy: reset Wattman, delete the SPPT, and re-apply everything!


----------



## lawson67

weleh said:


> Yesterday and today the weather has been a bit colder so I decided to try and bench again...
> 
> I cannot even pass 25000 let alone beat my previous best score. I don't know what's happening but when my driver crashes the clock behaviour on the card changes... Performance is miles below on GT1 at least.
> 
> These cards are so strange...It's like they have a will of their own...
> 
> I'm selling my whole rig actually because I'm moving countries in September and I want to buy a laptop until I'm settled. Don't think I'll be benching anymore for a while.


I believe you can degrade your VRAM permanently if you leave it overclocked for even just a week. When I first got my PowerColor Ultimate, about 2 weeks ago, I left my RAM overclocked at 2150 MHz as it gave a boost in benchmarks and games. Now at 2120 MHz and above I lose around 10 fps in game benchmarks like Shadow of the Tomb Raider and Red Dead 2. I leave it at 2100 MHz now, as I believe it may be permanently degraded; at 2100 MHz I have my performance back. I tried DDU and reloading drivers over and over, but the degradation seems permanent. I have just concentrated on overclocking it as stably as I can for gaming, and I can do 40 loops or more of GT2 at 2850 MHz, so I am leaving it at that. I'm not that bothered about hitting an unstable high benchmark score just for one run so I can post it, tbh.


----------



## jonRock1992

lawson67 said:


> I believe you can degrade your RAM permanently if you leave it overclocked for even just a week, when i first got my Powercolor Ultimate which was just about 2 weeks ago i left my Ram over clocked at 2150mhz as it gave a boost in benchmarks and games, now 2120mhz and above i will lose around 10 fps in game benchmarks like Shadow of tomb Raider and Red Dead 2, i leave it at 2100mhz now as i belive it maybe permanently degraded, at 2100mhz i have my performance back, i tried DDU and reload drivers over and over again however the degrade is permanent, i have just concentrated on overclocking it as stably as i can for gaming and i can do 40 loops or more of GT2 at 2850mhz so i am leaving it at that, not that bothered about hitting an unstable high benchmark score just for one run so i can post it tbh.


Damn. That's unfortunate


----------



## J7SC

FYI, MPT beta 8 is out in case that wasn't mentioned here yet


----------



## LtMatt

lawson67 said:


> I believe you can degrade your RAM permanently if you leave it overclocked for even just a week, when i first got my Powercolor Ultimate which was just about 2 weeks ago i left my Ram over clocked at 2150mhz as it gave a boost in benchmarks and games, now 2120mhz and above i will lose around 10 fps in game benchmarks like Shadow of tomb Raider and Red Dead 2, i leave it at 2100mhz now as i belive it maybe permanently degraded, at 2100mhz i have my performance back, i tried DDU and reload drivers over and over again however the degrade is permanent, i have just concentrated on overclocking it as stably as i can for gaming and i can do 40 loops or more of GT2 at 2850mhz so i am leaving it at that, not that bothered about hitting an unstable high benchmark score just for one run so i can post it tbh.


Is that your XTX or new XTXH?

That is indeed unfortunate and luckily for me, not something I have experienced. 

As voltage does not change regardless of clock speed, it seems odd that it would degrade so quickly into its lifespan. 



J7SC said:


> FYI, MPT beta 8 is out in case that wasn't mentioned here yet


Nice, anything to get excited about?


----------



## marcoschaap

J7SC said:


> FYI, MPT beta 8 is out in case that wasn't mentioned here yet


I'm probably an idiot for asking, but is there some kind of thread here or over at Igor's Lab where all the (new) functionality in MPT is explained? I always see the changes in the changelog, but never an explanation of what they do. For instance, wth is DcBtc voltage? And how can I edit the Linear Droop to benefit clock stability? I googled a lot but have yet to find a topic that explains it.


----------



## ZealotKi11er

lawson67 said:


> I believe you can degrade your RAM permanently if you leave it overclocked for even just a week, when i first got my Powercolor Ultimate which was just about 2 weeks ago i left my Ram over clocked at 2150mhz as it gave a boost in benchmarks and games, now 2120mhz and above i will lose around 10 fps in game benchmarks like Shadow of tomb Raider and Red Dead 2, i leave it at 2100mhz now as i belive it maybe permanently degraded, at 2100mhz i have my performance back, i tried DDU and reload drivers over and over again however the degrade is permanent, i have just concentrated on overclocking it as stably as i can for gaming and i can do 40 loops or more of GT2 at 2850mhz so i am leaving it at that, not that bothered about hitting an unstable high benchmark score just for one run so i can post it tbh.


You cannot degrade the VRAM like that. Between 2100 and 2150 there is only a small frequency difference. It also depends on temperatures. The memory controller in RDNA2 has EDC, which was probably hitting high enough values to cause the performance drop. Also, I have noticed that the best scores come right after a cold boot when the system has been off. Memory goes through training at boot, meaning different boot cycles can affect its performance, especially if you are at the limit of the OC.


----------



## lawson67

LtMatt said:


> Is that your XTX or new XTXH Power colour? That is indeed unfortunate and luckily for me, not something I have experienced. As voltage does not change regardless of clock speed, it seems odd that it would degrade.


Yep, it seems permanent. However, set back to 2100 or even 2110 it's fine again and performance is back. I can still score over 25k with the RAM at stock 2100 MHz, and OCing your RAM only really shows up as extra points in benchmarks; it's really not doing much in games at all. But I am not risking degrading it further, as it's really noticeable when it is degraded, and as I said, at 2120 MHz and over I am losing 10 FPS or more in real game benchmarks. I have read it can become permanently degraded, and at £1,600 I am not risking damaging it further for benchmark runs. I intend to keep the card, as the core on mine is seriously fast and stable at 2850 MHz, and I can get runs in at 2870 MHz that will pass TS. I have not bothered to see just how much further it will go after I noticed what happened to the RAM, and I definitely don't want to RMA it with such good silicon. So I am just keeping it as it is for now, doing what I wanted it to do: be fast in games. I don't believe it will degrade further if I leave the VRAM at stock from now on.


----------



## lawson67

ZealotKi11er said:


> You can not degrade the vRAM like that. It just 2100-2150 there is small frequency difference. It also depends in temperatures. The memory controller in rdna2 has EDC which probably was hitting high enough values causing the performance to drop. Also I have noticed that best scores are right after a cold system boot after it has been off. Memory goes though training at boot meaning different boot cycles can effect its performance especially if you are at the limit of the OC.


My VRAM was scaling all the way up to 2150 MHz two weeks ago. Now, whatever driver I use and no matter how many times I DDU and delete the SPPT, I cannot get it to scale over 2120 MHz; it loses points everywhere beyond that, which was not happening 2 weeks ago. That can only lead me to believe it's permanently damaged, or degraded to the point where it just can't scale over 2120 MHz like it could 2 weeks ago. That's the only logical conclusion I can draw from what I am seeing.


----------



## LtMatt

marcoschaap said:


> I'm probably an idiot for asking, but is there some kind of thread here or over at Igorslab where all the (new) functionality in MPT is explained? I always see the changes in the changelog, but never an explanation on what they do. For instance, wth is DcBtc voltage? And how can I edit the Linear Droop to be benificial for clock stability, I googled a lot but have yet to find a topic which explains it.


Fair questions, I have a feeling no one really knows. 

I think Jon will hit 25K graphics score before we understand what most of the options do in MPT.


----------



## majestynl

ZealotKi11er said:


> You can not degrade the vRAM like that. It just 2100-2150 there is small frequency difference. It also depends in temperatures. The memory controller in rdna2 has EDC which probably was hitting high enough values causing the performance to drop. Also I have noticed that best scores are right after a cold system boot after it has been off. Memory goes though training at boot meaning different boot cycles can effect its performance especially if you are at the limit of the OC.


++ Completely agree with ZealotKi11er


----------



## gtz

What's a quick way to check what uses the new core?

Found this one, but I think it still uses the old one.

XFX Radeon RX 6900 XT Speedster MERC 319 Black Gaming Graphics Card
Buy XFX Radeon RX 6900 XT Speedster MERC 319 Black Gaming Graphics Card featuring 1950 MHz Core - Boostable to 2365 MHz, 5120 Stream Processors, RDNA 2 Architecture, 16GB of GDDR6 VRAM, 16 Gb/s Memory Speed, 256-Bit Memory Interface, HDMI 2.1 | DisplayPort 1.4, PCIe 4.0 Interface, Triple Fan...
www.bhphotovideo.com
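As a side note on what the memory clocks in this thread are worth: on a 256-bit card like these, the clock shown in Wattman maps to bandwidth as sketched below. The ×8 clock-to-data-rate factor matches the 2000 MHz / 16 Gb/s pairing in that listing; this is napkin math, not a vendor formula:

```python
# Peak GDDR6 bandwidth from the memory clock shown in Wattman/MPT.
# Assumption: reported MHz * 8 = per-pin data rate in Mb/s
# (2000 MHz -> 16 Gb/s, as in the card listing above).

BUS_WIDTH_BITS = 256  # per the listing

def gddr6_bandwidth_gbs(mem_clock_mhz: float) -> float:
    """Peak bandwidth in GB/s for a given reported memory clock."""
    data_rate_gbps = mem_clock_mhz * 8 / 1000   # per-pin data rate, Gb/s
    return data_rate_gbps * BUS_WIDTH_BITS / 8  # whole bus, GB/s

for mhz in (2000, 2150, 2300):
    print(f"{mhz} MHz -> {gddr6_bandwidth_gbs(mhz):.1f} GB/s")
```

So the jump from stock 2000 MHz to the 2150 MHz most cards top out at is roughly 512 to 550 GB/s, which is why TSE (being bandwidth-starved) responds to memory clocks so strongly.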


----------



## tolis626

Just to add to the discussion about memory degrading, basically the only two ways to degrade memory fast are extremely high voltages and extremely high temperatures. Some ICs are more sensitive than others, but operating within spec, RAM should basically never degrade like that. I had some DDR3 DIMMs that I had pushed so far out of spec that I was sure they'd die, and not only are they still alive almost a decade later, but they are working overclocked in a system that I've sold to a friend. It's REALLY hard to damage RAM in that way, unless you blow up something else on the PCB.

What can degrade fast if run improperly is the memory controller on the GPU/CPU itself. But given how long these cards have been out, I'd be extremely skeptical of that too.


----------



## jonRock1992

Well, sadly after messing around with fast timing level two a couple days ago, I cannot get close to the 24957 graphics score that I got before. My ambient temp is 4 degrees Fahrenheit hotter, but I'm running basically the same settings. I can only get to around 24.7k now. Does ambient temp really make that much of a difference, or am I experiencing what @lawson67 is experiencing? Could this be a Red Devil Ultimate isolated issue?

@L!ME can you link the XTXH LC bios that you flashed? I can't seem to find the 18 Gbps version of the bios through a Google search.

Update: Got it back up over 24850. Turned out I had radeon image sharpening enabled along with freesync 😂🤦. I started this bench session with a few drinks lol. Lesson learned.


----------



## LtMatt

jonRock1992 said:


> Well, sadly after messing around with fast timing level two a couple days ago, I cannot get close to the 24957 graphics score that I got before. My ambient temp is 4 degrees Fahrenheit hotter, but I'm running basically the same settings. I can only get to around 24.7k now. Does ambient temp really make that much of a difference, or am I experiencing what @lawson67 is experiencing? Could this be a Red Devil Ultimate isolated issue?
> 
> @L!ME can you link the XTXH LC bios that you flashed? I can't seem to find the 18 Gbps version of the bios through a Google search.
> 
> Update: Got it back up over 24850. Turned out I had radeon image sharpening enabled along with freesync 😂🤦. I started this bench session with a few drinks lol. Lesson learned.


Lol, a few drinks always livens things up a bit.


----------



## L!ME

@jonRock1992

Here it is:

[Official] AMD Radeon RX 6900 XT Owner's Club
Damn, saw your previous post and said to self, I have that beat by a little bit, I scored 23 137 in Time Spy. Then I saw this post, great score!!! It will be interesting to see if there are any more gains to be had from future driver updates. Yes, I thought I could get a bit more but not that...
www.overclock.net

----------



## jonRock1992

L!ME said:


> @johnRock1992
> 
> Here it is
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] AMD Radeon RX 6900 XT Owner's Club
> 
> 
> Damn, saw your previous post and said to self, I have that beat by a little bit, I scored 23 137 in Time Spy. Then I saw this post, great score!!! It will be interesting to see if there are any more gains to be had from future driver updates. Yes, I thought I could get a bit more but not that...
> 
> 
> 
> 
> www.overclock.net


Thank you! I'm pretty sure mpt is a dead end for me. I'm probably going to have to flash this bios to get over 25k in Timespy. Wish that Linux version of that AMD flashing tool was still online so I could flash this.


----------



## L!ME

I think this is the right version:

File-Upload.net - amdvbflash (www.file-upload.net)


----------



## jonRock1992

L!ME said:


> I think this is the right version:
> 
> File-Upload.net - amdvbflash (www.file-upload.net)


Thank you! I'll give this a try this weekend.


----------



## LtMatt

jonRock1992 said:


> Thank you! I'll give this a try this weekend.


Is it possible to do it in Windows or do you need Linux?


----------



## L!ME

Only Linux, but I don't know if this version will work with the XTXH LC BIOS. I know it works with the XTXH.


----------



## LtMatt

L!ME said:


> Only Linux, but I don't know if this version will work with the XTXH LC BIOS. I know it works with the XTXH.


I can see the MVDD is at 1.4v looking at the BIOS. I'm sure I tried setting it to 1.4v for the Toxic XTXH but it made no difference to memory overclocking. I might try again...


----------



## L!ME

The BIOS uses other memory straps; that's why the clocks are higher.


----------



## lawson67

jonRock1992 said:


> Well, sadly after messing around with fast timing level two a couple days ago, I cannot get close to the 24957 graphics score that I got before. My ambient temp is 4 degrees Fahrenheit hotter, but I'm running basically the same settings. I can only get to around 24.7k now. Does ambient temp really make that much of a difference, or am I experiencing what @lawson67 is experiencing? Could this be a Red Devil Ultimate isolated issue?
> 
> @L!ME can you link the XTXH LC bios that you flashed? I can't seem to find the 18 Gbps version of the bios through a Google search.
> 
> Update: Got it back up over 24850. Turned out I had radeon image sharpening enabled along with freesync 😂🤦. I started this bench session with a few drinks lol. Lesson learned.


I doubt you're suffering the same issue as me. I only lose performance if my VRAM is pushed over 2120MHz; I can still hit over 25k with my VRAM at stock settings or up to 2020MHz. Weleh says he can't even hit 25k now with his card and is losing FPS, and that's a Toxic, so I don't believe it's a problem with the PowerColor. All I know is that with my card, if I push the VRAM over 2120MHz I lose like 10fps in game benchmarks; under 2120MHz it's fine, so I am still very happy with the card. It is slightly confusing, though, that my VRAM could scale in 3DMark up to 2150MHz two weeks ago and now it won't. Other than that it's a beast of a card and I love it.

What weleh's problem is I have no idea, but when he said he was losing FPS and can't even hit 25k anymore, I thought he might have a VRAM problem similar to mine if he pushed his VRAM too hard. Also, I found you don't see the loss so much in 3DMark if your VRAM is pushed too hard / unstable; in game benchmarks, however, it's a massive hit in terms of FPS. I see 10fps or more lost if my VRAM is pushed over 2120MHz, and I bought my card for fast 4K OLED TV gaming.

I'm not so bothered about sitting for hours trying to eke out an unstable clock frequency to pass one run in 3DMark. If I can hit 25k in 10 minutes of messing with clocks, like I did the other day, that's kind of fun; more than 10 minutes gets boring for me, and unstable frequencies do nothing in games other than crash, so that's no fun at all as far as I'm concerned. My card is fully stable at 2850MHz in TS GT2 for 40 runs or more (I stopped it at 40 passes), and I can play hours of Metro Exodus Enhanced at 2850MHz without any driver or game crashes; that game for me is the best test of stability. So all in all, I am very happy.


----------



## weleh

You don't need BIOSes to change SOC, memory controller and memory voltages. You can do that in MPT.

The BIOS might be useful if it uses different, more relaxed timings which allow for faster clocks without performance degradation.

There's no reason for a 16 Gbps card to be stable at 18 Gbps other than this or hardware modifications. The XTXH-LC card uses 18 Gbps memory chips on the same PCB, hence why it's at 1.4V and not 1.356V, and hence why it just works without hassle.


----------



## jonRock1992

LtMatt said:


> Is it possible to do it in Windows or do you need Linux?


You can try using the Windows Subsystem for Linux.


----------



## L!ME

Use a live Linux USB stick; it's the easy way.


----------



## jonRock1992

L!ME said:


> Use a live Linux USB stick; it's the easy way.


Yeah that's how I'm going to try it. Fingers crossed it allows device ID mismatch.
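For reference, the live-USB plan discussed above boils down to three steps: list adapters, back up the current ROM, then force-flash the new one. Here's a tiny shell helper that only *prints* the planned amdvbflash commands so they can be sanity-checked before running anything for real; the adapter index and file names are placeholders, and while -i / -s / -f -p match the flag usage mentioned in this thread, double-check them against your amdvbflash version:

```shell
#!/bin/sh
# Dry-run helper: prints the amdvbflash steps for a given adapter index
# and ROM image. Nothing here touches the card.
flash_plan() {
    adapter="$1"
    image="$2"
    echo "./amdvbflash -i"                      # 1. list adapters, confirm the index
    echo "./amdvbflash -s $adapter backup.rom"  # 2. back up the current vBIOS first
    echo "./amdvbflash -f -p $adapter $image"   # 3. force-program (-f skips ID checks)
}

# Print the plan for adapter 0 and a hypothetical LC ROM file
flash_plan 0 xtxh_lc.rom
```

Always take the backup step seriously; on a single-BIOS card it's the only way back without a hardware programmer.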


----------



## tolis626

Well, either my card is a complete dud or I'm doing something incredibly wrong. Either way, I was trying Time Spy (which I had avoided because I can't skip the demo and I refuse to give these pricks more money just so I can fully use the software I've already paid for) and I couldn't get it to complete the test at 2600MHz with any undervolt. It would either crash during the demo or, the rare times it made it through the demo, it would immediately crash in GT2. It only managed to go through the whole thing at 1.175V; even 1.15V fails (hard UV in MPT). Even so, with a power limit of 330W, it hits a wall and only runs at about 2450MHz actual clocks. And even at that PL, hotspot temps creep up to 102C. At first I thought it was crashing because of RAM being unstable, but no, the test it managed to complete was with 2150MHz and fast timings. This card definitely needs repasting, but I doubt its overclocking is salvageable. I am really bummed.

EDIT : Forgot to mention the score I got. 21781 graphics score. That is actually lower than many 6800XTs. The overall score is kinda saved by my 5900X.








I scored 19 436 in Time Spy: AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT, 16384 MB, 64-bit Windows 10 (www.3dmark.com)





EDIT 2 : Scratch that, I just realized that even my CPU score is low. 12k for a 5900x whereas someone with a 5800X here is getting a tad higher and another with a 10850k is getting over 13k. What is happening?


----------



## LtMatt

tolis626 said:


> Well, either my card is a complete dud or I'm doing something incredibly wrong. Either way, I was trying TimeSpy (I have avoided because I can't skip the demo and I refuse to give these pricks more money just so I can fully use the software I've already paid for) and I couldn't get it to complete the test at 2600MHz with any undervolt. It would either crash during the demo or, the rare times it made it through the demo, it would immediately crash in GT2. It only managed to go through the whole thing at 1.175V, even 1.15V fails (hard UV in MPT). Even so, with a power limit of 330W, it hits a wall and only runs at about 2450MHz actual clocks. And even at that PL, hotspot temps creep up to 102C. At first I thought it was crashing because of RAM being unstable, but no, the test it managed to complete was with 2150MHz and fast timings. This card definitely needs repasting, but I doubt its overclocking is salvageable. I am really bummed.


3DMark is really cheap to buy. If you like benching, it's worth the few quid it costs IMO to get Firestrike/Timespy without having to sit through the demo crap.


----------



## tolis626

LtMatt said:


> 3DMark is really cheap to buy. If you like benching, it's worth the few quid it costs IMO to get Firestrike/Timespy without having to sit through the demo crap.


But I've already bought it, that's the thing. It just wants 10€ more to fully unlock Timespy for me. It's not the money, it's the attitude. I've already paid for it, yet they want more. It kinda puts me off.


----------



## LtMatt

tolis626 said:


> But I've already bought it, that's the thing. It just wants 10€ more to fully unlock Timespy for me. It's not the money, it's the attitude. I've already paid for it, yet they want more. It kinda puts me off.


Yes Timespy was an additional DLC I believe, though from memory it cost no more than a few dollars when I got it. I guess prices increased.


----------



## lawson67

tolis626 said:


> Well, either my card is a complete dud or I'm doing something incredibly wrong. Either way, I was trying TimeSpy (I have avoided because I can't skip the demo and I refuse to give these pricks more money just so I can fully use the software I've already paid for) and I couldn't get it to complete the test at 2600MHz with any undervolt. It would either crash during the demo or, the rare times it made it through the demo, it would immediately crash in GT2. It only managed to go through the whole thing at 1.175V, even 1.15V fails (hard UV in MPT). Even so, with a power limit of 330W, it hits a wall and only runs at about 2450MHz actual clocks. And even at that PL, hotspot temps creep up to 102C. At first I thought it was crashing because of RAM being unstable, but no, the test it managed to complete was with 2150MHz and fast timings. This card definitely needs repasting, but I doubt its overclocking is salvageable. I am really bummed.
> 
> EDIT : Forgot to mention the score I got. 21781 graphics score. That is actually lower than many 6800XTs. The overall score is kinda saved by my 5900X.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 19 436 in Time Spy: AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT, 16384 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> 
> 
> 
> 
> EDIT 2 : Scratch that, I just realized that even my CPU score is low. 12k for a 5900x whereas someone with a 5800X here is getting a tad higher and another with a 10850k is getting over 13k. What is happening?


RAM doesn't necessarily need to crash during a benchmark to be unstable. Mine will pass TS GT2 at 2150MHz every time without a crash, but it's unstable at 2150MHz because performance at that speed degrades the overall performance of the card, i.e. losing lots of FPS. As for 3DMark: buy the Advanced version through Steam, copy the three key numbers, refund it, then download the standalone. There you go, you have the Advanced version for nothing!


----------



## tolis626

lawson67 said:


> RAM doesn't necessarily need to crash during a benchmark to be unstable. Mine will pass TS GT2 at 2150MHz every time without a crash, but it's unstable at 2150MHz because performance at that speed degrades the overall performance of the card, i.e. losing lots of FPS. As for 3DMark: buy the Advanced version through Steam, copy the three key numbers, refund it, then download the standalone. There you go, you have the Advanced version for nothing!


You misunderstood me. I was talking about hardware level stuff. It's actually pretty unlikely that there's anything you CAN do with these cards to physically damage your RAM. It's probably something else that's causing your issues. Maybe it's drivers, maybe something else, I really don't know. But I find it improbable that you have actually physically degraded your RAM modules. I'm not disputing your findings, I just think the root cause is something else other than degradation.

As for TimeSpy, I already have the advanced version from Steam. That's why I'm angry at them.

PS: I just did a few runs of Superposition at 2700MHz, capped at 1.15V in MPT (RAM at 2150MHz FT). It runs them with no problems. Power limit raised to 350W; again, no problems. I even got my highest score yet, 12520, in 1080p Extreme. I also did a couple of runs each of the 4K and 8K tests, fine again. Readings in HWiNFO64 showed clocks of about 2650MHz (bouncing between 2630 and 2660), core voltage around 1.09V, power pegged at 350W for all tests, hotspot at about 100C and SoC voltage again at about 1.09V. I mean, OK, Time Spy is harder to pass, fair enough, but this seems very strange to me. On my 5700XT, a pass in Superposition meant a pass in Time Spy most of the time. Having such a discrepancy in stability between the two tests is new to me.


----------



## J7SC

...per earlier discussions, you can increase VRAM speed via MPT on an XTXH card, but at the expense of the GPU dropping to safe-mode speed, so it's generally useless. That said, the AMD Radeon software 'overclock' check option does run in that mode, and it keeps telling me that my ideal (unfettered) VRAM speed is 2260...


----------



## weleh

Use a Linux liveboot from a usb pen. 

Works fine.


----------



## jonRock1992

@L!ME
Unfortunately that bios that you linked didn't work for me. My system doesn't POST with it. Luckily I have that dual BIOS switch. I flashed it over the performance BIOS. Did I do something wrong? I just forced the flash with -f -p.

When trying to boot with the LC BIOS I get error code 55 on my Dark Hero motherboard. I don't get this error code with the Red Devil Ultimate BIOS. If I flip the BIOS switch while I'm booted into the OS on the silent BIOS, will it write over the silent BIOS or the performance BIOS? Is there a way to restore my Red Devil Ultimate BIOS to that performance slot without an external programming tool?

On a more positive note, that amdvbflash tool worked like a charm for cross-flashing lol. Just seems like that particular BIOS isn't going to work with my GPU.


----------



## The EX1

Has anyone gotten ahold of the OEM only XTXH LC cards to see if the memory modules are actually different than what already comes on XTXH AIB cards? I'm curious to know if the memory difference is a physical or software limit.


----------



## L!ME

jonRock1992 said:


> @L!ME
> Unfortunately that bios that you linked didn't work for me. My system doesn't post with it. Luckily I have that dual bios switch. I flashed it over the performance bios. Did I do something wrong? I just forced the flash with -f -p.
> 
> When trying to boot with the LC bios I get error code 55 on my Dark Hero motherboard. I don't get this error code with the red devil ultimate bios. If I flip the bios switch while I'm booted into the OS on the Silent bios, will it write over the silent bios or the performance bios? Is there a way to restore my red devil ultimate bios to that performance slot without an external programming tool?
> 
> On a more positive note, that amdvbflash tool worked like a charm for cross-flashing lol. Just seems like that particular BIOS isn't going to work with my GPU.


Hmm, I don't know if this version will work with the LC BIOS; I haven't tested it.
You're the first one to test it, and it didn't work. You could try putting another card in and checking whether it boots with both.
Otherwise, a CH341A programmer is a good investment.
@The EX1 the LC card has 16 Gbps chips and the normal ones 14 Gbps Samsung chips


----------



## J7SC

jonRock1992 said:


> @L!ME
> Unfortunately that bios that you linked didn't work for me. My system doesn't post with it. Luckily I have that dual bios switch. I flashed it over the performance bios. Did I do something wrong? I just forced the flash with -f -p.
> 
> When trying to boot with the LC bios I get error code 55 on my Dark Hero motherboard. I don't get this error code with the red devil ultimate bios. If I flip the bios switch while I'm booted into the OS on the Silent bios, will it write over the silent bios or the performance bios? Is there a way to restore my red devil ultimate bios to that performance slot without an external programming tool?
> 
> On a more positive note, that amdvbflash tool worked like a charm for cross-flashing lol. Just seems like that particular BIOS isn't going to work with my GPU.


...dual Bios switch  ...my card also has that, and I may yet try to flash some 'forbidden fruit' bios since there's 'fall-back'


----------



## jonRock1992

L!ME said:


> Hmm, I don't know if this version will work with the LC BIOS; I haven't tested it.
> You're the first one to test it, and it didn't work. You could try putting another card in and checking whether it boots with both.
> Otherwise, a CH341A programmer is a good investment.
> @The EX1 the LC card has 16 Gbps chips and the normal ones 14 Gbps Samsung chips


I'm going to try to flash my backed up bios to the slot with the LC bios on it. I read that you can flip the bios switch over after you boot with the working bios and it will flash the corrupted slot.

Anyways, I'm going to try to flash the following bios. It's a different XTXH LC bios.








AMD RX 6900 XT VBIOS: 16 GB GDDR6, 500 MHz GPU, 914 MHz Memory (www.techpowerup.com)


----------



## The EX1

L!ME said:


> @The EX1 the LC card has 16 Gbps chips and the normal ones 14 Gbps Samsung chips


Thank you.


----------



## Neoki

Long-time lurker here finally looking to jump into the 5950X / 6900 XT benchmark leaderboard fray. Currently stuck deciding between the PowerColor Liquid Devil Ultimate and the Gigabyte Waterforce Extreme. There is a huge discount delta at the moment in favor of the Waterforce, but one thing I've always enjoyed on my EVGA cards was the dual BIOS for vBIOS flashing, and it looks like only the Devil has that. Any suggestions as to what to grab here for my custom loop build? This will be my first time jumping to the AMD side of the house since the early 2000s.


----------



## majestynl

jonRock1992 said:


> Thank you! I'll give this a try this weekend.


Let me know if it works. I ordered the clip but got a wrong delivery; waited too long. Going for the Linux flash if that works too...


----------



## jonRock1992

majestynl said:


> Let me know if it works. I ordered the clip but got a wrong delivery; waited too long. Going for the Linux flash if that works too...


The BIOS flashing tool flashed it successfully, but my PC wouldn't POST with it. If you use amdvbflash, make sure you have a dual BIOS switch.

If anyone else tries to flash the LC BIOS, please post results!


----------



## J7SC

Neoki said:


> Long time lurker here finally looking to jump into the 5950x / 6900 xt benchmark leaderboard fray here. Currently stuck deciding between the power color liquid devil ultimate and the gigabyte waterforce extreme. There is a huge discount delta at moment in favor of the waterforce. But one thing I've always enjoyed on my EVGA cards was the dual bios for vbios flashing, and it looks like only the devil has that. Any suggestions as to What to grab here for my custom loop build? This will be my first time jumping into the AMD side of the house since early 2000s.


...after a decade+ with Intel only, I added a 2950X in Dec '18, then a 3950X and 5950X earlier this year (work-play builds) and never looked back. That doesn't mean I won't consider Intel again in the future, but for now the 5950X still manages to exceed my expectations. I paired it with an Asus CH8 'Dark' mobo, which has some unique performance features; I highly recommend that mobo (or something similar), along with the fastest 32GB of RAM you can find, as the 5950X really likes that.

...as to your indicated 6900XT choices, I'm a bit partial to GB Aorus Waterforce GPU products as I've had a couple without any issues for years. Re the 6900XT Xtreme Waterforce, I actually think it might have a dual BIOS switch...all other Gigabyte/Aorus 6900XT GPUs that use the same PCB have dual BIOS (my 3x8-pin GB included). The PowerColor LD Ultimate is also a great-looking if pricey card btw, but I have no experience with PowerColor at all.


----------



## wmjbottriell

ptt1982 said:


> Let me rant about my experience with the Red Devil 6900XT once more, it's therapy for me and a warning for you:
> 
> If you want to OC your card "to the max" you need to put your card under water and use MPT, and buying the reference model is the cheapest and easiest way to do it. If you want to keep it on air and overclock it slightly, buy Nitro+, TUF or Gaming X Trio. This is essentially your best plug n play solution. I would not recommend using MPT on air, because the temps can shoot really high on air, and you have to be prepared to put your fans to 80-100%. UV+OC is probably the way to go on air.
> 
> If you are in the markets for the XTXH cards avoid the Red Devil Ultimate, because it has mounting problems which lead to high temperatures, or you need to put it under water, like some people here have. The binning doesn't seem to be as good as other models either, but it is cheaper. Don't be fooled by the beefy looking cooler.
> 
> If you never want to overclock, which means you are leaving 10-20% performance on the table, depending on silicon lottery, you can buy the Red Devil. Be prepared to heavily undervolt it with software like MPT to keep it cool enough so it doesn't thermally throttle. That's the only way to use Red Devil on air without constantly thinking "how hot is it running now...".
> 
> Whatever you do if you buy the Red Devil, do not open it yourself if it runs hot. Send it back. It is incredibly difficult to get the remount working, and the default thermal pads can break even when you are careful and essentially cannot be replaced by aftermarket pads because they are always too hard for the delicate mounting. The heatsink/cooler mounting pressure is messed up, so my theory is that the longer you use Red Devil the higher the temps will creep due to the mount loosening up. This is based on around 30 attempts to repaste, and using 4 different aftermarket thermal pads. Sometimes I would get it right and maybe a reduction of 10-15C compared to stock temps, but the temps crept back in three weeks due to losing the mounting pressure slowly. Under the same test the card junction temp was at 90C in TimeSpy 2-3 days after successful repasting, and three weeks later it reached 110C and thermal throttling using same drivers. I spent a lot of time inspecting the card, testing different ways of mounting it, and concluded that the thermal pads cannot be replaced, and that even if you repaste once successfully, there's a catastrophe waiting later. I did not use washers, they might help, but then again getting the mounting pressure right is incredibly difficult in the first place, especially if you want to keep your VRAM temps in check. My conclusion is to avoid the card all together or know that you need to put it under water. The latter option is expensive, and Red Devil's do not have the best bins out there either, and seem to be very sensitive to drivers in terms of core clocks (driver change can boost or reduce your core clock capability by 50mhz.)
> 
> Personally, didn't want to sell my war-torn Red Devil cheap, so I had to put it under water (ok I admit, I loved the idea of a new project), and now it finally works as it should when overclocked and using MPT. However it was a mess of a project, and caused a lot of stress and it sucked my time and money.
> 
> The best overall solution (no surprises here) for 6900xt seems to be highend AIO models or buying the cheapest card and putting it under cheap custom loop to keep the temps low and overclock high. Remember, 6900xt clocks do not boost based on GPU temps like Nvidia's cards, so there's no tangible benefit in gaming if your card is running 45C/70C vs 60C/85C (gpu/junction).
> 
> I won't compromise personally anymore when it comes to high end GPUs and will always use a waterblock, and buy a binned model. I'm willing to pay double the price to get rid of all the hassle that comes with cheap cards. It's a hobby, it doesn't need to make financial sense. I say, splurge on your GPU and never look back.


Holy ****.......I wish I had stumbled upon this months ago. This is exactly what I've gone through over the past few months with my Red Devil Ultimate. I've been losing my mind. I was just about to order my third set of expensive thermal pads when I stumbled upon this. Even though I'm still in the same ****ty position of high junctions, wide deltas and not being able to afford to put it on water, at least I can take solace in the fact that someone out there has gone through the same suffering as me. Which block did you go with? That custom EK one?
Cheers man,
-B


----------



## tootall123

Neoki said:


> Long time lurker here finally looking to jump into the 5950x / 6900 xt benchmark leaderboard fray here. Currently stuck deciding between the power color liquid devil ultimate and the gigabyte waterforce extreme. There is a huge discount delta at moment in favor of the waterforce. But one thing I've always enjoyed on my EVGA cards was the dual bios for vbios flashing, and it looks like only the devil has that. Any suggestions as to What to grab here for my custom loop build? This will be my first time jumping into the AMD side of the house since early 2000s.


I've owned both; both will provide fantastic performance.

However, I'd pick the Aorus if given the choice, as it comes with a 4-year warranty as opposed to 1 year with the PowerColor. The Aorus is also longer but not as tall, so you're less likely to run into issues with height and your side panel if you're mounting horizontally.

In my opinion the Aorus card also looks better.

It's had by far the best bin of any 6900 XT I've owned, and I've had a LOT.


----------



## majestynl

ptt1982 said:


> Let me rant about my experience with the Red Devil 6900XT once more, it's therapy for me and a warning for you:
> 
> If you want to OC your card "to the max" you need to put your card under water and use MPT, and buying the reference model is the cheapest and easiest way to do it. If you want to keep it on air and overclock it slightly, buy Nitro+, TUF or Gaming X Trio. This is essentially your best plug n play solution. I would not recommend using MPT on air, because the temps can shoot really high on air, and you have to be prepared to put your fans to 80-100%. UV+OC is probably the way to go on air.
> 
> If you are in the markets for the XTXH cards avoid the Red Devil Ultimate, because it has mounting problems which lead to high temperatures, or you need to put it under water, like some people here have. The binning doesn't seem to be as good as other models either, but it is cheaper. Don't be fooled by the beefy looking cooler.
> 
> If you never want to overclock, which means you are leaving 10-20% performance on the table, depending on silicon lottery, you can buy the Red Devil. Be prepared to heavily undervolt it with software like MPT to keep it cool enough so it doesn't thermally throttle. That's the only way to use Red Devil on air without constantly thinking "how hot is it running now...".
> 
> Whatever you do if you buy the Red Devil, do not open it yourself if it runs hot. Send it back. It is incredibly difficult to get the remount working, and the default thermal pads can break even when you are careful and essentially cannot be replaced by aftermarket pads because they are always too hard for the delicate mounting. The heatsink/cooler mounting pressure is messed up, so my theory is that the longer you use Red Devil the higher the temps will creep due to the mount loosening up. This is based on around 30 attempts to repaste, and using 4 different aftermarket thermal pads. Sometimes I would get it right and maybe a reduction of 10-15C compared to stock temps, but the temps crept back in three weeks due to losing the mounting pressure slowly. Under the same test the card junction temp was at 90C in TimeSpy 2-3 days after successful repasting, and three weeks later it reached 110C and thermal throttling using same drivers. I spent a lot of time inspecting the card, testing different ways of mounting it, and concluded that the thermal pads cannot be replaced, and that even if you repaste once successfully, there's a catastrophe waiting later. I did not use washers, they might help, but then again getting the mounting pressure right is incredibly difficult in the first place, especially if you want to keep your VRAM temps in check. My conclusion is to avoid the card all together or know that you need to put it under water. The latter option is expensive, and Red Devil's do not have the best bins out there either, and seem to be very sensitive to drivers in terms of core clocks (driver change can boost or reduce your core clock capability by 50mhz.)
> 
> Personally, didn't want to sell my war-torn Red Devil cheap, so I had to put it under water (ok I admit, I loved the idea of a new project), and now it finally works as it should when overclocked and using MPT. However it was a mess of a project, and caused a lot of stress and it sucked my time and money.
> 
> The best overall solution (no surprises here) for 6900xt seems to be highend AIO models or buying the cheapest card and putting it under cheap custom loop to keep the temps low and overclock high. Remember, 6900xt clocks do not boost based on GPU temps like Nvidia's cards, so there's no tangible benefit in gaming if your card is running 45C/70C vs 60C/85C (gpu/junction).
> 
> I won't compromise personally anymore when it comes to high end GPUs and will always use a waterblock, and buy a binned model. I'm willing to pay double the price to get rid of all the hassle that comes with cheap cards. It's a hobby, it doesn't need to make financial sense. I say, splurge on your GPU and never look back.


I've had the Reference model, Red Devil Ultimate LC, Aorus Extreme WF and Red Devil Ultimate air.

I disassembled the block from the RD Ultimate LC because I wasn't happy with the temps. After talking with EK, who made the block for them, they suggested I use 1mm and 0.5mm pads. After installing those my temps were even worse than before. So I contacted EK again; they investigated the issue and came back with a suggestion to use 0.5mm everywhere. After that I had the best temperatures ever with that card.

With my air version of the RD Ultimate I had good stock temps, but I still wanted to go for a block. I ordered a block from EK and went straight for the 0.5mm pads. Currently using this card with wonderful temps. But I agree, the original pads were crap as hell!

The Aorus Extreme WF was the worst of all; I sent it straight back to Amazon. Bad bin and bad temps, plus a broken DisplayPort.

I can't agree about the bin on the Red Devil; mine were all good! And the dual BIOS switch is one of the reasons I went for the RD!

If I had to choose a new one today, I would still go for the Red Devils! You just need to use the right height pads in the right places. It makes a massive difference when you do it right!

My 2c....


----------



## amigafan2003

J7SC said:


> Re. the 6900XT xtreme Waterforce, I actually think it might have a dual bios switch..


It does not.


----------



## xR00Tx

Don't know if you guys have done this test before, but last night I ran a few Time Spy runs with 4x8GB instead of 2x16GB (same timings for both setups) and found that 2x16GB is better (at least for 3DMark).


----------



## LtMatt

xR00Tx said:


> Don't know if you guys have done this test before, but last night I ran a few Time Spy runs with 4x8gb instead of 2x16gb (same timings for both setups) and found out that 2x16gb is better (at least for 3d mark).


I think, all things being equal, dual rank should always be slightly faster. That can sometimes be offset by single-rank DIMMs having a bit more overclocking headroom.

Using dual rank here, single rank before that.


----------



## thomasck

After a little while on air due to lack of time, I bought the Bykski block (I've been using their hardware for a while now) and received it today. Massive temperature drop: from 88C on the junction in a single run of Time Spy to 55C.
Same as the previous Bykski blocks I've installed, this one did not come with instructions, but it's pretty simple to assemble.
Two things I noticed that did not happen with the Radeon VII, for example: I didn't need to use the "X bracket" for the 6900XT; instead, 6 screws with springs go around the core, plus a few more around it and some longer ones to attach the backplate. It is possible to use the stock backplate, but I just didn't want to fiddle with it.
Another thing, and for a moment I thought I had bought the wrong block: these four components here had thermal pads on them, but they don't make any contact at all with any part of the Bykski block.








Has anyone noticed the same? I got the MSI version of the reference card, so I bought the Bykski block for the reference card.
Apart from this, everything seems fine. Gonna game a bit now to see how it behaves in terms of temperature.
Some extra pics,


Spoiler


----------



## tootall123

I scored 24 103 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## LtMatt

tootall123 said:


> I scored 24 103 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Aubergine.gif 🦴


----------



## jonRock1992

thomasck said:


> After a little while on air due to lack of time, I bought the Bykski block (I've been using their hardware for a while now) and received it today. Massive temperature drop, from 88C on the junction in a single run of Time Spy to 55C.
> Same as the previous Bykski blocks I've installed, this one also did not come with instructions, but it's pretty simple to assemble.
> Two things I noticed, which did not happen with the Radeon VII for example: I didn't need to use the "X bracket" for the 6900XT; instead, 6 screws with springs were used around the core, a few more around it, and some longer ones to attach the backplate. It is possible to reuse the stock backplate, but I just did not want to fiddle with it.
> Another thing, and for a moment I thought I had bought the wrong block, is these four components here that had thermal pads on them but don't make any contact at all with any part of the Bykski block.
> 
> 
> 
> 
> 
> 
> 
> 
> Has anyone noticed the same? I got the MSI version of the reference card, so I bought the Bykski block for the reference card.
> Apart from this, everything seems fine. Gonna game a bit now to see how it behaves in terms of temperature.
> Some extra pics,
> 
> 
> Spoiler


I'm using a Bykski block as well. Mine also didn't come with thick pads for a few of the small components that had them with my air cooler. I'm using the Red Devil Ultimate GPU though. Not sure if it really matters whether those parts are actively cooled or not.


----------



## thomasck

@jonRock1992 Well, for those little parts the pads would have to be THICK as hell, so I am not sure that would transfer any heat. But it's a good idea to try. I'm sure it's more than 3mm; I've got to check again. Let's see if someone pops up with some info!


----------



## tootall123

I scored 24 249 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





Over the moon with that. 

Unsure why the CPU clock is so high; it's only set to 48.5 in the BIOS.


----------



## lawson67

thomasck said:


> @jonRock1992 Well, for those little parts the pads would have to be THICK as hell, so I am not sure that would transfer any heat. But it's a good idea to try. I'm sure it's more than 3mm; I've got to check again. Let's see if someone pops up with some info!


You get those pads for the small components with the Alphacool block. You also get 3mm pads to make contact with the backplate, and they do transfer a lot of heat, as the backplate is bloody hot to the touch when gaming etc.


----------



## jonRock1992

lawson67 said:


> You get those pads for the small components with the Alphacool block. You also get 3mm pads to make contact with the backplate, and they do transfer a lot of heat, as the backplate is bloody hot to the touch when gaming etc.
> 
> View attachment 2521961
> View attachment 2521962


Oh damn. Guess that's what I get for going cheap lol. Maybe that's why I have a hard time getting stable in Time Spy lol. I run 2850MHz max core clock / 2150MHz fast timings in everything else. I'd kinda like to get a backplate pad. I have no idea what those other components are though. They aren't VRMs, right?


----------



## lawson67

jonRock1992 said:


> Oh damn. Guess that's what I get for going cheap lol. Maybe that's why I have a hard time getting stable in Time Spy lol. I run 2850MHz max core clock / 2150MHz fast timings in everything else. Does it specify the thickness of the backplate pad? I'd kinda like to get one on there. I have no idea what those other components are though. They aren't VRMs, right?


Yes, the backplate pads for the Alphacool are 3mm thick. And no, those smaller components are not the power phases, as those are covered by pads numbered 2 and 5 in the first picture. I have no idea what those other chips are, TBH.


----------



## weleh

Ignore the CPU score but I've got a new record...









I scored 21 613 in Time Spy


AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com





25715 Graphic Score.


----------



## thomasck

@lawson67 Thanks for the pics! In the 1st image they are covered, nice. But here with the Bykski block, the first two on the far upper right are not even covered by the cold plate; the third one (last of the first three that are together) is covered by the cold plate, as well as the one in between numbers 2 and 3. What does the second picture represent? Apparently it's the back of the card, but at the same time the regions coloured in yellow are the regions I have thermal pads on, just on the "inside" of the card: RAM, GPU and MOSFETs.
I've got some spare thermal pads and I thought about using them on the backplate, but nothing really called my attention to put them on. I just did a two-hour session of Zombies at 1440p all maxed out and the backplate was just warm. Gonna research a bit to see if I find anything useful about it.


----------



## tootall123

weleh said:


> Ignore the CPU score but I've got a new record...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 613 in Time Spy
> 
> 
> AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 25715 Graphic Score.


That's a monster GPU Score for those clocks!


----------



## lawson67

thomasck said:


> @lawson67 Thanks for the pics! In the 1st image they are covered, nice. But here with the Bykski block, the first two on the far upper right are not even covered by the cold plate; the third one (last of the first three that are together) is covered by the cold plate, as well as the one in between numbers 2 and 3. What does the second picture represent? Apparently it's the back of the card, but at the same time the regions coloured in yellow are the regions I have thermal pads on, just on the "inside" of the card: RAM, GPU and MOSFETs.
> I've got some spare thermal pads and I thought about using them on the backplate, but nothing really called my attention to put them on. I just did a two-hour session of Zombies at 1440p all maxed out and the backplate was just warm. Gonna research a bit to see if I find anything useful about it.


The second picture is the placement of the 3mm pads on the back of the card to make contact with the backplate. They just replicate the positions of the pads on the front of the card, apart of course from the big thermal pad stuck on the back of the GPU die.


----------



## D-EJ915

majestynl said:


> I had the Reference Model / Red devil ultimate LC / Aorus extreme WF / Red devil ultimate air
> 
> I disassembled the block from the RD Ultimate LC because I wasn't happy with the temps. After talking with EK, who made the block for them, they suggested I use 1mm and 0.5mm pads. After installing those my temps were even worse than before. So I contacted EK again, and they investigated the issue and came back with a suggestion to use 0.5mm everywhere. After this I had the best temperatures ever with that card.
> 
> With my air version of the RD Ultimate I had good stock temps, but I still wanted to go for a block. I ordered a block from EK and went directly for the 0.5mm pads. Currently using this card with wonderful temps. But I agree, the original pads were crap as hell!
> 
> The Aorus Extreme WF was the worst of all; I sent it back to Amazon immediately. Bad bin and bad temps, plus a broken DisplayPort.
> 
> I can't agree about the bin for the Red Devil. Mine were all good! And the dual BIOS switch is one of the reasons I went for the RD!
> 
> If I needed to choose a new one today, I would still go for the Red Devils!! You just need to use the right height pads in the right places! It makes a massive difference when done right!
> 
> My 2c....


The air-cooled Red Devil Ultimate is 1800 on Amazon right now; kind of makes me want to try it out lol.


----------



## LtMatt

tootall123 said:


> I scored 24 249 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Over the moon with that.
> 
> Unsure why the CPU clock is so high, its only set to 48.5 in the bios.


Well done, great score!


weleh said:


> Ignore the CPU score but I've got a new record...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 613 in Time Spy
> 
> 
> AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 25715 Graphic Score.


Very nice graphics score!

Any special tweaks as you said you were struggling to even breach 25K earlier?


----------



## thomasck

lawson67 said:


> The second picture is the placement of the 3mm pads for the back of the card to make contact with the back plate, they just replicate the positions of the pads on the front of the card apart of course from the big thermal pad stuck on the back of the GPU Die


Thanks for clarifying. Gonna add those 

Sent from my Pixel 2 XL using Tapatalk


----------



## tootall123

LtMatt said:


> Any special tweaks as you said you were struggling to even breach 25K earlier?


Also very interested in this; your memory clocks look low, and your temps are a bit higher than on some runs that scored lower than yours.

Well done though!


----------



## weleh

I don't think there are any secrets left, at least, that I know of regarding the cards and interaction with MPT.

My settings are pretty straight forward on MPT.

I disable deepsleep, 600W 600A 65A, FT2 on MPT but FT1 on driver, 2200 fCLK, 2200 fCLK Boost and 1260 SOC.

Now regarding the FT1 vs FT2 controversy, I must add that FT2 in MPT with FT1 in the driver is faster than FT1 in MPT with FT1 in the driver, at the expense of some memory clock headroom. This is easily tested with mining software: I can mine at 2150MHz with FT1 in MPT/driver, but if I try it with FT2 in MPT and FT1 in the driver I get flicker and a hard crash.

The same degradation can be seen in Time Spy, because once I got my clockspeeds dialed in I just played with VRAM, and basically anything above 2110 results in terrible performance.

In terms of VRAM speed, the higher the core clock, the lower the memory clock I need to run, so this also plays a role. Some people might be hitting ECC and not realizing it.

Now don't forget I'm using a volt mod on my card (EVC), and unfortunately, contrary to popular belief, even with insane amounts of voltage these cards do not scale infinitely. In fact, my card does not like being above 2830MHz throughout GT1 and 2800MHz in GT2. The same can be said for the amount of voltage you pump into the card. It's pure silicon lottery, as you can see from Faxtor2's card on the HoF, which scales insanely well with voltage; that's why he can brute-force results, contrary to most of us.

There aren't any special tricks or secrets. These cards do not care about temperature at all unless you're going really cold (0C and below). Temperatures are especially important for volt-modded cards due to the higher power and consequently higher temps, but as long as you don't hit throttle temps you're good. This is something I tested with my own card: I use a portable AC to blow on the card's radiator and I was still seeing super high hotspot temps, with deltas of almost 40C between edge and hotspot. On one of my good runs I even hit a 105C hotspot and the score was still super high, with no throttling involved.

Also, I would like to point out that, despite what's been said on the hardwareluxx forums (really good stuff there by the way, use a translator), CPU speed and RAM speed don't matter at all if you only care about GPU score, as long as you have a decent CPU (9900K and above). TS and TSE are 100% GPU bound, even on a 6900XT at insane clockspeeds. This is easily proven by my own best score, where my CPU was bugged or some **** and I tried a manual OC via Windows and ended up bugging it even more, getting worse performance than stock XMP + stock CPU... and still managed a super high graphics score. It's just pure conspiracy at this point.

As far as tweaks go, I basically disable Freesync globally and disable the GPU's audio device in Device Manager. I run a debloated Windows via script, but I find it only helps the CPU score anyway. I also disable the visual effects and animations; not sure it helps, but give it a try.

As far as monitoring goes, I use remote HWiNFO on a laptop connected via LAN to my PC so I can observe core clock behaviour. This helps fine-tune everything down to the micro level, and I suggest you guys do the same. Observe clock speeds, observe FPS numbers in certain areas so you know if you're progressing or regressing, etc.

All in all, the secret here is effective clockspeed, hence why watching HWiNFO during test runs is good. It doesn't matter if TS reports 2900MHz if you're not actually running close to that. I also found that running DS on vs off skews the reported numbers in TS, so I wouldn't pay much attention to them.
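The effective-clock point can also be checked offline. Below is a minimal sketch that averages an effective-clock column from a HWiNFO CSV log; the column name and the toy log are assumptions for illustration, so match them to your own export (sensor labels vary by HWiNFO version and card).

```python
# Minimal sketch: estimate average effective core clock from a HWiNFO CSV log.
# The column name "GPU Clock (Effective) [MHz]" is an assumption -- check your
# own log header, as HWiNFO sensor labels vary by version and card.
import csv
import io

def avg_effective_clock(csv_text, column="GPU Clock (Effective) [MHz]"):
    """Average the named sensor column, skipping blank or non-numeric cells."""
    reader = csv.DictReader(io.StringIO(csv_text))
    samples = []
    for row in reader:
        cell = (row.get(column) or "").strip()
        try:
            samples.append(float(cell))
        except ValueError:
            continue  # repeated headers / blank rows appear in long logs
    return sum(samples) / len(samples) if samples else 0.0

# Toy log: the reported clock and the effective clock can differ under load.
log = (
    "Time,GPU Clock [MHz],GPU Clock (Effective) [MHz]\n"
    "12:00:01,2900,2740\n"
    "12:00:02,2900,2755\n"
    "12:00:03,2900,2760\n"
)
print(round(avg_effective_clock(log), 1))  # 2751.7
```

Averaging the effective clock over a full GT1/GT2 run tells you whether a "2900MHz" setting is actually being sustained or silently drooping.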


----------



## jonRock1992

weleh said:


> I don't think there are any secrets left, at least, that I know of regarding the cards and interaction with MPT.
> 
> 
> 
> My settings are pretty straight forward on MPT.
> 
> 
> 
> I disable deepsleep, 600W 600A 65A, FT2 on MPT but FT1 on driver, 2200 fCLK, 2200 fCLK Boost and 1260 SOC.
> 
> 
> 
> Now regarding the FT1 vs FT2 controversy, I must add that FT2 in MPT with FT1 in the driver is faster than FT1 in MPT with FT1 in the driver, at the expense of some memory clock headroom. This is easily tested with mining software: I can mine at 2150MHz with FT1 in MPT/driver, but if I try it with FT2 in MPT and FT1 in the driver I get flicker and a hard crash.
> 
> The same degradation can be seen in Time Spy, because once I got my clockspeeds dialed in I just played with VRAM, and basically anything above 2110 results in terrible performance.
> 
> In terms of VRAM speed, the higher the core clock, the lower the memory clock I need to run, so this also plays a role. Some people might be hitting ECC and not realizing it.
> 
> Now don't forget I'm using a volt mod on my card (EVC), and unfortunately, contrary to popular belief, even with insane amounts of voltage these cards do not scale infinitely. In fact, my card does not like being above 2830MHz throughout GT1 and 2800MHz in GT2. The same can be said for the amount of voltage you pump into the card. It's pure silicon lottery, as you can see from Faxtor2's card on the HoF, which scales insanely well with voltage; that's why he can brute-force results, contrary to most of us.
> 
> There aren't any special tricks or secrets. These cards do not care about temperature at all unless you're going really cold (0C and below). Temperatures are especially important for volt-modded cards due to the higher power and consequently higher temps, but as long as you don't hit throttle temps you're good. This is something I tested with my own card: I use a portable AC to blow on the card's radiator and I was still seeing super high hotspot temps, with deltas of almost 40C between edge and hotspot. On one of my good runs I even hit a 105C hotspot and the score was still super high, with no throttling involved.
> 
> Also, I would like to point out that, despite what's been said on the hardwareluxx forums (really good stuff there by the way, use a translator), CPU speed and RAM speed don't matter at all if you only care about GPU score, as long as you have a decent CPU (9900K and above). TS and TSE are 100% GPU bound, even on a 6900XT at insane clockspeeds. This is easily proven by my own best score, where my CPU was bugged or some **** and I tried a manual OC via Windows and ended up bugging it even more, getting worse performance than stock XMP + stock CPU... and still managed a super high graphics score. It's just pure conspiracy at this point.
> 
> As far as tweaks go, I basically disable Freesync globally and disable the GPU's audio device in Device Manager. I run a debloated Windows via script, but I find it only helps the CPU score anyway. I also disable the visual effects and animations; not sure it helps, but give it a try.
> 
> As far as monitoring goes, I use remote HWiNFO on a laptop connected via LAN to my PC so I can observe core clock behaviour. This helps fine-tune everything down to the micro level, and I suggest you guys do the same. Observe clock speeds, observe FPS numbers in certain areas so you know if you're progressing or regressing, etc.
> 
> All in all, the secret here is effective clockspeed, hence why watching HWiNFO during test runs is good. It doesn't matter if TS reports 2900MHz if you're not actually running close to that. I also found that running DS on vs off skews the reported numbers in TS, so I wouldn't pay much attention to them.


Some solid info there. Thanks!


----------



## lawson67

weleh said:


> I don't think there are any secrets left, at least, that I know of regarding the cards and interaction with MPT.
> 
> My settings are pretty straight forward on MPT.
> 
> I disable deepsleep, 600W 600A 65A, FT2 on MPT but FT1 on driver, 2200 fCLK, 2200 fCLK Boost and 1260 SOC.
> 
> Now regarding the FT1 vs FT2 controversy, I must add that FT2 in MPT with FT1 in the driver is faster than FT1 in MPT with FT1 in the driver, at the expense of some memory clock headroom. This is easily tested with mining software: I can mine at 2150MHz with FT1 in MPT/driver, but if I try it with FT2 in MPT and FT1 in the driver I get flicker and a hard crash.
> 
> The same degradation can be seen in Time Spy, because once I got my clockspeeds dialed in I just played with VRAM, and basically anything above 2110 results in terrible performance.
> 
> In terms of VRAM speed, the higher the core clock, the lower the memory clock I need to run, so this also plays a role. Some people might be hitting ECC and not realizing it.
> 
> Now don't forget I'm using a volt mod on my card (EVC), and unfortunately, contrary to popular belief, even with insane amounts of voltage these cards do not scale infinitely. In fact, my card does not like being above 2830MHz throughout GT1 and 2800MHz in GT2. The same can be said for the amount of voltage you pump into the card. It's pure silicon lottery, as you can see from Faxtor2's card on the HoF, which scales insanely well with voltage; that's why he can brute-force results, contrary to most of us.
> 
> There aren't any special tricks or secrets. These cards do not care about temperature at all unless you're going really cold (0C and below). Temperatures are especially important for volt-modded cards due to the higher power and consequently higher temps, but as long as you don't hit throttle temps you're good. This is something I tested with my own card: I use a portable AC to blow on the card's radiator and I was still seeing super high hotspot temps, with deltas of almost 40C between edge and hotspot. On one of my good runs I even hit a 105C hotspot and the score was still super high, with no throttling involved.
> 
> Also, I would like to point out that, despite what's been said on the hardwareluxx forums (really good stuff there by the way, use a translator), CPU speed and RAM speed don't matter at all if you only care about GPU score, as long as you have a decent CPU (9900K and above). TS and TSE are 100% GPU bound, even on a 6900XT at insane clockspeeds. This is easily proven by my own best score, where my CPU was bugged or some **** and I tried a manual OC via Windows and ended up bugging it even more, getting worse performance than stock XMP + stock CPU... and still managed a super high graphics score. It's just pure conspiracy at this point.
> 
> As far as tweaks go, I basically disable Freesync globally and disable the GPU's audio device in Device Manager. I run a debloated Windows via script, but I find it only helps the CPU score anyway. I also disable the visual effects and animations; not sure it helps, but give it a try.
> 
> As far as monitoring goes, I use remote HWiNFO on a laptop connected via LAN to my PC so I can observe core clock behaviour. This helps fine-tune everything down to the micro level, and I suggest you guys do the same. Observe clock speeds, observe FPS numbers in certain areas so you know if you're progressing or regressing, etc.
> 
> All in all, the secret here is effective clockspeed, hence why watching HWiNFO during test runs is good. It doesn't matter if TS reports 2900MHz if you're not actually running close to that. I also found that running DS on vs off skews the reported numbers in TS, so I wouldn't pay much attention to them.


Using 1260 SOC got my ram scaling again so thanks for that


----------



## weleh

I forgot something super important that I figured out by watching clockspeed in HWiNFO during runs.

The overboost issues are related to excess voltage and too-high clockspeeds. This is why some cards might actually benefit from a small undervolt via the driver slider: GT1 is easy to pass, but GT2 is where it's at. With a slight undervolt it may be possible to pass GT2 at a higher clockspeed, because you don't trigger the overboost bug, so your card can stay on the verge of stability the whole time.


----------



## weleh

Sorry, another thing: overclocking bCLK should also bring a couple of points, though I haven't tested this myself and have only seen others report it.

Keep in mind that if your board has an external clock generator it won't bring any performance uplift, because PCIe is separated from bCLK. If your board does not have one, then be careful, because too high a bCLK can corrupt data on NVMe drives (SATA should be fine).
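The point above can be sketched in a few lines: without an external clock generator the PCIe (and thus NVMe link) clock follows bCLK directly, while with one it stays at the nominal 100 MHz. The 100 MHz nominal value is standard, but the idea that a few MHz over it upsets NVMe drives is this thread's claim, and exact margins vary by board and drive.

```python
# Sketch: how a bCLK raise propagates to the PCIe clock. Without an external
# clock generator, PCIe runs off bCLK; with one, PCIe stays at nominal.
NOMINAL_BCLK = 100.0  # MHz, the standard PCIe reference clock

def pcie_clock(bclk_mhz, external_clockgen):
    """Resulting PCIe reference clock for a given bCLK setting."""
    return NOMINAL_BCLK if external_clockgen else bclk_mhz

for bclk in (100.0, 102.0, 105.0):
    linked = pcie_clock(bclk, external_clockgen=False)
    decoupled = pcie_clock(bclk, external_clockgen=True)
    print(f"bCLK {bclk:5.1f} MHz -> PCIe linked {linked:5.1f} / decoupled {decoupled:5.1f}")
```

So a 5% bCLK overclock means a 5% out-of-spec PCIe clock on boards without an external generator, which is exactly where NVMe drives can start misbehaving.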


----------



## tootall123

I scored 31 712 in Fire Strike Extreme


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





Really pleased. 

For a while I had a 3090, but I wouldn't ever dream of going back.


----------



## lh2p

I'm having weird issues where the cpu performance will randomly drop because it decided it wants to draw 100w instead of the normal 200-300w. The 6900xt red devil seems to be very inconsistent with power draw. Has anyone experienced this?


----------



## tolis626

Guys, I'm having a strange issue where my damn card will crash during gaming while being nowhere near where it should be unstable. For example, it will complete Superposition over and over again at 2700MHz core / 2150MHz memory, but Warzone will crash at those settings (and lower; basically anything above 2600MHz, and even that will occasionally crash). Which is very strange, because during Warzone it is basically half working, consuming on average something like 250W (maybe less) and clocking erratically between 2200MHz and barely 2500MHz. Any ideas? It's driving me nuts, but what makes me even crazier is the thought of running it... _shudders_... stock.

For more details: I run a slight undervolt of 1150mV in MPT, returned to 1150mV max SoC voltage (no point going lower, really; it already goes to about 1.05V under load, no higher), 330W PL in MPT with 0% added, 1260MHz SoC clock (same behavior with the stock 1200MHz) and 1800MHz fclk boost (again, same behavior as stock). I also run an "undervolt" to 1.075V in software, but really the core voltage hovers around 1.05V to 1.09V actual, per HWiNFO64's readings.

EDIT : I DDU'd my drivers after deleting my SPPT table and reinstalled everything. I can't be arsed to test Warzone because its behavior is erratic in general, so I fired up 3DMark and decided to go with Firestrike Ultra. Damn thing crashes almost instantly during GT1 at 2650MHz. And that was with no MPT, just with a fresh driver install. Meanwhile, Superposition doesn't give a damn and runs just fine. My brain hurts.


----------



## ZealotKi11er

If FS is crashing at 2650, it's not going to be stable in Warzone. My friend also told me he has stability issues with overclocks in Warzone.


----------



## cfranko

I want to mount my 6900 xt with a waterblock vertically using a pcie 3.0 riser, but I also want good benchmark scores. Does a PCIE 3.0 riser hurt Time Spy scores?


----------



## J7SC

cfranko said:


> I want to mount my 6900 xt with a waterblock vertically using a pcie 3.0 riser, but I also want good benchmark scores. Does a PCIE 3.0 riser hurt Time Spy scores?


...from the tests I have seen, the PCIe 4 > PCIe 3 impact is marginal / insignificant on a single card. That said, I picked up a couple of PCIe 4.0 risers (Linkup @ Amazon) which work exceptionally well per my own testing on 3090 and 6900XT...testbench pic / 6900XT below


----------



## tolis626

ZealotKi11er said:


> If FS is crashing at 2650, it's not going to be stable in Warzone. My friend also told me he has stability issues with overclocks in Warzone.


Well, I figured as much. It just bums me. Is my GPU that bad? Or is something else going on? That's the real question.


----------



## cfranko

J7SC said:


> ...from the tests I have seen, the PCIe 4 > PCIe 3 impact is marginal / insignificant on a single card. That said, I picked up a couple of PCIe 4.0 risers (Linkup @ Amazon) which work exceptionally well per my own testing on 3090 and 6900XT...testbench pic / 6900XT below
> 
> View attachment 2522172
> 
> View attachment 2522171


I didn't quite understand. So according to the tests on YouTube and such, the difference between PCIe 3.0 and 4.0 is insignificant, but from your testing PCIe 4.0 is much better, right?


----------



## J7SC

cfranko said:


> I didn't quite understand. So according to the tests on YouTube and such, the difference between PCIe 3.0 and 4.0 is insignificant, but from your testing PCIe 4.0 is much better, right?


...I ran my 3090 in a PCIe 3.0 based testbench before moving it to a PCIe 4.0 system...the differences in scores were within margin of error (maybe 50 pts out of 15,000). But since I had no actual PCIe risers of any kind to begin with, I bought two PCIe 4.0 risers
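To put those numbers in perspective, here is a quick sketch of the delta as a percentage. The ~1% run-to-run variance figure used as the noise threshold is an assumption for illustration, not something from this thread or a 3DMark specification.

```python
# Rough sketch: is a 50-point delta on a ~15,000-point score meaningful,
# given typical benchmark run-to-run variance? The ~1% variance figure is
# an assumed noise threshold for illustration only.
def pct_delta(score_a, score_b):
    """Absolute difference between two scores as a percent of the larger."""
    return abs(score_a - score_b) / max(score_a, score_b) * 100

RUN_VARIANCE_PCT = 1.0  # assumed typical run-to-run noise

delta = pct_delta(15000, 14950)
print(f"delta = {delta:.2f}% (noise threshold {RUN_VARIANCE_PCT}%)")
print("within margin of error" if delta < RUN_VARIANCE_PCT else "significant")
```

A 0.33% difference is comfortably inside typical run-to-run scatter, which matches the "within margin of error" conclusion above.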


----------



## cfranko

J7SC said:


> ...I ran my 3090 in a PCIe 3.0 based testbench before moving it to a PCIe 4.0 system...the differences in scores were within margin of error (maybe 50 pts out of 15,000). But since I had no actual PCIe risers of any kind to begin with, I bought two PCIe 4.0 risers


Oh ok, thanks for the explanation. I guess I'll just use my PCIe 3.0 riser then.


----------



## Nicklas0912

Hello boys, which 6900 XT should I buy? RX 6900 Strix LC or Red Devil?

Anyone know what the max driver OC limit is? I kinda want to run them under LN2 at some point.
Or any better model?


----------



## J7SC

cfranko said:


> Oh ok, thanks for the explanation. I guess I'll just use my PCIe 3.0 riser then.


...fyi...









Nicklas0912 said:


> Hello boys, which 6900 XT should I buy? RX 6900 Strix LC or Red Devil?
> 
> Anyone know what the max driver OC limit is? I kinda want to run them under LN2 at some point.
> Or any better model?


...just make sure you get an XTXH model, as that will have a higher (theoretical) limit of up to 5000 MHz on the GPU and 3000 MHz on the VRAM (ie. for the Aorus WF WB)...on the Asus Strix, you should be looking for the Strix LC 'TOP'


----------



## Nicklas0912

I know this is in Danish.

But is this the card: https://www.fcomputer.dk/asus-rog-strix-lc-rx6900xt-t16g-gaming-90yv0gf1-m0nm00 ?

How can I tell if it has it or not?


*ASUS ROG-STRIX-LC-RX6900XT-T16G-GAMING - AMD Radeon RX6900XT - 16GB GDDR6 *


Can't find any TOP version...

Edit: it says 2525 MHz boost clock, so it should be the TOP as TechPowerUp said? Same boost clocks.








ASUS Unveils ROG Strix LC Radeon RX 6900 XT TOP T16G Based on XTXH Silicon, 2525MHz Boost


ASUS today updated its flagship AMD Radeon RX 6000 series graphics card lineup with the new ROG Strix LC Radeon RX 6900 XT TOP T16G. This card is visually identical to the ROG Strix LC RX 6900 XT O16G the company launched last December, but with a handful updates. The biggest of these is the new...




www.techpowerup.com


----------



## Nicklas0912

And what about the
*XFX Speedster MERC319 Radeon RX 6900 XT - Limited?*

Is that XTXH or XTX?


----------



## J7SC

Nicklas0912 said:


> I know this is in Danish.
> 
> But is this the card: https://www.fcomputer.dk/asus-rog-strix-lc-rx6900xt-t16g-gaming-90yv0gf1-m0nm00 ?
> 
> How can I tell if it has it or not?
> 
> 
> *ASUS ROG-STRIX-LC-RX6900XT-T16G-GAMING - AMD Radeon RX6900XT - 16GB GDDR6 *
> 
> 
> Can't find any TOP version...
> 
> Edit: it says 2525 MHz boost clock, so it should be the TOP as TechPowerUp said? Same boost clocks.
> 
> 
> 
> 
> 
> 
> 
> 
> ASUS Unveils ROG Strix LC Radeon RX 6900 XT TOP T16G Based on XTXH Silicon, 2525MHz Boost
> 
> 
> ASUS today updated its flagship AMD Radeon RX 6000 series graphics card lineup with the new ROG Strix LC Radeon RX 6900 XT TOP T16G. This card is visually identical to the ROG Strix LC RX 6900 XT O16G the company launched last December, but with a handful updates. The biggest of these is the new...
> 
> 
> 
> 
> www.techpowerup.com


...yes, this one from techpowerup > TOP


----------



## weleh

Nicklas0912 said:


> Hello boys, which 6900 XT should I buy? RX 6900 Strix LC or Red Devil?
> 
> Anyone know what the max driver OC limit is? I kinda want to run them under LN2 at some point.
> Or any better model?


The ASRock OC Formula has the best PCB for LN2 in general.

Also, if you can, grab the 18Gbps version of the reference card.


----------



## Nicklas0912

Sadly that card costs too much here  Strix TOP is cheaper.


----------



## lestatdk

Nicklas0912 said:


> Sadly that card costs too much here  Strix TOP is cheaper.


It'll be hard to find anything else even close to that price. I've noticed proshop has bumped up their prices as well.
Less than 13k for a new 6900XT is about as low as you'll get in current conditions


----------



## VikTOR_RUS

weleh said:


> At the moment I have the highest PR score on HWBOT.


How? What's the secret? ))


----------



## Nicklas0912

I know  I just meant the ASRock one costs way too much.


----------



## J7SC

Nicklas0912 said:


> I know  I just meant the ASRock one costs way too much.


...interestingly enough, the 2x 8-pin OEM AMD 6900XTs seem to hold their own after checking HWBot...still, at the end of the day, I would make sure that:

a.) you get an XTX'H' card
b.) it has a 'real' 3x 8-pin VRM / PCB (not one of those siamesed jobs)
c.) you get lucky in the lottery 

I think you're doing the right thing looking at price as well, subject to the above. I personally like the Aorus XTR WF WB, and some folks here had great success with them right out of the box, while others did not have their expectations met. For the record, the Aorus has a bios with a GPU limit of 5000 MHz (a tad optimistic if you ask me) and a VRAM limit of 1500>3000 (ditto).

The Asus TOP card may be interesting because from my HWBot days, I remember that Asus could be 'sneaky' and release a special, unadvertised GPU Tweak package listed only for their rare top cards which actually had all kinds of limits removed - great for HWBot !

Quite a few folks at HWBot will add something like Elmor's EVC2SX anyway, so I would just look for a really solid PCB and the best price. I recall Buildzoid praising his card to high heaven, right until it blew up...now it's "The last ramble about my RX 6900XT Red Devil Ultimate because it's SUPER DEAD", or 'get a 3090'.

Do let us know what you get, though - 'inquiring minds want to know'  !


----------



## lestatdk

J7SC said:


> Quite a few folks at HWBot will add s.th. like Elmor's EVC2SX anyways, so I would just look for a really solid PCB and best price. I recall Buildzoid praising his card to high heavens, right until it blew up...now it is "The last ramble about my RX 6900XT Red Devil Ultimate because it's SUPER DEAD" or 'get a 3090'


It amazes me that BZ is surprised it blew up considering all the changes he made to it 

I nearly had a heart attack after we mounted the modified waterblock to my card 

BZ plastered his PCB with various capacitors and resistors and stuff. I'm surprised it lasted that long tbh.

Would be cool if Powercolor sent him a replacement though since his scores did raise awareness of the capabilities of these cards . Probably not going to happen


----------



## weleh

The problem with BZ's card is not that he removed the limits, many people here bench with similar settings.

The problem is that he was running a fairly weak PCB in comparison. I mean, a 700A VRM is like 400A lower than my card's, and it's an XTX, not an XTXH... PowerColor really slacked on that PCB I guess, but then again PowerColor didn't expect people to push 1.35Vcore under load on their GPU either.

LN2 had everything to do with his card blowing up, and his insulation job wasn't the best either, so I suspect water might be connected to it. 

At the end of the day, everything can fail, and his card, unfortunately for him, failed.

Out of curiosity, I've done >1.3Vcore, 999A/999W under load on TS on water cooling and haven't had issues, nor do I suspect I'll ever have issues. In fact, my EVC2SX while benching reported over 600A many times, so you can only imagine what his card was pulling with a 300mV offset on the core on top of the 1.2V while benching with LN2 at 3000MHz, pulling god knows how much power on a weak PCB....
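As a rough sanity check on the numbers above (a sketch, assuming the simple relation P = V × I and ignoring VRM conversion losses; the 7-phase split below is a made-up illustration, not any specific card's actual phase count):

```python
def core_power_w(vcore: float, amps: float) -> float:
    # Rough core power estimate: P = V * I (ignores VRM conversion losses)
    return vcore * amps

def per_phase_amps(total_amps: float, phases: int) -> float:
    # Average current each VRM power stage carries under an even split
    return total_amps / phases

# ~1.2V at the 600A reported by the EVC2SX is already ~720W into the core alone
print(core_power_w(1.2, 600))
# Hypothetical: that 600A load shared evenly across 7 power stages
print(per_phase_amps(600, 7))
```

With a 300mV offset on top, the same math lands well past 900W, which is why a 700A-rated VRM under LN2 is living dangerously.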


----------



## ZealotKi11er

weleh said:


> The problem with BZ's card is not that he removed the limits, many people here bench with similar settings.
> 
> The problem is that he was running a fairly weak PCB in comparison. I mean, a 700A VRM is like 400A lower than my card's, and it's an XTX, not an XTXH... PowerColor really slacked on that PCB I guess, but then again PowerColor didn't expect people to push 1.35Vcore under load on their GPU either.
> 
> LN2 had everything to do with his card blowing up, and his insulation job wasn't the best either, so I suspect water might be connected to it.
> 
> At the end of the day, everything can fail, and his card, unfortunately for him, failed.
> 
> Out of curiosity, I've done >1.3Vcore, 999A/999W under load on TS on water cooling and haven't had issues, nor do I suspect I'll ever have issues. In fact, my EVC2SX while benching reported over 600A many times, so you can only imagine what his card was pulling with a 300mV offset on the core on top of the 1.2V while benching with LN2 at 3000MHz, pulling god knows how much power on a weak PCB....


Your scores are with 1.3v?


----------



## thomasck

E-mailed Bykski about the image from a couple of pages ago regarding those four components that won't touch the cold plate without thick pads that are not included in the box. Surprisingly, they answered me back in a couple of hours.



> They're not necessary to cool directly. There's a variety of factors that determine what does and doesn't need a thermal pad on them, and many of them are to do with heatsink design. Many cards also use thermal pads on parts that really just don't need them in general because of an old misguided internet scandal (EVGA's bad batch of components their supplier shipped them and they used on 1080 FTW cards) in an attempt to avoid criticism over something harmless (power components running dozens of degrees celsius below their rated maximum safe temperature. Scandalous, I know.). Those parts are capable of passively dissipating their heat so long as they've got room to, which it seems like they may not have with the stock cooler.


Sent from my Pixel 2 XL using Tapatalk


----------



## The EX1

weleh said:


> ASROCK OC FÓRMULA has the best PCB for LN2 in general.
> 
> Also if you can, grab the 18Gbps version of the reference card.


Isn't that AMD LC card only available in prebuilts? I know normal XTX cards hit the frequency limit of 2130-2150 on memory. Are the XTXH cards doing any better?


----------



## LtMatt

Who is going to test 21.8.2 on Timespy?

I nominate @ZealotKi11er, @Nicklas0912 and @tolis626 for testing duties.


----------



## weleh

ZealotKi11er said:


> Your scores are with 1.3v?


No, my scores were super efficient. 450W range at like 1.18-1.22V under load.


----------



## weleh

I hope 21.8.2 does not have performance improvements, I sold my PC


----------



## lestatdk

weleh said:


> I hope 21.8.2 do not have performance improvements, I sold my PC


Hey you ! Don't jinx it


----------



## LtMatt

weleh said:


> I hope 21.8.2 do not have performance improvements, I sold my PC


I'll beat your graphics score on Timespy one day.


----------



## thomasck

Has anyone tried 21.8.2? Seems RIS is not working?

EDIT

It is working, but just not with Warzone for some reason.


----------



## tolis626

LtMatt said:


> Who is going to test 21.8.2 on Timespy?
> 
> I nominate @ZealotKi11er, @Nicklas0912 and @tolis626 for testing duties.


Why would you even consider me? Only useful thing I could tell you is if they somehow magically managed to make my GPU not suck at overclocking. 

With that said, I found out (I think) why my card was SO terrible. I kept reading how undervolting via the drivers wasn't really undervolting and that the card would use whatever voltage it needed etc. So I had left the voltage at 1.075V in the driver, after having read that lowering it could alleviate issues with overboosting in some games. Well, turns out that that was the culprit. My card isn't an amazing overclocker by any stretch of the imagination, but having the slider at 1.15V (1.175V in MPT) actually lets the card use more voltage under load and thus it is stable at 2650MHz. I have yet to test higher, but for the time being I am just enjoying the card without my stupidity getting in the way. 

Will test 21.8.2 and TS later.


thomasck said:


> Has anyone tried 21.8.2? Seems RIS is not working?
> 
> EDIT
> 
> Is working, but just not with Warzone for some reason.


Well, Warzone is a mess anyway, so meh. If it's the only thing not working properly, just be grateful. When I had the 5700XT, I had an overclock of 2150MHz core/1800MHz memory, which was tested with a lot of benchmarking and hundreds of hours in other games. No problems at all. Then Warzone would sometimes crash out of the blue, sometimes in the menu, sometimes during a match. It's a complete mess of a game.


----------



## thomasck

@tolis626 I fully agree! My mates are a bit reluctant to try other games; we play mainly to "meet" and talk about random stuff, just for fun..


----------



## tolis626

thomasck said:


> @tolis626 I fully agree! My mates a bit reluctant in trying other games, we play mainly to "meet" and talk about random stuff, just for fun..


Same story here. I have a few friends that live in other cities, and we mainly play Warzone together just because it's something we all play and we can talk etc. But I've had enough. It's been a buggy mess since the start and it's infested with hackers. I will be going back to Battlefield once 2042 comes out. I just hope that that isn't a mess too and that they don't shill too hard for NVidia.

In other news, @LtMatt, I managed to complete TS at 2650MHz for the first time (on 21.8.1 so that I have a baseline; I also buckled and bought the TS DLC, no more demo), so that was my issue after all. The two problems I have now are that hotspot reaches 105C during GT2 at 350W PL and that, for some reason, my CPU score is atrocious at 12100. I was under the impression that that was a good score, but it's closer to what my 3800x used to get than to what the 5900x should be getting. I seem to remember high core count CPUs needing to disable SMT, but wasn't that the case only for the 16 core ones?

EDIT : Just finished testing. Graphics score with 21.8.1 was 22205. Same settings, 21.8.2 crashed 2 times during GT2. Rebooted and everything went fine, so I don't know which one's the fluke, but when it crashed it had momentarily boosted to 2648MHz, which didn't happen after reboot. Anyway, 21.8.2 gave me a graphics score of 22180, so just a tad lower but within margin of error. CPU scores still suck, hotspot temps still suck.

EDIT 2 : Well, 21.8.2 seems to crash more easily than 21.8.1, but that may be completely random here. Still, when it crashes, it does so in GT2 and it overboosts to the limit I've set. Dunno why it does that. Haven't managed to get a complete run yet with the PL cranked to 375W (that's with my case side panel open and cold air being blasted on my system, god bless air conditioning). TL;DR my card still sucks.


----------



## jonRock1992

weleh said:


> I hope 21.8.2 do not have performance improvements, I sold my PC


I hope it does! I'm within spitting distance of 25K graphics score, and I've tried everything except EVC2SX to get there. I literally need less than 50 points 🤣


----------



## jfrob75

tolis626 said:


> Same story here. I have a few friends that live in other cities, and we mainly play Warzone together just because it's something we all play and we can talk etc. But I've had enough. It's been a buggy mess since the start and it's infested with hackers. I will be going back to Battlefield once 2042 comes out. I just hope that that isn't a mess too and that they don't shill too hard for NVidia.
> 
> In other news, @LtMatt , I managed to complete TS at 2650MHz for the first time (on 21.8.1 so that I have a baseline, I also buckled and bought the TS DLC, no more demo), so that was my issue after all. The two problems I have now is that hotspot reaches 105C during GT2 at 350W PL and that, for some reason, my CPU score is atrocious at 12100. I was under the impression that that was a good score, but it's closer to what my 3800x used to get than to what the 5900x should be getting. I seem to remember high core count CPUs needing to disable SMT, but wasn't that the case only for the 16 core ones?
> 
> EDIT : Just finished testing. Graphics score with 21.8.1 was 22205. Same settings, 21.8.2 crashed 2 times during GT2. Rebooted and everything went fine, so I don't know which one's the fluke, but when it crashed it had momentarily boosted to 2648MHz, which didn't happen after reboot. Anyway, 21.8.2 gave me a graphics score of 22180, so just a tad lower but within margin of error. CPU scores still suck, hotspot temps still suck.
> 
> EDIT 2 : Well, 21.8.2 seems to crash more easily than 21.8.1, but that may be completely random here. Still, when it crashes, it does so in GT2 and it overboosts to the limit I've set. Dunno why it does that. Haven't managed to get a complete run yet with the PL cranked to 375W (that's with my case side panel open and cold air being blasted on my system, god bless air conditioning). TL;DR my card still sucks.


The crashing of GT2 in TimeSpy due to boosting near the max GPU clock setting is not just you; I also experience this same issue a high percentage of the time GT2 crashes. So, IMHO, this is a driver issue with the boosting algorithm.


----------



## LtMatt

jfrob75 said:


> The crashing of GT2 in TimeSpy due to boosting near the max GPU clock setting is not just you; I also experience this same issue a high percentage of the time GT2 crashes. So, IMHO, this is a driver issue with the boosting algorithm.


Good work @tolis626, this is why you were assigned testing duties. 

I also appear to see the same behaviour, crashing in GT2 same as you guys. I could probably improve my scores a little bit if that didn't happen.


----------



## CantingSoup

So I'm not the only one that’s experienced the crashing.


----------



## J7SC

LtMatt said:


> Good work @tolis626, this is why you were assigned testing duties.
> 
> I also appear to see the same behaviour, crashing in GT2 same as you guys. I could probably improve my scores a little bit if that didn't happen.


...that is a known issue with GT2 (so 3DM code?)...NVidia 3090 and 2080 Ti do the same. More often than not, running GT2 by itself at the same settings will work. GT2 also draws the most GPU power (max limit) in TS/X with both my AMD and NVidia setups - so PL shoots up rapidly after the 'pause' following GT1


----------



## tolis626

jfrob75 said:


> The crashing of GT2 in TimeSpy due to boosting to near max gpu clock setting is just not you experiencing this I also experience this same issue a high percentage of the time GT2 crashes. So, IMHO this is a driver issue with the boosting algorithm.


Yeah, looks like it. What seems to be wrong is that it boosts the clock to the limit of the set range without scaling voltage accordingly. I have HWiNFO64 graphing all of these parameters, and there's a mismatch of clock and voltage most of the time, where the core will boost higher while voltage either drops a bit or stays the same. Undervolting in the driver seems to help a bit with this, but then it leads to instability in my case. Any lower than 1.125V and I'll get a crash guaranteed. Strange.


LtMatt said:


> Good work @tolis626, this is why you were assigned testing duties.
> 
> I also appear to see the same behaviour, crashing in GT2 same as you guys. I could probably improve my scores a little bit if that didn't happen.


Yeah, bummer, isn't it? I could be gaming at 2650MHz probably if it wasn't for that overboost thing. Maybe even higher. But no, some tosser at AMD decided that I won't be able to. And that's after limiting the 6900XT artificially for no reason.

EDIT : I just tried some CoD Modern Warfare multiplayer (not Warzone) at 2650MHz and I got some interesting results. First off, the core clock is almost locked at 2600-2620MHz. Secondly, the core voltage and SoC voltage are almost constant at about 1.13V and 1.115V respectively, as is the power consumption at 260-280W. Performance is stupidly high at like 250FPS, give or take 50, but it honestly surprises me how stable it is, unlike Warzone and Timespy. With Warzone, clocks are all over the place, power consumption is about the same as multiplayer, but as with Timespy, voltage is lower at under 1.1V for the core mostly and like 1.075V for the SoC. That could explain the discrepancies I'm seeing in stability between CoD Multiplayer and Warzone. But why is that? Anyone knows?

EDIT 2 : Nah, scratch that last one. I was playing Warzone with the min clock set to 500MHz and multiplayer with my benchmarking settings (min = max-100). Playing Warzone with the same options fixes that. It still crashed though at 2650MHz due to it overboosting. Oh well, 2625MHz it is.


----------



## EastCoast

*tolis626*
COD MW2019 is coded to work very well with Ryzen/RDNA 1/2. 
The real problem you are facing is that COD Cold War was coded to work with Nvidia. With the updates to Warzone, that code was offset for Nvidia as well, from Cold War. If you get a chance try Cold War. At a guess you should see the same instability as you did in Warzone.


----------



## tolis626

EastCoast said:


> *tolis626*
> COD MW2019 is coded to work very well with Ryzen/RDNA 1/2.
> The real problem you are facing is that COD Cold War was coded to work with Nvidia. With the updates to Warzone, that code was offset for Nvidia as well, from Cold War. If you get a chance try Cold War. At a guess you should see the same instability as you did in Warzone.


As if I needed more reasons to hate Cold War. 

Strangely, though, I don't see the same behavior. Warzone is a mess of its own. Can't wait for Battlefield 2042 here, that should be good. I hope. Unless NVidia have their way again.


----------



## EastCoast

tolis626 said:


> As if I needed more reasons to hate Cold War.
> 
> Strangely, though, I don't see the same behavior. Warzone is a mess of its own. Can't wait for Battlefield 2042 here, that should be good. I hope. Unless NVidia have their way again.


I see, good to hear then. Did you lower the cl of your 3600mhz ram from CL16 to CL14-14-14-36? You should have a ram profile option in the bios for 3600 CL14.


----------



## LtMatt

tolis626 said:


> As if I needed more reasons to hate Cold War.
> 
> Strangely, though, I don't see the same behavior. Warzone is a mess of its own. Can't wait for Battlefield 2042 here, that should be good. I hope. Unless NVidia have their way again.


It's been a few months since I last played COD Cold War, but when I last checked, my 6900 XT Merc overclocked (see vids below, as I recorded using ReLive) was 30% faster than a 3090 FE at 4K at the same settings, and it ran very fast and perfectly smooth. The 3090 FE had to use DLSS to beat my 6900 XT in raw FPS on the same map. 
6900 XT Merc Overclocked | Call of Duty Black Ops Cold War | 2160P Ultra Quality - YouTube
6900 XT Merc Overclocked | Call of Duty Black Ops Cold War | 1440P Ultra Quality - YouTube


----------



## tolis626

EastCoast said:


> I see, good to hear then. Did you lower the cl of your 3600mhz ram from CL16 to CL14-14-14-36? You should have a ram profile option in the bios for 3600 CL14.


Nah, I haven't bothered yet. I went on vacation soon after I bought the 5900x, then when I returned I sold my 5700XT so my PC was out of commission, then when the 6900XT arrived I had to play with it. So I haven't tried messing with my RAM. I know I should, but I'm kinda itching to get 32GB of dual rank B-dies.


----------



## jonRock1992

The new driver is less stable and scores worse in Timespy for me. I'll be sticking with 21.8.1 for now.


----------



## EastCoast

LtMatt said:


> It's been a few months since I last played COD Cold War, but when i last checked my 6900 XT Merc (see vids below as i recorded using ReLive) overclocked was 30% faster than a 3090 FE at 4K at the same settings and it ran very fast and perfectly smooth. 3090 FE had to use DLSS to beat my 6900 XT in raw FPS on the same map.
> 6900 XT Merc Overclocked | Call of Duty Black Ops Cold War | 2160P Ultra Quality - YouTube
> 6900 XT Merc Overclocked | Call of Duty Black Ops Cold War | 1440P Ultra Quality - YouTube


I have to admit the 3090 doesn't come close to that. I wonder why I see a huge reduction in FPS in Cold War. When in Modern Warfare I can average around 200+ FPS at 1080p.
Do you have any more COD Cold War footage using the XTXH, this time at 4K?


----------



## LtMatt

EastCoast said:


> I have to admit the 3090 doesn't come close to that. I wonder why I see a huge reduction in FPS in Cold War. When in Modern Warfare I can average around 200+ FPS at 1080p.
> Do you have any more COD Cold War using the xtxh this time at 4k?


I had SAM enabled in all the videos also.

I have this one at 4K, same as above, but this time I had SMT off so I was running 8 cores 16 threads.
Call of Duty® Black Ops 12v12 - 2160P | Radeon Boost | 6900 XT Merc - YouTube

Also have this 6700 XT clip, but I since sold that GPU.
Call of Duty® Black Ops Nuketown 6V6 - 1440P | 6700 XT Merc - YouTube

Are you running out of video memory? This game drinks it, though I believe most of it is just cached. Try setting the vram limit to 70% in the video menu options.


----------



## kairi_zeroblade

@LtMatt 
Nice gameplays, I never knew the 6700XT was that capable at Ultra detail..it's too underrated IMHO..and currently it's well priced where I'm at, since the majority of users would favor the 3060 Ti..lol, for RT..


----------



## LtMatt

kairi_zeroblade said:


> @LtMatt
> Nice game plays, I never knew the 6700XT was that very capable at Ultra detail..its too underrated IMHO..and currently its well priced from where I am at since majority of users would favor the 3060TI..lol for RT..


Yes it's pretty fast, basically half a 6900 XT.


----------



## ptt1982

Any new findings from the latest driver?

I'm sticking to 21.8.1, as we are not seeing performance improvements together with increased stability. I think we got a nice 7% increase in overall performance over the last 90 days, so it is better to stick with the current drivers until they go beyond that. Happy to have a GPU that runs cooler and faster than a 3090, and costs 70% of the price (including a frikking full custom loop).

Edited this a couple of times, had a couple of very nice IPAs in Tokyo after a long week of hardcore work and mistyped a bit ha! Happy Friday and merry new year everyone!


----------



## kairi_zeroblade

ptt1982 said:


> Any new findings from the latest driver?
> 
> I'm sticking to the 21.8.1 as we are not seeing performance improvements together with increased stability. I think we got a nice 7% increase in overall performance the last 90 days, so it is better stick with the current drivers until they go beyond that. Happy to have a GPU that runs cooler and faster than 3090, and costs 70% of the price (including a frikking full custom loop).
> 
> Edited this a couple of times, had a couple of very nice IPAs in Tokyo after a long week of hardcore work and mistyped a bit ha! Happy Friday and merry new year everyone!


Is 21.8.1 stable (I know it's beta, but I mean for daily normal use)? Does it have any strange quirks/issues? I haven't updated since 21.7.2, since it was personally the best beta driver I have tried, with overall huge gains..

Based on user feedback from across other boards, it's pretty much a mixed bag of results..


----------



## lestatdk

ptt1982 said:


> Any new findings from the latest driver?
> 
> I'm sticking to the 21.8.1 as we are not seeing performance improvements together with increased stability. I think we got a nice 7% increase in overall performance the last 90 days, so it is better stick with the current drivers until they go beyond that. Happy to have a GPU that runs cooler and faster than 3090, and costs 70% of the price (including a frikking full custom loop).
> 
> Edited this a couple of times, had a couple of very nice IPAs in Tokyo after a long week of hardcore work and mistyped a bit ha! Happy Friday and merry new year everyone!


I saw no gain with the latest driver. Perhaps a bit more unstable, so I decided to roll back. Staying on 21.7.2 for now


----------



## lestatdk

Anyone got some input on this ? Did some TS runs. Thought it was due to driver update that I couldn't pass, but even after rolling back it still persisted.

Min freq = 500 =>> GT2 crash
Min freq = max - 100 MHz =>> GT2 crash
Min freq = max - 120 MHz =>> pass ( did several runs)..

I have always used either 500 MHz which can result in a higher score but very unstable, or the max -100 which would be stable. Never thought about a middle ground.

It still drops below the minimum, so what is the actual purpose of that min freq?


----------



## gtz

kairi_zeroblade said:


> Is the 21.8.1 stable (I know its beta, but i mean for daily normal use) does it have any strange quirks/issues?? I haven't updated since 21.7.2 since it was personally the best beta driver I have tried with overall huge gains..
> 
> Based on user feeds from across other boards, its pretty much mixed bag of results..


I like it because it gives me the best performance stock. Overclocking is buggy on the latest driver; however, I do love the out-of-the-box performance boost. With 21.7.2 at stock I get 19000ish in Timespy; with 21.8.2 I get near 20000. But with 21.7.2 I can manually tweak the settings and get around 21300ish, while with 21.8.2 all I need to do is max out the power limit, adjust the fan curve and do my 2100 tight-timings OC and I get 21000. With the newest driver I have not even run MPT. I am also running a 6800XT.


----------



## LtMatt

ptt1982 said:


> Any new findings from the latest driver?
> 
> I'm sticking to the 21.8.1 as we are not seeing performance improvements together with increased stability. I think we got a nice 7% increase in overall performance the last 90 days, so it is better stick with the current drivers until they go beyond that. Happy to have a GPU that runs cooler and faster than 3090, and costs 70% of the price (including a frikking full custom loop).
> 
> Edited this a couple of times, had a couple of very nice IPAs in Tokyo after a long week of hardcore work and mistyped a bit ha! Happy Friday and merry new year everyone!


Cheers, will join you with an IPA.


----------



## kairi_zeroblade

lestatdk said:


> I have always used either 500 MHz which can result in a higher score but very unstable, or the max -100 which would be stable. Never thought about a middle ground.


in my case I set the min freq to 1000MHz..still gets me there..(6800XT here, sorry for the intrusion, 6900XT guys)..

whenever I do either -100, -200, -300, up to a flat 2000MHz from max, I end up with worse GPU score results..(but better CPU scores)

I scored 20 709 in Time Spy
AMD Ryzen 9 5900X, AMD Radeon RX 6800 XT x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

LtMatt said:


> Cheers, will join you with an IPA.


The sun is still up, seems too early for that..How I miss a drink..it's been ages now since this Covid thing..(in some areas here there is a ban on liquor)


----------



## lestatdk

kairi_zeroblade said:


> in my case I set the min freq to 1000MHz..still gets me there..(6800XT here, sorry for the intrusion, 6900XT guys)..
> 
> whenever I do either -100, -200, -300, up to a flat 2000MHz from max, I end up with worse GPU score results..(but better CPU scores)
> 
> I scored 20 709 in Time Spy
> AMD Ryzen 9 5900X, AMD Radeon RX 6800 XT x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> The sun is still up, seems too early for that..How I miss a drink..it's been ages now since this Covid thing..(in some areas here there is a ban on liquor)


Thanks for the input. I will try that.
Doesn't matter if it's a 6800XT, it's very close to the 6900XT anyway


----------



## LtMatt

kairi_zeroblade said:


> in my case I set the min freq to 1000MHz..still gets me there..(6800XT here, sorry for the intrusion, 6900XT guys)..
> 
> whenever I do either -100, -200, -300, up to a flat 2000MHz from max, I end up with worse GPU score results..(but better CPU scores)
> 
> I scored 20 709 in Time Spy
> AMD Ryzen 9 5900X, AMD Radeon RX 6800 XT x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> The sun is still up, seems too early for that..How I miss a drink..it's been ages now since this Covid thing..(in some areas here there is a ban on liquor)


A ban on alcohol? Sounds like hell my man.


----------



## kairi_zeroblade

LtMatt said:


> A ban on alcohol? Sounds like hell my man.


It is HELL..hahaha..been drinking canned juices for the past year or so..limited movement outside the house..I can drive to a few places only..and worse, Curfews..lol..


----------



## Illuminado

Hey guys,

Stumbled upon this thread searching around/researching for ways to optimise my PC in the very near future. I have an XFX 319 6900XT (Ultra, I believe). You probably get these kinds of posts a fair bit, but what kind of advice can you give towards really making my setup sing? I feel like I've never really managed to achieve this, at least in regards to good components vs expected perf, and it constantly irks me as I'm a bit of a stickler for that sort of thing -_-.

General plan is to embark on a custom loop to lower temps as after owning this card for 8 months it really does have a habit of getting quite toasty when pushed. I've picked up a compatible Alphacool block (I hope) and will be cooling the 5900x I have with a monoblock in all likelihood in the hope of achieving some much needed headroom for both. Any good resources you can link me as a starting point for the former? Searching seems to find less in regards to targeted discussion for this card which is why this thread/community seems great at first glance!

Thanks for reading and sorry for the chump post. Relatively new to this stuff and want to try doing it properly (for once..!).

Cheers.


----------



## kairi_zeroblade

Illuminado said:


> but what kind of advice can you give towards really making my setup sing? I feel like I've never really managed to achieve this, at least in regards to good components vs expected perf and it constantly irks as I'm a bit of a stickler for that sorta thing -_-.


These cards are sensitive to temperature..so keeping them cool will boost performance..as one user suggested, just maxing out the power limit slider in the Radeon Software would already yield significant gains..

Your plan of doing a custom loop is the best means to lower your temps..also the best investment towards your overall system's cooling capacity..when I went to a custom loop it was a night and day difference in cooling capability and capacity. The hobby itself is pricey, but the benefits are long term, I should say..


----------



## Nicklas0912

uh, I don't like all the bad talk about the
*PowerColor Red Devil RX 6900 XT Ultimate*

I'm about to buy that card 

That should be XTXH?


----------



## The EX1

Nicklas0912 said:


> uh, I dont like all the bad talk about the
> *PowerColor Red Devil RX 6900 XT Ultimate*
> 
> Im about to buy that card
> 
> That should be XTXH?


Yes the Ultimate is XTXH. Cooler mounts seem to be hit or miss with them in this thread. XTXH is going to be a bit harder to push on air due to their lower thermal limits (95C hotspot vs 110C hotspot in regular XTX).


----------



## lestatdk

New personal best in Timespy.

I had some stability issues in TS, and was actually having to lower my max frequency a bit.

For debugging purposes I decided to roll back my MPT values from 400/420 (PL/TDC) to 380/400. And now it was stable again and I could push the frequency up.

Have now increased my GPU score by 200 pts. What also gave me a slightly better overall score + GPU score was to underclock my memory from 4000 to 3733 while tightening the timings from 16-19-19-19-39 to 14-14-14-14-34. This has lowered my latency by 15% or so at a slightly lower bandwidth.

Since there seems to be a 1900 MHz fclk bug around and my motherboard appears to have it, I'm not able to try 3800 MHz with those tight timings. Maybe there'll be a BIOS update soon (I hope) 


Running FCLK, MCLK and UCLK in sync appears to be the best; unfortunately I get WHEA errors with FCLK of 1933 and above.

TLDR : lowering MPT values = increase in performance, lowering DRAM speed with tighter timings = increase in performance
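For anyone curious how slower-but-tighter wins on paper: true CAS latency in nanoseconds is CL divided by the real memory clock (half the effective MT/s). A quick sketch of that arithmetic (the measured ~15% gain will also include FCLK and secondary-timing effects beyond CAS alone):

```python
def cas_latency_ns(cl: int, mt_per_s: float) -> float:
    # DDR transfers twice per clock, so the real clock (MHz) is half the MT/s;
    # true CAS latency in ns = CL cycles / clock in MHz * 1000
    clock_mhz = mt_per_s / 2
    return cl / clock_mhz * 1000

print(cas_latency_ns(16, 4000))  # 8.0 ns at 4000 CL16
print(cas_latency_ns(14, 3733))  # ~7.5 ns at 3733 CL14
```

So CAS alone accounts for only part of the improvement; the synced FCLK/MCLK/UCLK and tighter secondaries do the rest.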


----------



## lestatdk

Nicklas0912 said:


> uh, I dont like all the bad talk about the
> *PowerColor Red Devil RX 6900 XT Ultimate*
> 
> Im about to buy that card
> 
> That should be XTXH?


Might be better to go for the Asus one you posted some days ago. Those RD cards are a bit hit and miss it seems


----------



## lawson67

The EX1 said:


> Yes the Ultimate is XTXH. Cooler mounts seem to be hit or miss with them in this thread. XTXH is going to be a bit harder to push on air due to their lower thermal limits (95C hotspot vs 110C hotspot in regular XTX).


The PowerColor Ultimate's hotspot thermal limit is 110C, the same as a regular XTX, which makes it most likely the best XTXH you can get if you plan to keep it on air.


----------



## lawson67

Nicklas0912 said:


> uh, I dont like all the bad talk about the
> *PowerColor Red Devil RX 6900 XT Ultimate*
> 
> Im about to buy that card
> 
> That should be XTXH?


It's a great card, get one. Despite what some have said, it has some of the best power phases you can get and can take way more power than you're likely to ever throw at it. Don't be put off by what happened to Buildzoid's card; that card was messed around with so much it could have been many things that took it out!


----------



## kairi_zeroblade

lestatdk said:


> Since there seems to be a 1900 MHz fclk bug around and my motherboard appears to have it, I'm not able to try 3800 MHz with those tight timings. Maybe there'll be a BIOS update soon ( I hope)


The IFCLK hole isn't because of the motherboard, it's your processor; it can't handle that much IFCLK. Been there, done that. I went through two 5800Xs and three 5900Xs before I got one that could boot at 1900 MHz and above.

Simply Silicon Lottery..


----------



## lawson67

kairi_zeroblade said:


> The IFCLK Hole isn't because of the motherboard, its your processor, it can't handle that much IFCLK, been there, done that..I had been through 2 5800x's and 3 5900x's before I got something that can boot with 1900mhz and above..
> 
> Simply Silicon Lottery..






Not true. I run 2000 MHz FCLK with modded SOC voltage without WHEA errors, but simply cannot run 1900 MHz FCLK; if I try, I have to reset the BIOS to even POST and boot back into Windows.


----------



## dagget3450

So I'm finally installing my waterblock and about to put liquid metal on the GPU, after using nail polish around the die. Has anyone done this yet? If so, I'm curious about your results. Either way I hope to have this done this weekend.


----------



## J7SC

lestatdk said:


> New personal best in Timespy.
> 
> I had some stability issues in TS, and was actually having to lower my max frequency a bit.
> 
> For debugging purposes I decided to roll back my MPT values from 400/420 (PL/TDC) to 380/400. And now it was stable again and I could push the frequency up.
> 
> Have now increased my GPU score by 200 pts. What also gave me a slight better overall score + GPU score was to underclock my memory from 4000 to 3733, but at the same time tightening timings from 16-19-19-19-39 to 14-14-14-14-34 . This has lowered my latency by 15% or so at a slightly lower bandwidth.
> 
> Since there seems to be a 1900 MHz fclk bug around and my motherboard appears to have it, I'm not able to try 3800 MHz with those tight timings. Maybe there'll be a BIOS update soon ( I hope)
> 
> 
> Running FCLK, MCLK and UCLK in sync appears to be the best, unfortunately I get WHEA errors with FCLK of 1933 and above
> 
> TLDR : lowering MPT values = increase in performance, lowering DRAM speed with tighter timings = increase in performance
> 


_"Since there seems to be a 1900 MHz fclk bug around and my motherboard appears to have it..."_ ?? haven't heard about that / don't seem to have that bug, but I'm running Asus CH8 Dark, not MSI.

Generally agree though that near the final 'top speed' of a RAM setup, it often makes more sense to opt for tighter timings rather than outright max MHz/bandwidth. I can run IF at 2000 and RAM at 4000 'native' but the trade-off isn't worth it. Also, I under-volt the RAM by a bit so 1900 / 3800 it is


----------



## lestatdk

J7SC said:


> _"Since there seems to be a 1900 MHz fclk bug around and my motherboard appears to have it..."_ ?? haven't heard about that / don't seem to have that bug, but I'm running Asus CH8 Dark, not MSI.
> 
> Generally agree though that near the final 'top speed' of a RAM setup, it often makes more sense to opt for tighter timings rather than outright max MHz/bandwidth. I can run IF at 2000 and RAM at 4000 'native' but the trade-off isn't worth it. Also, I under-volt the RAM by a bit so 1900 / 3800 it is
> 


This would also be my choice, but I can't even boot at 1900 MHz. 1933 is fine; even 2000 boots, but above 1900 I get WHEA errors.


----------



## lawson67

dagget3450 said:


> So I am finally installing my waterblock and about to put liquid metal on the gpu after using nail polish on around the die. Has anyone done this yet? If so was curious of your results. Either way I hope to have this done this weekend.


I've applied LM while using nail polish around the semiconductors on both my PowerColor Red Devil RX 6800 XT, which dropped 20C on the air cooler, and my PowerColor RX 6900 XT Ultimate Edition, which is embedded in an Alphacool water block and now has its hotspot running in the 60s to low 70s at 420W. It's well worth doing and I've had no problems at all.


----------



## J7SC

lestatdk said:


> This would also be my choice. But I can't even boot at 1900 MHz . 1933 is fine. Even 2000 is fine but above 1900 I get WHEA errors


I had a somewhat similar issue on an otherwise great MSI X399 Creator TR board with 3466, 3533 and 3600 (3533 being the troubled one)


----------



## EastCoast

Ok, I've been reading some of these posts and want to know: what's the best 6900 XT out there right now? I can do either air or water cooling.


----------



## dagget3450

lawson67 said:


> Ive coated LM whist using nail polish around the semiconductors on both my Powercolor Red Devil Rx 6800 XT which droped 20c on the air cooler and my Powercolor RX 6900 XT Ultimate Edition which is imbedded in a Aphacool water block and results in the Hotspot temp running in the 60's to low 70's at 420w on the RX 6900 XT on the water block, its well worth doing and I've had no problems at all


Awesome, thank you! On a side note, I have the thermal pads that came with my waterblock, an EK full copper Quantum Vector. Did you use special pads or just the ones provided with the block? I don't want to have to tear down if they're problematic.

Any issue with VRMs like on previous AMD gpus with heat?


----------



## kairi_zeroblade

lawson67 said:


> Not true i run 2000mhz FLCK with modded SOC voltage without WHEA errors but simply can not run 1900mhz FLCK and if i try i have to reset Bios to even post and be able to boot back into windows.


Believe what you want, but I already tested it out. If you can claim "very significant" gains at a higher IF clock with higher voltages and without WHEA errors, then you have proved me wrong. Ryzen isn't anywhere close to what an Intel IMC can do in terms of memory overclocking and scaling.

AMD only guaranteed 1600/3200 MHz out of the box. Most bins/batches will do 3600 modestly, but beyond that is almost a feat for a good silver, gold or platinum chip (factor in the amount of voltage needed to be stable as well), like what a certain member sells here for a ridiculous price via eBay through PayPal without refunds.

_Edit:typo_


----------



## lawson67

dagget3450 said:


> Awesome thank you! So on a side note i have the thermal pads that came with my waterblock EK full copper quamtum vector. Did you use special pads or just ones provided with block? I dont want to have to tear down if they are problematic.
> 
> Any issue with VRMs like on previous AMD gpus with heat?


I just used the thermal pads that came with the Alphacool block, and no, I haven't had any problems with overheating VRMs on either the RX 6800 XT or the RX 6900 XT, in stock configuration or after applying LM. The VRMs on these cards don't seem to suffer from overheating, at least on both of mine. PowerColor uses 19 power stages in total, 14+2 for the GPU; more power stages spreads the load across more phases, which helps keep board temperatures in check.


----------



## lawson67

kairi_zeroblade said:


> Believe what you want but I already tested it out..if you can claim "very significant" gains on higher IFclock with higher voltages and without WHEA errors then you have proved me wrong..Ryzen isn't pretty close to what an Intel IMC can do in terms of memory overclocking and scaling..
> 
> AMD only said/guaranteed 1600/3200mhz out of the box..most bins/batches would do 3600 modestly but beyond that is almost a feat for a good silver, gold and platinum chip(factor in the amount of voltage one would need to be stable as well)..like what a certain member sells here for a ridiculous price via ebay thru paypal w/o refunds..
> 
> _Edit:typo_


I don't have to prove anything to you. I stated a fact, and whether you believe me or not is irrelevant to me. Furthermore, I never said I got "very significant gains" from running 2000 MHz FCLK, so I don't have to prove that to you either; I simply said I run 2000 MHz FCLK yet cannot run 1900 MHz FCLK, which you stated would be impossible due to the silicon lottery of the chip. Your quote:

"The IFCLK Hole isn't because of the motherboard, its your processor, it can't handle that much IFCLK, been there, done that..I had been through 2 5800x's and 3 5900x's before I got something that can boot with 1900mhz and above.."

Well, your statement of fact clearly falls flat on its face, as I cannot boot 1900 MHz FCLK yet I can boot 2000 MHz FCLK, the same as @lestatdk and many others on the internet.


----------



## Thanh Nguyen

Hey guys, how can I make Radeon Software load my OC settings after a reboot? It just rolls back to the default settings after I restart.


----------



## kairi_zeroblade

lawson67 said:


> Well your statement of fact clearly Falls flat on its face as i can not boot 1900mhz Flck yet i can boot 2000mhz FLCK the same as @lestatdk and many others on the internet




Where's "flat"?? I am using these settings right now without issues. Too unfortunate that yours doesn't boot at 1900; mine doesn't boot at 2133 MHz at all, so that must be my limit/hole (a higher limit before it stops booting)?

Anyway, I don't want to derail this thread with my ramblings, as I am all good with my setup. My RAM might not be B-die (there's nowhere to get it where I am for a reasonable price with reasonable timings), but I'm all good on both memory frequency and IF clock.


----------



## lawson67

kairi_zeroblade said:


> 
> 
> where's FLAT?? I am using these right now without issues..too unfortunate, yours doesn't boot on 1900..mine doesn't boot on 2133mhz at all..so that must be my limit/hole(higher limit before it doesn't boot)??
> 
> anyways, I don't want to derail this thread, with my ramblings..as I am all good with my setup..my rams might not be a B-Die (since its nowhere to get from where I am at for a reasonable price and even reasonable specs on timings) but I am all good up there or down there (memory Frequency and IF Clock)


I was simply responding to your original claim, where you clearly stated that if you cannot boot with an FCLK of 1900 MHz, it's because you've hit the silicon lottery with your CPU and simply won't be able to post at any higher FCLK, as you had done all the testing. Well, myself and @lestatdk have both proved that statement wrong: we both can't post at 1900 MHz yet can both post successfully at an IF of 2000 MHz. So it's clearly not a silicon lottery problem with our CPUs; logic would dictate it's most likely a problem with our BIOS.


----------



## kairi_zeroblade

lawson67 said:


> I was simply responding to your original claim where you clearly implied that if you can not boot with a FLCK of 1900mhz then this is because you have hit the silicon lottery of your CPU and you simply wont be able to boot at a higher FLCK frequency of 1900mhz as you had done all the testing, well myself and @lestatdk have both proved your statement to be wrong


It has always been a silicon lottery..

There are lots of threads here on OCN and elsewhere (even Reddit) describing the IF clock hole. If you haven't got one, lucky you; if you have one, I can't say whether you're lucky or not. Some ignore the fact that beyond that hole you have only a slim chance of a stable IF clock overclock; the baseline is the number of WHEA errors you get. If you're lucky enough to be WHEA-free beyond the hole, that's good; otherwise there is a workaround to suppress those WHEAs, but you'll still suffer slowdowns. I personally think the hole is there due to an architectural design flaw; it could also mark the limit of how high the IF clock can go, but that's just me. I have used different boards as well, and the IF clock limit was the same across my tests.

I simply stated what I found from my own testing. Did I hit a nerve?? 😖

I don't want to derail the thread any further. We both have our observations on this topic, and I'd rather not complicate things here; it's not the proper place to do so.


----------



## lawson67

kairi_zeroblade said:


> It has always been a silicon lottery..
> 
> there are lots of threads here on OCN and elsewhere (even reddit) stating the IF Clock hole..if you haven't got one, lucky, else if you have one, Can't say if you're lucky or not, some tend to ignore the fact that beyond that hole you have a slim chance of getting a stable IF Clock overclock, the baseline is the amount of WHEA errors you get, now, if you are lucky to get WHEA free beyond that hole, then thats good, else there is a solution where you can ignore those WHEA's but still suffer slow downs..I simply (personally) think that, the Hole is there due to architectural design flaws..it could also mean the "limit" as to how high you can overclock the IF Clock..but that's just me..I have used different boards as well and IF Clock overclock/limit is the same across my tests..
> 
> I simply stated a fact from what I have done testing myself..did I hit a nerve?? 😖
> 
> I don't want to derail the thread/discussion any further, we both have our reservations/observations on this topic, I'd rather not complicate things on here..its not the proper place to do so..


You didn't hit a nerve at all. You made a statement of fact which could lead others to believe you are correct, yet you are simply wrong. Misinformation is not helpful to anyone!


----------



## weleh

The fCLK hole is a known thing, and it's hard to pin down where the limitation is: the CPU, the board, or the combination.

Either way, AMD guarantees 1800 fCLK this gen; above that it's overclocking, so users are expected to understand what's going on, and tweaking of SOC and VDDG voltages may be needed.

Truth is, most CPUs do 1900 fCLK; some do higher without WHEAs, but that's still rare. Most CPUs with a 1900 fCLK hole can do higher fCLK, which suggests some sort of incompatibility, maybe with the motherboard. The X570 chipset is a joke in terms of stability compared to B550 or even B450. Hell, the best board I had was a B450 on a beta BIOS for Zen 3.

This is a bit off topic, so we shouldn't spam the thread with RAM talk though.

Anyway... the max I've booted in sync was 4267, but it was SLOOWWWWWWW
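For anyone new to the thread, the "sync" being discussed is simple arithmetic: DDR does two transfers per memory clock, and 1:1:1 means FCLK = UCLK = MCLK. A tiny sketch (the function name is mine):

```python
# 1:1:1 sync rule on Zen 3: FCLK equals half the DDR transfer rate,
# since DDR makes two transfers per memory clock (MCLK = MT/s / 2).
def synced_fclk(ddr_mt_s: int) -> int:
    return ddr_mt_s // 2

print(synced_fclk(3800))  # 1900
print(synced_fclk(4267))  # 2133 (rounded down), the max sync mentioned above
```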


----------



## kairi_zeroblade

lawson67 said:


> statement of fact which could lead others to belive you are correct yet you are simply wrong


I never claimed my statement was the ultimate truth. I based it on what I have tested myself, as much as my wallet allowed, and on everything I could download and read to validate this IF clock hole and how the silicon lottery runs with the Ryzen 5000 series. Others have discussed it elsewhere (I said as much). Like the poster above me said, it's a mystery; nobody knows, since AMD hasn't released much detail about the Infinity Fabric. One can only do oneself a favour and test it out; that's what I have done, and I am just sharing my opinion. I never wanted anyone flocking around that simple idea. Somebody on these forums calls it a design flaw, someone says it's fixable, somebody claims it's caused by firmware, and the list goes on.

If you're so self-assured, then tell me: what is correct?


----------



## LtMatt

weleh said:


> Either way, AMD guarantees 1800 fCLK this gen


I can guarantee you they do not. Max supported is 3200/1600. Anything above that is considered overclocking, not guaranteed, and may or may not work.


----------



## kairi_zeroblade

LtMatt said:


> I can guarantee you they do not. Max supported is 3200/1600. Anything above that is considered overclocking, not guaranteed, and may or may not work.


Exactly what I was saying in my previous post. Rep'd. At least you weren't "intimidated" when I posted my inferior benchmarks, insinuating "he's posting something superior, I'mma bang him back to reality, he's so damn slow!!"

Why do some peeps here get sensitive about benchmark posts and start posting their e-peens wide open? Feel left out? Need attention? I wasn't claiming my statements were so godlike as to be ultimately true. Sheesh..

On a side note: has anybody tried Crossfire with the RX 6000 series? Only benchmarks would work, right? Few or no new games would actually utilize two cards?


----------



## ZealotKi11er

kairi_zeroblade said:


> what exactly I was saying on my previous post..rep'd, at least you weren't "intimidated" when I posted my inferior benchmarks and started insinuating "he's posting something superior, Imma bang him back to reality he's so damn slow!!"
> 
> Why some peeps here feel sensitive towards benchmark post and start posting their epeens wide open?? feel left out?? need attention?? I wasn't even claiming my statements to be in a godlike nature to be so ultimately true..sheesh..
> 
> on a side note: anybody tried Crossfire with the RX6000 series?? only benchmarks would work right?? few or no new games would technically utilize 2 cards??


There is no Crossfire support for RX6000, just DX12 mGPU.


----------



## J7SC

LtMatt said:


> I can guarantee you they do not. Max supported is 3200/1600. Anything above that is considered overclocking, not guaranteed, and may or may not work.


...perhaps it's superfluous to mention, but it also comes down to the specific memory itself (in addition to board quality and trace layout, CPU MC etc). I only use GSkill (Samsung-B) GTZR on all my AMD and Intel mobos, including the two X570 in 4 stick / 32 GB configuration. This doesn't mean that other brands or configs don't work well, but that particular RAM is a known quantity to me as 'X570' are the first non-HEDT system I built in a while (systems are dual-use) and I used said RAM on various Threadrippers and Intel HEDT in four and eight stick combos. btw, X570 has just enough lanes for my use-cases as long as I don't employ SLI/Crossfire ...

On both Asus CH8 X570 setups, neither the 3950X nor the 5950X had any trouble booting at 1900/3800 on the first try, and without WHEAs even after applying 'tight' settings. Playing around with 2000/4000 on the 5950X (with 'native' DDR4-4000 CL15 RAM) showed there's more headroom still, but at the expense of CL.


----------



## St0RM53

Today I borrowed two more reference 6900 XTs from friends, in addition to my own, and spent half a day testing their overclocking performance.

The results well to put it simply...DON'T MAKE ANY SENSE.

I've prepared a spreadsheet with all my tests and details, which you can download here:


6900xt_bench.xlsx (shared via Dropbox: www.dropbox.com)

We have three cards, bought three months apart directly from the same AMD shop, so we can safely assume their manufacturing dates differ by about that much, given how quickly they sell.

The results are really surprising and I don't know how to explain them.

The newest of the three cards (#3) is an overclocking and undervolting monster, but its scores are comparatively average.

Card #1 is the worst in terms of overclocking and undervolting, and it has a 5-7°C higher hotspot temperature, but it gets some pretty good scores.

Card #2 is in between the other two in terms of overclocking/undervolting, but it had the highest scores... when it feels like it. For example, two Fire Strike runs almost back to back differed by 1000 points!!! And it was the second run that scored higher, so it's not temp related!

I've compared detailed benchmarks between cards #1 and #3, and in most tests #3 has higher average clocks but scores less!

I also checked the CPU threads during the Fire Strike benchmark and no thread peaks at 100%.

So it's unknown what the heck is causing such discrepancies between tests.

Radeon 6000 series overclocking has always been flaky at best, because AMD doesn't unlock the vBIOS and Wattman does whatever it wants.

Another example: if you run a miner you might only get 55 MH/s, but after you soft-restart the driver and miner with Ctrl+Win+Shift+B, without any other change, it will hit the normal 63-64 MH/s.

I don't know how they managed to make the overclocking experience so bad. Navi 10 (5700 XT) always does exactly what you put in the settings, with zero issues. The 6000 series is all over the place.

I'm here to listen to your opinions. If you're quick, you can recommend some more tests for me to perform.


----------



## J7SC

St0RM53 said:


> Today i've borrowed 2 more Reference 6900XT's from my friends in addition of my own and spend half a day testing their overclocking performance.
> 
> The results well to put it simply...DON'T MAKE ANY SENSE.
> 
> I've prepared a spreadsheet with all my tests and detail you can download from here:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 6900xt_bench.xlsx
> 
> 
> Shared with Dropbox
> 
> 
> 
> 
> www.dropbox.com
> 
> 
> 
> 
> 
> We have 3 cards, bought 3 months apart directly from the same AMD shop, so we can safely assume that is about the difference in their manufacturing time due to how quickly they sell.
> 
> The results are really surprising and i don't know how to explain them.
> 
> We have the newest of the 3 cards (#3) to be an overclock and undervolt monster, but it scores comparatively average.
> 
> Card #1 is the worst card in terms of overclocking and undervolting, and it has about +5-7oC higher hotspot temperature, but it gets some pretty good scores.
> 
> Card #2 is in-between the above 2 cards in terms of overclocking/undervolting but it had the highest scores...when it feels it likes it. Example we have a Firestrike test that was almost back to back and there is a difference of 1000 points!!! And it was the 2nd test that scored higher, so it's not temp related!
> 
> I've compared detailed benchmarks between cards #1 and #3, and for most of the tests #3 has higher average clocks but scores less!
> 
> I also checked the cpu threads under the firestrike benchmark and has no thread that peaks 100%.
> 
> Thus it is unknown what the heck is causing such discrepancies between the tests.
> 
> Radeon 6000 series overclocking has always been flakey at best because AMD doesn't unlock the vbios and also Wattman does whatever it wants to.
> 
> Another example is if you run a miner you might only get 55mh/s but after you soft-restart the driver and miner with ctrl+win+shift+b, without any other change, it will get the normal 63-64mh/s rate.
> 
> I don't know how they manage to make the overclocking experience so bad. Navi 10 (5700xt) is always doing whatever you put in the settings always with 0 issues. 6000 series is all over the place.
> 
> Here to listen to your opinions. If you are fast enough you may recommend me some more tests to perform.


...this doesn't surprise me at all....

1.) I've done a fair amount of HWBot with multiple sets of Quad-SLI / QuadFire in the past... four brand new cards of the same model, with close or even sequential serial numbers, would all behave differently. In those days boost was less sophisticated, and you could still read out the ASIC quality value... I have seen a range of 68% to 94% within one set of four cards. This current gen of AMD and NVidia is the first time I'm not running multiple GPUs. Vendors list a game boost clock that is a 'minimum'... my XTX 3x8pin runs about 450 to 500 MHz faster than its listed game boost max, depending on app and bench.

2.) Even w/ really expensive GPUs, there can be / often are differences in 'mounting quality' of pads, paste and even cooler that can affect performance easily enough via '3.) '

3.) The latest crop of cards by AMD and NVidia have even more complex boost algorithms ...essentially, one is trying to figure out one equation with at least 3 (or more) unknowns...only trial and error really works, as does good record keeping (kudos re. your spreadsheet btw). With modern boost algorithms, a higher-max clocking card can push other algorithm limits earlier and thus show lower scores.

IMO, short of adding an Elmor EVC2SX or other hard mods, the best users can do is check the mount (and maybe upgrade the thermal pads and paste), and water-cool if they plan to push MPT PL well past stock values. I went from air to water cooling on mine and it really made a difference once PL was upped past 360W: keeping the hotspot in check benefited overall boost.
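Since trial and error plus good record keeping is the practical advice here, a minimal logging sketch (field names are my own invention, not from any tool) to keep MPT runs comparable:

```python
# Minimal benchmark record-keeping sketch: append each run to a CSV so
# boost-limit trial and error stays comparable across sessions.
import csv
import datetime
import os

LOG = "bench_log.csv"
FIELDS = ["date", "card", "mpt_pl_w", "avg_clock_mhz", "hotspot_c", "score"]

def log_run(**run):
    new_file = not os.path.exists(LOG)
    with open(LOG, "a", newline="") as f:
        w = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            w.writeheader()  # header only on first write
        w.writerow({"date": datetime.date.today().isoformat(), **run})

# Example row with made-up numbers:
log_run(card="#3", mpt_pl_w=380, avg_clock_mhz=2650, hotspot_c=88, score=22100)
```

Sorting that CSV by score against hotspot and PL makes the "higher clock, lower score" pattern easy to spot.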


----------



## thomasck

J7SC said:


> _"Since there seems to be a 1900 MHz fclk bug around and my motherboard appears to have it..."_ ?? haven't heard about that / don't seem to have that bug, but I'm running Asus CH8 Dark, not MSI.
> 
> Generally agree though that near the final 'top speed' of a RAM setup, it often makes more sense to opt for tighter timings rather than outright max MHz/bandwidth. I can run IF at 2000 and RAM at 4000 'native' but the trade-off isn't worth it. Also, I under-volt the RAM by a bit so 1900 / 3800 it is
> 
> View attachment 2522608


What's the ram voltage? Thanks.


----------



## J7SC

thomasck said:


> What's the ram voltage? Thanks.


...varies by exact GTZR spec (have two in use, nominal 3866 and nominal 4000) but between 1.35v and 1.47v (the latter undervolted from the 1.5v from the factory)


----------



## Blameless

St0RM53 said:


> Another example is if you run a miner you might only get 55mh/s but after you soft-restart the driver and miner with ctrl+win+shift+b, without any other change, it will get the normal 63-64mh/s rate.


This usually means the memory wasn't completely stable.


----------



## St0RM53

J7SC said:


> ...this doesn't surprise me at all....
> 
> 1.) I've done a fair amount of HWBot with multiple sets of Quad-Sli - QuadFire in the past...four brand new cards of the same model with either close or sequential serial numbers would all stray differently...in those days, boost was less sophisticated, and you could also still read out ASIC quality value... I have seen a range of 68% to 94% all in one set of four cards. This current gen for AMD and NVidia is the first time that I'm not running multiple GPUs. Vendors will list a game boost clock that is 'minimum'...my xtx 3x8pin is about 450MHz to 500Mhz faster than what is listed as its game boost max, depending on app and bench.
> 
> 2.) Even w/ really expensive GPUs, there can be / often are differences in 'mounting quality' of pads, paste and even cooler that can affect performance easily enough via '3.) '
> 
> 3.) The latest crop of cards by AMD and NVidia have even more complex boost algorithms ...essentially, one is trying to figure out one equation with at least 3 (or more) unknowns...only trial and error really works, as does good record keeping (kudos re. your spreadsheet btw). With modern boost algorithms, a higher-max clocking card can push other algorithm limits earlier and thus show lower scores.
> 
> IMO, short of adding an Elmor EVC2SX / other hard mods, the best users can do is check the mount (and may be upgrade) thermal pads, paste and also w-cool if they plan to push MPT PL well past stock values. I have gone from air to w-cooling on mine and it really made a difference once PL was upped past 360W by keeping Hotspot in check, and thus benefited overall boost.


Thanks a lot for the reply. It's a shame AMD decided not to give us full control over the card. At least there is MorePowerTool; I wouldn't do an Elmor mod unless I were chasing scores, since it's impractical for regular use.




Blameless said:


> This usually means the memory wasn't completely stable.


No, you are completely wrong on this one. I can run the card mining fine for a week non-stop, load a different Wattman profile to play a game, then load back the previous profile and it will do this ****. The Wattman part of the driver is broken beyond repair; I don't think AMD even knows what they're doing. They probably ****ed up something in the hardware design, making the tuning parameters difficult to control via software. Or whatever. Soft- or hard-restarting the driver with Ctrl+Win+Shift+B, or using restart64.exe from CRU, fixes the problem. It's even present on Linux.


----------



## cfranko

Hey everyone, since I repasted my GPU I've been having temperature issues with my air-cooled 6900 XT. My mistake while repasting was that I screwed one spring screw in fully, then another fully, not in a cross pattern, and I forced the metal bracket in the middle of the card by putting pressure on it to get it screwed in. So I took it apart again and did it correctly: I tightened each spring screw little by little in a cross pattern instead of fully seating one before the next. After the second repaste my temps were fine at first, but after a while they got terrible again. I suspect I may have permanently bent the PCB on the first attempt and now there isn't proper contact. Is this possible? I've ordered a waterblock for the GPU, but I'm afraid there will be contact issues on the waterblock too because of the possibly bent PCB. Does anyone know whether it's possible to permanently bend the PCB and end up with contact issues? From the outside it looks fine.


----------



## Thanh Nguyen

I have a 6900 XT Red Devil Ultimate and used MPT to set 380/400, but I only hit 2550 MHz during Time Spy. What did I do wrong?


----------



## J7SC

Thanh Nguyen said:


> I have a 6900xt red devil ultimate and use MPT to get 380/400 but hit only 2550mhz during timespy. What did I do wrong?


What do you get re. clocks in TimeSpy all stock, ie. WITHOUT any MPT PL increase ?


----------



## Thanh Nguyen

I scored 21 576 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com

Why do I only get a ~22k graphics score at a 2700 MHz clock?


----------



## jonRock1992

Thanh Nguyen said:


> I scored 21 576 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Why I have only 22k graphic score with 2700mhz clock


Either power-limit or temperature throttling, or a combination of both. Time Spy will pull more than 430W.


----------



## J7SC

Thanh Nguyen said:


> I scored 21 576 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Why I have only 22k graphic score with 2700mhz clock


That's why I asked about a stock-settings run. Great GPU, but as @jonRock1992 said, you're bumping into throttling courtesy of the boost algorithm as MPT PL increases. The easiest way to check: do a stock run with GPU-Z monitoring ('max' values) open for all temps, power and voltages, then, after a cool-down, another run with your preferred MPT PL, again with GPU-Z monitoring open.
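Once you have the two GPU-Z runs, one quick way to read them is to compare how much of the extra clock actually became score. A hedged sketch with illustrative numbers (not from this thread):

```python
# Compare score gain against clock gain between a stock run and an OC run.
# A ratio near 1.0 means clean scaling; well below 1.0 suggests a limiter
# (power, temperature, memory) is biting during the OC run.
def scaling_efficiency(base_score: float, base_clock: float,
                       oc_score: float, oc_clock: float) -> float:
    return (oc_score / base_score) / (oc_clock / base_clock)

# Illustrative numbers only:
print(round(scaling_efficiency(20000, 2400, 22000, 2700), 2))  # 0.98
```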


----------



## Thanh Nguyen

Is there a 1000W BIOS for these cards?


----------



## Blameless

St0RM53 said:


> No you are completely wrong on this part. I may run the card mining fine for a week non-stop, load a different wattman profile to play a game, then load back the previous profile and it will do this ****.


I see the same behavior...until I back off slightly on memory clock. A stable memory clock always results in consistent hashrates, in my experience, and does not require the card/driver to be restarted.



St0RM53 said:


> Soft or Hard restarting the driver with ctrl+win+shift+b or using the restart64.exe from CRU fixes this problem. This problem is even present on Linux.


The driver cannot be expected to restart itself with every clock change as that would make different power states essentially unusable. If reverting from your peak clocks requires this, I'd call that a stability issue with those clocks.


----------



## Oversemper

Disassembled my Liquid Devil 6900 XT. What a mess the thermal paste was! I lost the silicon lottery: my card only runs 2540MHz/2150MHz without crashing at 1.175V with the power limit extended to 400W, and from the start the junction was 90-95C at 370-380W peaks (in-game, no FPS cap). Cleaning off the factory mess and applying liquid metal (Thermalright Silver King) shaved about 20 degrees off the junction temperature! Now it's 75-77C in torture tests and even less in games. Around the die I put the silicone isolation compound used in radio electronics. I also used thicker washers for the GPU screws, and Arctic Thermal Pad 1mm as new thermal pads (the originals get torn when the waterblock is removed). Some pictures under the cut:


Spoiler: Pictures
UPDATE: forgot to attach measurements:


----------



## Oversemper

cfranko said:


> After my second attempt of repasting at first my temps were fine but after a while the temps got terrible again. I suspect that I may have bent the PCB permanently due to the mistake I did and now there isn’t proper contact. Is this possible? I ordered a waterblock for the GPU now but I am afraid that there will be contact issues on the waterblock as well because of the possibly bent PCB. Does anyone have any clue if it is possible to permanently bent the PCB and have contact issues?


Once, when I was installing a custom waterblock + backplate on a GPU, I had an accident. I was screwing down one of the backplate's screws and it got stuck in the threaded standoff it was supposed to thread into, while still turning through the thread in the backplate. It was a middle screw, between two other fully tightened ones. I applied some force and it began to turn again, but tightly. Guess what: the standoff had torn loose and I was pushing it hard into the board against the backplate. The board got wavy, with a 2-3mm bow at the spot where it was being pushed. When I realized it and saw the wavy board from the side, I started wondering where to buy a second card (it was a Radeon VII + EKWB block and backplate), but the gods were kind that day and the card survived without any detrimental effect.

Regarding your situation: I think tightening one screw cannot introduce any permanent deformation. See my pictures above, apply the same kind of even layer of thermal paste, and make sure all pads are in place.


----------



## Thanh Nguyen

I already pumped it to 450W in MPT and the card hits only 420W, but it still scores only 22k graphics.


----------



## cfranko

Oversemper said:


> Once when I was installing a custom water block+backplate on the GPU I had an accident. I was screwing down a backplate's screw and the screw got stuck in the "screw tower with thread inside" which the screw is to be received (screwed) in (into), but it was still screwing through the thread in the backplate. It was a middle screw (between two other fully screwed screws). So, I've applied some force and it began to screw again but kinda tightly. Guess what, it turned out that the "screw tower" got torn and I was pushing it hard into the board against the backplate. The board got "wavy" with like 2-3 mm wave at the place where it was pushing it. When I realized it and saw the wavy board from the side I started wandering where to buy a second card (it was Radeon 7 +ekwb block and backplate), but Gods were kind that day and card survived without any detrimental effect.
> 
> Regarding your situation. I think, tightening one screen cannot introduce any permanent deformation. See my pictures above and do the same even layer of thermal paste and make sure all pads are in place.


The thermal pad I circled was kind of out of place, can that be the issue?


----------



## lestatdk

Thanh Nguyen said:


> I already pump to 450w in MPT and the card hits only 420w but it still scores only 22k graphic.


You shouldn't need that much wattage to reach 23k+.

Try these settings; they got me a 23500 GPU score on a pretty mediocre card.

Make sure to monitor your hotspot temp. Maybe the card is throttling.


----------



## Thanh Nguyen

Oh, I put the min frequency at 500. Maybe that's why.


----------



## lestatdk

Thanh Nguyen said:


> Oh I put the min frequency at 500. Maybe thats why.


For me the min frequency has to be 120-140 below max. If I set it to 500 it becomes unstable. If I set it to max - 100 it's also unstable sometimes. This seems like the best setting for now. Driver is the 21.8.1 version
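lestatdk's offset rule can be written down as a quick sanity check. Note the 120-140MHz window is his empirical finding for his card, not an AMD spec, and the helper names here are made up for illustration:

```python
def suggest_min_freq(max_freq_mhz, offset=130):
    """Derive a min clock from the max using the empirical 120-140MHz
    offset window; 130 simply splits the difference."""
    return max_freq_mhz - offset

def offset_ok(min_freq_mhz, max_freq_mhz, low=120, high=140):
    """True when the min clock sits inside the stable window below max."""
    return low <= (max_freq_mhz - min_freq_mhz) <= high
```

By this rule, a min of 500 against a 2700MHz max is far outside the window, which matches the instability Thanh Nguyen saw.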


----------



## Oversemper

cfranko said:


> The thermal pad I circled was kind of out of place, can that be the issue?


Theoretically yes, but I doubt it. To be sure, put new thermal pads on the board, but measure the thickness of the originals and use the same thickness (it can differ between card manufacturers). You also need good mounting pressure without stripping the screw heads with your screwdriver. The original washers under the screws around the die may be compressed so much that the screws bottom out in their holes and no longer provide any clamping force. Find proper washers. I used 1mm-thick metal washers that are small enough in diameter not to short-circuit anything on the board, but be careful with those! You're better off finding new plastic washers.


----------



## cfranko

Oversemper said:


> Theoretically yes, but I doubt that. To be sure place new thermal pads on the board, but you need to measure the thickness of the original pads and use the same thickness pads (they may differ for different card manufactures). And you need a good pressure-tightening but without broking screwheads with your screwdriver. Your original washers under the screws around the die may be indented so much that screws touch bottom of the holes and do not provide any tightening. Find proper washers. I used 1mm-thick metal washers which in diameter are small and does not short-circuit anything on the board. Be careful with those! You better find new plastic washers.


I bought a waterblock for my GPU, I am going to build a custom loop. I don't want to bother with this air cooler anymore.


----------



## lestatdk

cfranko said:


> I bought a waterblock for my GPU, I am going to build a custom loop. I don't want to bother with this air cooler anymore.


You won't regret it. Only regret I have is not buying a loop for my card sooner, but then again I had to do some research to find a block that could match


----------



## cfranko

lestatdk said:


> You won't regret it. Only regret I have is not buying a loop for my card sooner, but then again I had to do some research to find a block that could match


Luckily Bykski had a waterblock for my GPU and I easily ordered it off of AliExpress. It's gonna be a long process to build this loop, but I think it's gonna be fun.


----------



## Thanh Nguyen

Finally


I scored 22 954 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10


www.3dmark.com


How can I get to a 25k graphics score?


----------



## Oversemper

[offtopic]


cfranko said:


> Luckily Bykski had a waterblock for my GPU and I easily ordered it off of Aliexpress. Its gonna be a long process to build this loop but I think its gonna be fun.


Reserve a full weekend for your first build. And watch at least 2 hours of custom-loop building videos before you even unpack your stuff. I watched maybe 6-7 hours of tutorials and show-offs of custom loops and did mine on the first try without any mistakes:
(13) First blood | Overclock.net 
Also use gloves; those are not hard to find in covid times 
[/offtopic]


----------



## cfranko

Oversemper said:


> [offtopic]
> 
> Reserve a full weekend for your first build. And watch a least 2 hours of custom loop building videos before you even unpack your stuff. I watched maybe 6-7 hours of tutorials and show-offs of custom loops and did that from the first try without any mistakes:
> (13) First blood | Overclock.net
> Also use gloves, these are not hard to find in covid times
> [/offtopic]


I already made one mistake by buying non-rotary 90-degree fittings, but that's the only mistake so far, and thankfully a cheap one. I currently have everything I need; I'm just waiting for my Thermaltake View 51 to come back from RMA. Unfortunately the fan controller on the case exploded, so I had to RMA the case.


----------



## lestatdk

Thanh Nguyen said:


> Finally
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 22 954 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> how can I get to 25k graphic?


So, what did you change ??


----------



## Thanh Nguyen

DDU'd the driver, redid MPT, and used MSI Afterburner instead of the AMD software.


----------



## The EX1

St0RM53 said:


> Thanks a lot for the reply. It's a shame AMD decided not give us full control over the card. At least there is morepowertool; I wouldn't do an elmor mod unless i was chasing scores since it's impractical for regular usage.
> 
> 
> 
> 
> No you are completely wrong on this part. I may run the card mining fine for a week non-stop, load a different wattman profile to play a game, then load back the previous profile and it will do this ****. The Wattman part of the driver is broken beyond repair. I don't think AMD even knows what they are doing. They probably ****ed up the design on the hardware side somewhat making it difficult to control tuning parameters via software. Or whatever. Soft or Hard restarting the driver with ctrl+win+shift+b or using the restart64.exe from CRU fixes this problem. This problem is even present on Linux.


Mining is more tolerant of memory hw errors. HWINFO used to be able to show you the amount of hardware errors your card would encounter while mining. I have been switching between my mining and gaming profiles on my 6900 XT since March and have only seen this behavior when my mining settings were unstable enough to cause driver timeouts without needing to restart the whole system. Your behavior sounds exactly like driver timeouts. Each time it happened, performance dropped for me. Maybe this is causing your issue.


----------



## cfranko

I have a Bykski A-AR6900XT-X waterblock, how can I know if its pure copper or nickel plated?


----------



## tolis626

cfranko said:


> I have a Bykski A-AR6900XT-X waterblock, how can I know if its pure copper or nickel plated?


... That's a weird question. If it's the color of copper, it's copper. If it's a shiny silver, it's nickel plated.


----------



## ZealotKi11er

cfranko said:


> I have a Bykski A-AR6900XT-X waterblock, how can I know if its pure copper or nickel plated?


Nickel Plated Copper.


----------



## prostreetcamaro

jfrob75 said:


> Recently updated to the GB Aorus 6900XT Extreme WB replaceing my Poworcolor reference 6900XT, which was on water as well. Here is my latest and best Timespy results with the GB GPU. It managed to break the 23000 graphics score level
> 
> I had set min freq to 2690 and max freq to 2790 MHz for this run. Memory was set to fast timing @ 2138MHz. MPT power set to 400 watts.
> 
> View attachment 2515592



Very nice! I have the same card and same CPU. Here is where I have landed so far.


I scored 22 380 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11


www.3dmark.com


----------



## CS9K

marcoschaap said:


> For instance, wth is DcBtc voltage?


After about 15 minutes of googling, I found a few articles that mentioned "dcbtc". It appears that the settings are in relation to board power calibration while the card is in an idle state.
[PATCH] drm/amd/pm: Support board calibration on aldebaran (the cyan colored text)
linux-stable (line 46)

To answer the rest of your questions, I have been providing as much information as I can to hellm and others over on Igor's Lab in the main MPT Beta thread. Head over to Igor's Lab's forums and keep an eye out on the thread, and the main beta page.






News - New beta versions of MorePowerTool (MPT) and Red BIOS Editor – BIOS unlock for RDNA and all frequencies for RDNA2 cards!


Since we don't want to fragment our software further and confuse users, we are now splitting RBE into two paths as well, and together with MPT we offer a dedicated beta page with the latest beta versions alongside a page with the final, stable versions. So anyone who...


www.igorslab.de


----------



## ZealotKi11er

CS9K said:


> After about 15 minutes of googling, I found a few articles that mentioned "dcbtc". It appears that the settings are in relation to board power calibration while the card is in an idle state.
> [PATCH] drm/amd/pm: Support board calibration on aldebaran (the cyan colored text)
> linux-stable (line 46)
> 
> To answer the rest of your questions, I have been providing as much information as I can to hellm and others over on Igor's Lab in the main MPT Beta thread. Head over to Igor's Lab's forums and keep an eye out on the thread, and the main beta page.
> 
> 
> 
> 
> 
> 
> News - New beta versions of MorePowerTool (MPT) and Red BIOS Editor – BIOS unlock for RDNA and all frequencies for RDNA2 cards!
> 
> 
> Since we don't want to fragment our software further and confuse users, we are now splitting RBE into two paths as well, and together with MPT we offer a dedicated beta page with the latest beta versions alongside a page with the final, stable versions. So anyone who...
> 
> 
> 
> 
> www.igorslab.de


Have you noticed the default clock of your card changes from boot to boot by a few MHz? That is due to dcbtc.


----------



## tolis626

Hey guys, I am in need of assistance.

So, I started overclocking my system RAM to squeeze that extra bit of performance out of my 5900x. Thing is, B-dies get pretty toasty at 1.5V, so I wanted to try some things. One of those was flipping my radiator fans to intake, so that they can blow fresh-ish air on the DIMMs to cool them (My fan setup is 3x140mm Corsair Maglev fans as intake in the front and 1x140mm as exhaust in the back, and then 3x120mm Maglev fans on top as my radiator fans in exhaust). That worked, but that's where issues started. See, I thought it was a good opportunity to check my GPU for sag. And there was a lot of it. To the point that the PCB is slightly bent. So I took it out, laid it flat and pushed a bit on it to straighten it (gently). I also, stupidly, tried to check if any of the cooler screws were loose, and to my surprise I could slightly tighten the two exposed screws at the back of the die (I did so very slightly, but still it was a stupid thing to do).

Anyway, I put the card back in and also installed the VERY fiddly anti-sag bracket that came with it. With the rad fans flipped, hotspot temps went to 109C during a run of Superposition (I tested my gaming overclock of 2600MHz core/2150MHz memory, 1150mV in Radeon Settings/1175mV in MPT with 330W MPT PL, same aggressive fan curve too). Which was sad, because it did cool the RAM and the liquid in my AIO didn't get very hot like it does when the GPU is at full tilt. Alright, I thought, time to go back. And so I flipped the fans to their original position and went to try everything out. My hotspot temps reach 100C during Superposition with the same settings. It's an improvement over the previous setup, but still worse than what I had before. Not only that, but my score was also like 100 points lower.

Unfortunately I can't do more testing right now, but I am worried that I might have messed up. Part of me thinks that the cooler is now making uneven contact with the die due to tightening only two screws and with the bending and stuff. Or maybe I'm getting paranoid because it's a very expensive piece of kit that I'm scared to mess up. Anyone have any ideas? Pretty please? 

Thanks in advance boys!

PS : The reason that I tightened only the two screws like an absolute idiot is that there are two of these "warranty void if removed" stickers on the other two screws, one with a W and the other one with a D. I don't know if Sapphire can deny a warranty claim if those are removed in Germany (where I bought the card from), so I'd love your take on that. Not only for my current predicament, but also because I'd like to repaste my GPU with a better paste, like Kryonaut.


----------



## lDevilDriverl

Hi,
Anyone know if there is a WB for custom cooling on the Sapphire Radeon RX 6900 XT TOXIC Extreme Edition 16384MB (11308-08-20G)?
Thanks in advance!)


----------



## lestatdk

tolis626 said:


> Hey guys, I am in need of assistance.
> 
> So, I started overclocking my system RAM to squeeze that extra bit of performance out of my 5900x. Thing is, B-dies get pretty toasty at 1.5V, so I wanted to try some things. One of those was flipping my radiator fans to intake, so that they can blow fresh-ish air on the DIMMs to cool them (My fan setup is 3x140mm Corsair Maglev fans as intake in the front and 1x140mm as exhaust in the back, and then 3x120mm Maglev fans on top as my radiator fans in exhaust). That worked, but that's where issues started. See, I thought it was a good opportunity to check my GPU for sag. And there was a lot of it. To the point that the PCB is slightly bent. So I took it out, laid it flat and pushed a bit on it to straighten it (gently). I also, stupidly, tried to check if any of the cooler screws were loose, and to my surprise I could slightly tighten the two exposed screws at the back of the die (I did so very slightly, but still it was a stupid thing to do).
> 
> Anyway, I put the card back in and also installed the VERY fiddly anti-sag bracket that came with it. With the rad fans flipped, hotspot temps went to 109C during a run of Superposition (I tested my gaming overclock of 2600MHz core/2150MHz memory, 1150mV in Radeon Settings/1175mV in MPT with 330W MPT PL, same aggressive fan curve too). Which was sad, because it did cool the RAM and the liquid in my AIO didn't get very hot like it does when the GPU is at full tilt. Alright, I thought, time to go back. And so I flipped the fans to their original position and went to try everything out. My hotspot temps reach 100C during Superposition with the same settings. It's an improvement over the previous setup, but still worse than what I had before. Not only that, but my score was also like 100 points lower.
> 
> Unfortunately I can't do more testing right now, but I am worried that I might have messed up. Part of me thinks that the cooler is now making uneven contact with the die due to tightening only two screws and with the bending and stuff. Or maybe I'm getting paranoid because it's a very expensive piece of kit that I'm scared to mess up. Anyone have any ideas? Pretty please?
> 
> Thanks in advance boys!
> 
> PS : The reason that I tightened only the two screws like an absolute idiot is that there are two of these "warranty void if removed" stickers on the other two screws, one with a W and the other one with a D. I don't know if Sapphire can deny a warranty claim if those are removed in Germany (where I bought the card from), so I'd love your take on that. Not only for my current predicament, but also because I'd like to repaste my GPU with a better paste, like Kryonaut.


The stickers are illegal according to US laws, but don't know about Germany.

If you want to re-paste you'd have to take them off regardless , so maybe do that sooner instead of later. Since you only tightened 2 screws the cooler is probably not having proper contact with the die anymore hence the higher temps


----------



## kairi_zeroblade

lestatdk said:


> The stickers are illegal according to US laws, but don't know about Germany.


wow..lucky..in asia they are strict..I emailed PowerColor about my plans on putting a WB on mine, they replied it will automatically void warranty..I tried the pencil method the sticker won't come off..I tried the heat gun and carefully flip the sticker still no dice..


----------



## LtMatt

kairi_zeroblade said:


> wow..lucky..in asia they are strict..I emailed PowerColor about my plans on putting a WB on mine, they replied it will automatically void warranty..I tried the pencil method the sticker won't come off..I tried the heat gun and carefully flip the sticker still no dice..


You can buy the stickers via Ali Express so if you ever need to RMA, you can just put new ones on when you return the product. I’ll send the link later when I’m at the pc.


----------



## EastCoast

lDevilDriverl said:


> Hi,
> Any one know if there is WB for custom cooling on Sapphire Radeon RX 6900 XT TOXIC Extreme Edition 16384MB (11308-08-20G)?
> Thanks in advance!)











Not that I know of. A GPU-only waterblock with a fan is a bit more efficient than a full-cover waterblock, and it was the traditional way of watercooling a GPU back in the day. A full-cover block may offer more surface area (depending on the block) but doesn't always actively cool the MOSFETs, VRMs, and VRAM. So if you want a full-cover waterblock, you have to check whether the flow channels actually cover those areas properly.

This is why I've never been a fan of full-cover waterblocks. With just a GPU block and fan combo you actively cool all areas of the card without worrying about heat saturation of the radiator, especially when the radiator is shared with the CPU. This is why, IMO, AIBs favor GPU-only waterblocks over full-cover blocks, BOM considerations included, of course.

Having said that, if you still prefer an OEM 6900 XT and want to buy a full-cover waterblock, make sure it actively cools the other components of the card rather than passively cooling them.


----------



## tolis626

lestatdk said:


> The stickers are illegal according to US laws, but don't know about Germany.
> 
> If you want to re-paste you'd have to take them off regardless , so maybe do that sooner instead of later. Since you only tightened 2 screws the cooler is probably not having proper contact with the die anymore hence the higher temps


Yeah, I was afraid you were going to say that. Oh well, it's performing fine for now, but I will get down to it at some point. My card's hotspot temps were mediocre to bad from the beginning, and I am quite sure that it's limiting my overclocking potential (when it's hot, it drops core voltage I think). It needs the Kryonaut. I'm just afraid to do it on such an expensive piece of kit. I miss my old 390x, I had probably taken it apart like 20 or 30 times, just testing different stuff. 


LtMatt said:


> You can buy the stickers via Ali Express so if you ever need to RMA, you can just put new ones on when you return the product. I’ll send the link later when I’m at the pc.


And that's what I'm gonna do. These stickers should be banned everywhere, not just the US. I mean, what do they expect? That when the thermal paste inevitably dries out we'll send them the card for RMA? Or that we'll throw it in the trash and get a new one when it's otherwise working perfectly? If I somehow damage the card, I'm gonna own my mistake and admit it, but just for repasting I will install these probably counterfeit stickers and never bat an eye.


----------



## Cjsxt

Hi all, I've been following this thread for a few months with interest. I have a problem with my 6900 XT Devil cards on Alphacool waterblocks. Both run around the same temps, but one card scores exactly 1000 points less in Time Spy 😂 on the same settings.

What concerns me at the moment is my temps. I'm sure they have crept up. I see a 65-degree average run temp on one card, set at 370W in MPT. When I look at other people's temps I've seen some as low as 40 degrees for an average run.

My best card won't exceed 22500, using:

2485/2585/2150 fast / 370 watts.

Both cards together got me 43200 at those settings, 66 degrees average temp.

Liquid metal arrived today, so I'll strip the cards and inspect all the clearances. I'm thinking there could be an issue with these blocks.

Water loop temp never exceeds 31 degrees.

Thanks for listening


----------



## EastCoast

Cjsxt said:


> Hi all I've been following this for a few months with interest. I have a problem with my 6900xt devil cards on alpha cool water blocks. Both run around the same temps but one card scores 1000 points exactly less in time spy 😂 on the same settings.
> 
> What concerns me at the moment. Is my temps. I'm sure they have crept up. I see 65 degrees average run temp on one card . Set at 370 mpt. When I look at other people's temps I've seen some as low as 40 Deg for an average run.
> 
> My best card won't exceed 22500 . Using
> 
> 2485/2585/2150 fast/370 watts.
> 
> Both cards together got me 43200 at those settings 66 degrees average temp
> 
> Liquid metal arrived today so I'll strip the cards and inspect all the clearances . I'm thinking there could be an issue with these blocks .
> 
> Water loop temp never exceeds 31 degrees .
> 
> Thanks for listening


What kind of radiator are you using?


----------



## Cjsxt

EastCoast said:


> What kind of radiator are you using?


I have two monster-thick Thermaltake 360s top and bottom and a slim SE 420 at the front.

The CPU runs lovely and cool for a 5950X at 4.8: it maintains 50 degrees through the entire run and hits 60 on the last test.


----------



## gtz

I'll be joining soon, my 6900XT arrives tomorrow. I just purchased the cheapest one I could find new from a retailer which ended up being the Sapphire Nitro+. Everywhere I have read basically called it a stretched reference design with RGB. I am fine with that just excited to see what it can do. My 6800XT with latest drivers with only adjusting the power limit and fan curve netted me 21000 in timespy.


----------



## kairi_zeroblade

LtMatt said:


> You can buy the stickers via Ali Express so if you ever need to RMA, you can just put new ones on when you return the product. I’ll send the link later when I’m at the pc.


its not an ordinary sticker..if it was I already did that..its the TUL warranty sticker from PowerColor..unless they have that sticker as well in Aliexpress..lol

EDIT: I ***kin found one for Powercolor..problem solved..gonna order a WB early tomorrow..lmao..


----------



## EastCoast

Cjsxt said:


> I have two monster thick 360 thermal takes top and bottom and a slim SE 420 at the front.
> 
> Cpu runs lovely and cool for a 5950x at 4.8 . Maintains 50 deg through the entire run and hits 60 on the last test.


Is that 1 360 rad for the gpu and 1 360 rad for the cpu or are you using 2 360 rads in serial?


----------



## Cjsxt

EastCoast said:


> Is that 1 360 rad for the gpu and 1 360 rad for the cpu or are you using 2 360 rads in serial?


Its all in series on the same loop cpu gpu1 gpu2 
2 x 360 monster 90mm thick rads and a slim 420 . 12 fans total


----------



## jonRock1992

kairi_zeroblade said:


> its not an ordinary sticker..if it was I already did that..its the TUL warranty sticker from PowerColor..unless they have that sticker as well in Aliexpress..lol
> 
> EDIT: I ***kin found one for Powercolor..problem solved..gonna order a WB early tomorrow..lmao..


Can you link those stickers? I destroyed mine when installing my wb lol.


----------



## kairi_zeroblade

jonRock1992 said:


> Can you link for those stickers? I destroyed mine when installing my wb lol.











1.9US $ |360 pcs of 6 mm diameter TUL WARRANTY VOID IF REMOVED tamper evident stickers V50|Assorted Stickers| - AliExpress






www.aliexpress.com


----------



## LtMatt

kairi_zeroblade said:


> its not an ordinary sticker..if it was I already did that..its the TUL warranty sticker from PowerColor..unless they have that sticker as well in Aliexpress..lol
> 
> EDIT: I ***kin found one for Powercolor..problem solved..gonna order a WB early tomorrow..lmao..


Told you, Lol.


----------



## Cjsxt

So has anyone else had high temps with an alpha cool block ?


----------



## The EX1

lDevilDriverl said:


> Hi,
> Any one know if there is WB for custom cooling on Sapphire Radeon RX 6900 XT TOXIC Extreme Edition 16384MB (11308-08-20G)?
> Thanks in advance!)


Waterblocks for the Nitro + SE work as the special edition Nitro uses the Toxic PCB. Bykski even lists the Toxic as compatible. Post results!









123.17US $ 10% OFF|Bykski Gpu Water Block For Sapphire Radeon Rx 6900 Xt 16gb Nitro+special Edition Graphics Card Cooled/radiator,a-sp6900xtse-x - Fluid Diy Cooling & Accessories - AliExpress






www.aliexpress.com


----------



## EastCoast

Cjsxt said:


> Its all in series on the same loop cpu gpu1 gpu2
> 2 x 360 monster 90mm thick rads and a slim 420 . 12 fans total


Ah, I see. That appears to be the problem. Typically when you watercool you don't need to run three rads in series. I bet that slim rad is really causing havoc with your flow and head pressure, and over time you get the problems you describe.
Just use one rad for the CPU and one rad for the GPU.


----------



## The EX1

EastCoast said:


> Ah, I see. That appears to be the problem. Typically when you water cool you don't need to have 350 rads in serial. I bet that slim rad is really causing havoc with your flow and head pressure. Over time you get the problems you describe.
> Just use one rad for the cpu and one rad for the cpu.



His fluid temp is 31 degrees, so his radiators aren't the problem. Most people run radiators in series. Monsta rads have VERY low restriction anyway. The issue here is heat transfer into the fluid, which most likely means the problem is between his GPUs and his blocks.

OP - is that 65-degree temp core or junction/hotspot?


----------



## Cjsxt

The EX1 said:


> His fluid temp is 31 degrees, his radiators aren't the problem. Most people run radiators in serial. Monsta rads have VERY low restriction anyway. The issue here is heat transfer into the fluid which most likely means the issue is between his GPU and his blocks.
> 
> OP - is that 65 degree temp core or junction/hotspot?


This is exactly what I'm thinking. The rads barely heat up. In graphics test 2 the GPU will hit 68-70 degrees and 90s on the hotspot for a little while.

Edit: I just connected a Bitspower flow meter. It's reading 1.5 LPM at 4800 RPM pump speed. I wonder if that flow rate is too low.


----------



## The EX1

Cjsxt said:


> This is exactly what I'm thinking. The rads barely heat up. In Graphics test 2 the GPU will hit 68 to 70 Deg and 90s hot spot for a little time.
> 
> Edit. I just connected a bits power flow meter . It's reading 1.5 LPM at 4800 rpm speed . I wonder if that flow rate is too low


The general rule of thumb is to be around 1GPM which is ~ 3.79 LPM. What pump are you using?
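The GPM/LPM conversion is easy to sanity-check (US gallons assumed; 1 US gal = 3.785 L):

```python
LITERS_PER_US_GALLON = 3.785411784  # exact by definition of the US gallon

def gpm_to_lpm(gpm):
    """Convert US gallons per minute to liters per minute."""
    return gpm * LITERS_PER_US_GALLON

def lpm_to_gpm(lpm):
    """Convert liters per minute to US gallons per minute."""
    return lpm / LITERS_PER_US_GALLON
```

By that math, Cjsxt's reported 1.5 LPM works out to roughly 0.4 GPM, well under the ~1 GPM rule of thumb.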


----------



## Cjsxt

The EX1 said:


> The general rule of thumb is to be around 1GPM which is ~ 3.79 LPM. What pump are you using?


EK D5. I've just wired up two in series and the temps haven't improved. I'll be repasting with liquid metal over the weekend. I'm sure I read someone here had the same problems with a Devil card on an Alphacool block 🤦. God knows what thermal paste they supplied stock.


----------



## EastCoast

Cjsxt said:


> This is exactly what I'm thinking. The rads barely heat up. In Graphics test 2 the GPU will hit 68 to 70 Deg and 90s hot spot for a little time.
> 
> Edit. I just connected a bits power flow meter . It's reading 1.5 LPM at 4800 rpm speed . I wonder if that flow rate is too low


Like I said, you are losing a lot of head pressure. An LPM that low is why you are having issues.


----------



## lawson67

jonRock1992 said:


> Can you link for those stickers? I destroyed mine when installing my wb lol.


I've bought a bunch of PowerColor anti-tamper stickers off AliExpress. They took about 2 months to get here, though.


----------



## lawson67

Cjsxt said:


> So has anyone else had high temps with an alpha cool block ?


I have the Alphacool block for my PowerColor 6900 Red Devil Ultimate and my temps are in the 60s, maxing out at about 72c after 3-4 hours of gaming at 420w, so no problems with it on my end. But I did put LM on my die as soon as the block arrived and I mounted the card.


----------



## Cjsxt

lawson67 said:


> I have the Alphacool block for my powercolor 6900 Red Devil Ultimate and my temps are in the 60's and maxes out at about 72c after about 3 - 4 hours of gaming at 420w so no problems with it my end but i did put LM on my die as soon as it came when i mounted my card


Thanks man. Can you remember the temps at stock settings?

I'm hitting that at 360 watts MPT


----------



## lawson67

Cjsxt said:


> Thanks man . Can you remember the temps at stock not settings ?
> 
> I'm hitting that at 360 watts MPT


I've never run it at stock in the waterblock; I've used 1150mv, 2800mhz min / 2900mhz max at 420w since it's been in there.


----------



## Cjsxt

EastCoast said:


> Like I said, you are losing a lot of head pressure. An LPM that low is why you are having issues.


It's not the rads. When I first assembled this unit, my runs were 52-56 degrees, then temps crept up over time. And it's not the pump flow/pressure either. Today I added a D5 in series, and although the flow increased, temps did not go down. I also removed the 420 at the front today; flow went up again but temps did not decrease.

I will re-paste the cards 🤦. I read a case recently where someone had to re-tighten the GPU block onto the die on a regular basis.


----------



## lDevilDriverl

Got the new Asus ROG Radeon RX 6900 XT STRIX LC T16G, and the stock cooling is not so bad even compared to the EKWB full-cover on my 6800xt, but there's still a 20 degree difference.








I scored 46 649 in Fire Strike


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 49152 MB, 64-bit Windows 10




www.3dmark.com


----------



## J7SC

Cjsxt said:


> It's not the rads. When I first assembled this unit. My runs were 52-56 degrees then crept up over time. And it's not the pump flow/ pressure either. Today I added a D5 in series and although the flow increased temps did not go down. I also today removed the 420 at the front . Flow went up again but temps did not decrease.
> 
> I will re-paste the cards 🤦 . I read a case recently where someone had to re- tighten the GPU block onto the die on a regular basis


...over at the 3090 thread, some folks reported similar issues...myself included until I switched to a 'thicker' paste (from Kryonaut to GC Gelid Extreme)


----------



## jonRock1992

I haven't had this issue with my bykski waterblock, and I'm using liquid metal. I just tightened all of the screws as tight as I could get them without stripping them. I think my temps got a little worse over time with the original air cooler though, but I'm not 100% sure. With the Bykski block and liquid metal, I haven't seen my hotspot temp go above 68C at 428W with an overclock. My loop is dedicated to my GPU though, so I don't have to worry about the CPU interfering with temps. I'm just using a barrow pump/res at full blast with a 29mm thick 360mm rad using noctua a12x25's at 1300RPM. 

This is the waterblock that I'm using: Bykski RX 6900XT GPU Water Block For Powercolor RX 6900XT 6800XT Red Devil / Red Dragon , Graphic Card Cooler A-PC6900XT-X

This is the rad that I'm using: Amazon.com: Black Ice Nemesis 360GTS Radiator : Electronics

This is the pump/res that I'm using: Amazon.com: Barrow PWM Speed Control Mini Pump/Reservoir Combo, Black : Electronics

And this is the liquid metal that I'm using. I think it works better than conductonaut: Amazon.com: Thermalright Liquid Metal Thermal Paste, 79 W/mK High Performance, Silver King Heatsink Paste, CPU for All Coolers, 1 Grams with Cleanser and Spreader : Electronics


----------



## kairi_zeroblade

jonRock1992 said:


> This is the waterblock that I'm using: Bykski RX 6900XT GPU Water Block For Powercolor RX 6900XT 6800XT Red Devil / Red Dragon , Graphic Card Cooler A-PC6900XT-X


I ordered the same block, as this is the only available one in my region, how are your temps using this?? (the wait is 2 weeks..lol)



jonRock1992 said:


> This is the pump/res that I'm using: Amazon.com: Barrow PWM Speed Control Mini Pump/Reservoir Combo, Black : Electronics


I also have the same pump, 2 of these actually. It's not a bad pump at all (DDCv3.2); it's silent at max speeds (I mostly hear my Scythe GTs more than this). I used to have 6-6.4lpm w/o the QDCs; I just attached 2 QDCs on the loop and now flow is down to 4.4lpm flat, but temps are good/better by 1 degree (swapped out my 240 thick ass rads for a 360 thick ass rad, so 2x360, 1 slim and 1 thicc). Thinking of buying 2 more, or I might add a D5 on the loop when I buy a new case; I am all cramped up in a Fractal Define R6 now..


----------



## Neoki

I have joined the party. Don't have my case yet, and re-thinking the radiators. Was planning on a single Corsair XD5 pump, and Hardware Labs 2xGTS480,1xGTX 360, 1x GTS120. But I think after reading more that's way too much restriction so I'm thinking of downgrading to XSPC 2xTX480's, 1xTX360, 1xTX120 (or remove the 120 entirely). Bummer is I was going all white setup and can't find white XSPC in stock in the sizes I want.

Currently using 1 GTX360 with a box fan blowing over the bench setup. Looks like my Gigabyte 6900 xt Waterforce Extreme won't go above 2750mhz Core/2100mhz Memory no matter what I do. And the test won't push it up even to 2750, it's only recording 2700 even though I have a MPL 400w tune to it. So apparently I got a dud for the core clocks. I'm still trying to tune the right PBO/Auto-OC settings for my 5950x on the DarkHero.











----------



## jonRock1992

kairi_zeroblade said:


> I ordered the same block, as this is the only available one in my region, how are your temps using this?? (the wait is 2 weeks..lol)
> 
> 
> 
> I also have the same pump, 2 of these actually, its not a bad pump at all..(DDCv3.2) its silent at max speeds (I mostly hear my Scythe GT's than this), used to have 6-6.4lpm w/o the QDC's I just attached 2 QDC's on the loop and now flow is down to 4.4lpm flat, but temps are good/better by 1 degree (swapped out my 240 thick ass rads for a 360 thick ass rad so 2x360, 1 slim and 1 thicc), thinking of buying 2 more or I might add a D5 on the loop when I buy a new case, I am all cramped up on a Fractal Define R6 now..


Temps are really good I have yet to see hotspot temp go above 68C. Hotspot temp is usually in the low to mid 60's during the most graphically intensive games with ray tracing. I'm just using the thermal pads that came with it. I think memory temps are around 50C with an overclock. It's been awhile since I've monitored memory/vrm temps.


----------



## CS9K

J7SC said:


> ...over at the 3090 thread, some folks reported similar issues...myself included until I switched to a 'thicker' paste (from Kryonaut to GC Gelid Extreme)


I can second this, @Cjsxt

If you search this thread for my posts, I go through the "why" about thicker pastes for bare-die applications. I personally use GC-Extreme, as do most of my friends; reference RX 6900 XT's and 3080 FE all with water blocks. Temps have been good for months!


----------



## EastCoast

Cjsxt said:


> *It's not the rads. When I first assembled this unit. My runs were 52-56 degrees then crept up over time*. And it's not the pump flow/ pressure either. Today I added a D5 in series and although the flow increased temps did not go down. I also today removed the 420 at the front . Flow went up again but temps did not decrease.
> 
> I will re-paste the cards 🤦 . I read a case recently where someone had to re- tighten the GPU block onto the die on a regular basis


Everything you said points to a head pressure/flow problem IMO. 1.5 LPM is very low. Even if you increase head pressure slightly, as your post suggests, I wouldn't expect a change in temperatures from it; there would need to be a substantial increase in head pressure, and your reply doesn't seem to imply that. Even after adding a second pump you still have the second radiator, whereas originally you were using one water pump, 3 radiators, and the waterblocks.
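For what it's worth, the coolant energy balance puts a bound on what low flow alone can cost, via Q = m_dot * c * dT. A sketch with assumed numbers (400 W combined loop load, water coolant):

```python
# Coolant energy balance: how much does the fluid itself warm up per pass
# at a given flow rate? The 400 W combined loop load is an assumption.

WATER_DENSITY_KG_PER_L = 0.998   # at ~30 C
WATER_SPECIFIC_HEAT = 4186       # J/(kg*K)

def coolant_delta_t(watts: float, lpm: float) -> float:
    """Temperature rise (K) of the coolant across the heat sources."""
    m_dot = lpm / 60 * WATER_DENSITY_KG_PER_L  # mass flow in kg/s
    return watts / (m_dot * WATER_SPECIFIC_HEAT)

print(f"{coolant_delta_t(400, 1.5):.1f} K")  # ~3.8 K at the 1.5 LPM reading
```

So the bulk coolant only rises a few degrees even at 1.5 LPM; where low flow really hurts is convection inside the block's fin stack, which shows up as a larger die-to-coolant gap.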


----------



## Cjsxt

EastCoast. Nice one bro. I found this earlier 🤦


----------



## kairi_zeroblade

anyone got feedback for the 21.8.2 drivers??


----------



## rodac

kairi_zeroblade said:


> anyone got feedback for the 21.8.2 drivers??


I am happy with those latest drivers, best Time Spy stats, well over 22k out of the box in graphics. The Oculus VR Link cable issue seems to be fixed, however the "minimum requirement not met" bug persists. Not sure if that's the kind of feedback expected.


----------



## lestatdk

kairi_zeroblade said:


> anyone got feedback for the 21.8.2 drivers??


Did not work for me. I'm back on 21.7.2 for now


----------



## The EX1

My EK Vector RD block and backplate came in for my Red Devil. Will post results soon.


----------



## Maulet//*//

After many Nvidia years, with per-game profiles working quite well, or at least transparently... sorry for my bad education hahah.

What do the Game pages in the Adrenalin Radeon tool do? I feel lost. It's not intuitive, not proactive, not informative.


----------



## EastCoast

Maulet//*// said:


> After many Nvidia years, with per game profiles working quite well, or at least transparently.... Sorry for my bad education hahah
> 
> What do the Game pages in Adrenalin Radeon tool do? I feel lost. It is not intuitive, not proactive, not informative


What are these game pages you speak of? Not sure what that is.

Edit:
Checks AMD Radeon Software again. Nope, don't see any game pages.

Edit:
Does a search for a tool called Adrenalin Radeon tool. Nope, didn't find it.


----------



## lestatdk

Think he means this


----------



## Maulet//*//




----------



## Maulet//*//

And if you enter each game profile (Radeon) you get a bunch of options of all kinds... In Nvidia you get one slider (performance-quality). Foolproof, hehe.
So if you are not happy, you move the slider a bit and test again. In AMD you get options, options, options... I never feel sure what's going to happen.


----------



## lestatdk

Maulet//*// said:


> And if you enter each game profile (Radeon) you get a bunch of options of all kind.... In Nvidia you get a slider (performance-quality). Fool proof. hehe.
> So if you are not happy, you move 1 slider a bit and test again. In AMD you got options options options... I never feel sure what's going to happen.


It's similar to the Nvidia control panel when you click on "Manage 3D settings", where you actually set a lot of parameters per game/app.

Like so


----------



## Maulet//*//

omg, you are right, but... for years the greenies have had GeForce Experience, which manages all this way more easily. NV Control Panel is so 80's.


----------



## Maulet//*//

Getting back on track.. Let me reformulate: for any given game, how do you get settings you can trust? In-game "auto settings" (some games have this), or do you delve into the Radeon options?


----------



## ZealotKi11er

Maulet//*// said:


> Getting back on track.. Let's reformulate: with any game how do you (trust) get the best settings? In-game "auto settings" (some games do have this) or do you delve into the Radeon options?


Personally I don't touch them. It's more to enable AMD features like Chill and FPS limiters; it's not to gain more "performance".


----------



## EastCoast

Maulet//*// said:


> View attachment 2524307


I've never seen red X's like that before. And not a single green checkmark.
🧐😳😲👆


----------



## Mol3culeZ

Hello everyone. I have been testing out a Red Devil 6900XT for the past couple of months. I installed a Bykski waterblock and at first the card behaved like a dud. It couldn't run over 2650MHz and it had to be set above 1150mV to run Time Spy without crashing. I was working on the system and suddenly the card wouldn't work at all. So I disassembled it and found a hunk of solder had somehow gotten lodged between the backplate and one of the ICs. The IC had actually lifted off the board on one side and had to be reheated. Luckily the fix worked, and after that the card has been working pretty well.

I reached a wall with it set at 2620min-2720max MHz @ 1050mV 2150MHz Ram which achieved a time spy graphics score of 23379. I got that score using MPT to raise the power limit to 350W. (403W max with the power slider raised to 15% although I only set to +5%). Nothing I did would let me go any higher. I could raise the MHz to 2725 and get a few extra FPS in timespy GPU test 1 but I would always crash on test 2. 

I read about the EVC2SX and decided to give that a try. It came in yesterday so I tore down my system and after testing it out on my 6800XT I soldered a connector for the EVC2SX to both my 6800XT (founders-air) and my 6900XT (devil-water) and put em both in my system. So far it's promising, since with the factory power table at +15% I scored 23362. I used settings that were recommended in another forum and the wattage stayed under 300W throughout the entire run. 

So now I have to figure out the optimum settings using the EVC2SX. So far I have found that when just using MPT to adjust the power table, you need to determine the lowest voltage setting that works to get high scores, which on my card is 1060-1070mV. At that voltage and with 350W, the card runs pretty steady at about 2670MHz. After the EVC2SX mod, I found the clock speed stays close to 2700MHz, and it likes using the full 1175mV. The crazy part is how the wattage stays under 300W while achieving a score only 17 points lower, and that was my first run!

So I did a few runs trying different settings, but I am still at basically the same wall. The trick, I think, is gonna be using both MPT and the EVC2SX. From what I have read and my own experience, the driver can get really messed up sometimes and has to be reset. What happens is I will open the AMD control panel and my mouse will become extremely laggy, and I have to very slowly navigate to the close button to exit it. The driver also resets without any warning, and the settings revert to Automatic mode.

Has anyone else had this issue? Anyone have any experience or knowledge of the EVC2SX? I am looking for any in-depth information that's out there. I would like to get into the 24000-point range (Time Spy graphics score) and I think it is possible.








The score in the middle is my best using MPT alone. The left is my first test using the EVC2SX and on the right is my best score using a combination of both. If I can get past the driver issues, I think I can achieve 24000pts.


----------



## weleh

What is it that you want to know?
What is it that you are asking?

I'm sorry, but with your wall of text I couldn't really understand what you want to know.

I have massive experience with the EVC, MPT and the 6900XT, so if you can put it bluntly and simply, I can give you a hand.


----------



## Justye95

Hi guys, I've searched everywhere but I can't find the exact sizes of the thermal pads for the reference RX 6900 XT. Unfortunately I stupidly threw the old pads away (I want to kill myself). Help me.


----------



## jonRock1992

Mol3culeZ said:


> Hello everyone. I have been testing out a Red Devil 6900XT for the past couple months. I installed a byski waterblock and at first the card was behaving like a dud. It couldn't run over 2650MHz and it had to be set above 1150mV to run time spy without crashing. I was working on the system and suddenly the card wouldn't work at all. So I disassembled it and found a hunk of solder had somehow gotten lodged between the backplate and one of the IC's. The IC had actually lifted off the board on one side and had to be reheated. Luckily the fix worked and after that the card has been working pretty good.
> 
> I reached a wall with it set at 2620min-2720max MHz @ 1050mV 2150MHz Ram which achieved a time spy graphics score of 23379. I got that score using MPT to raise the power limit to 350W. (403W max with the power slider raised to 15% although I only set to +5%). Nothing I did would let me go any higher. I could raise the MHz to 2725 and get a few extra FPS in timespy GPU test 1 but I would always crash on test 2.
> 
> I read about the EVC2SX and decided to give that a try. It came in yesterday so I tore down my system and after testing it out on my 6800XT I soldered a connector for the EVC2SX to both my 6800XT (founders-air) and my 6900XT (devil-water) and put em both in my system. So far it's promising, since with the factory power table at +15% I scored 23362. I used settings that were recommended in another forum and the wattage stayed under 300W throughout the entire run.
> 
> So now I have to figure out the optimum settings using the EVC2sx. So far I have found that when just using MPT to adjust the power table you need to determine the lowest voltage setting that works to get high scores, which on my card is 1060-1070mV. At that voltage and with 350W the card runs pretty steady at about 2670MHz. After the EVC2SX mod, I found the clock speed stays close to 2700MHz, and it likes using the full 1175mV. The crazy part is how the wattage stays under 300W while achieving a score only 17 points lower, and that was my first run!
> 
> So I did a few runs trying different settings, but I am still at the same wall basically. The trick I think is gonna be using both MPT and the EVC2SX. From what I have read and my own experience, the driver can get really messed up sometimes and has to be reset. What happens is I will open the AMD control panel and my mouse will become extremely laggy and I have to very slowly navigate to the close button to exit it. The driver also resets without any warnings, but the settings revert to Automatic mode.
> 
> Has anyone else had this issue? Anyone have any experience or knowledge about the EVC2SX? I am looking for any in-depth information that's out there. I would like to get into the 24000 point range bracket (timespy graphics score) and I think it is possible. .
> View attachment 2524399
> 
> The score in the middle is my best using MPT alone. The left is my first test using the EVC2SX and on the right is my best score using a combination of both. If I can get past the driver issues, I think I can achieve 24000pts.


Why not put the voltage higher and also increase the power limit? If you're chasing high Timespy GPU scores you should increase the power limit to around 430W and increase the voltage so you can reach higher clock speeds.


----------



## lawson67

jonRock1992 said:


> Why not put the voltage higher and also increase the power limit? If you're chasing high Timespy GPU scores you should increase the power limit to around 430W and increase the voltage so you can reach higher clock speeds.


It's strange: to hit a stable 2850mhz on air (20 runs of Time Spy GT2) I had to set the Vcore voltage to 1150mv, however when I put the card on the water block I had to drop that to 1125mv, as the higher vcore of 1150 was now unstable for 20 passes of GT2. So I had to lower the Vcore voltage, and to hit over a 25k graphics score I need to set MPT to 460w, OR 400w + 15% PL.
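The two routes to the same ceiling work because the driver's power-limit slider is a percentage applied on top of the MPT base value; a minimal sketch (the combination rule is my reading of how MPT and the slider interact, not official AMD documentation):

```python
def effective_power(mpt_watts: float, slider_pct: float) -> float:
    """Board power ceiling: MPT base value scaled by the driver's PL slider."""
    return mpt_watts * (100 + slider_pct) / 100

print(effective_power(460, 0))   # 460.0 via MPT alone
print(effective_power(400, 15))  # 460.0 via 400 W + 15% power limit slider
```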


----------



## J7SC

lawson67 said:


> Its strange to hit a stable 2850mhz on air (20 runs of time spy GT2) i had to set Vcore voltage to 1150mv however when i put the card on the water block i had to drop that to 1125mv as the higher vcore of 1150 was now unstable for 20 passes of GT2 , so i had to lower the Vcore voltage, and to hit over 25k graphic score i need adjust MPT 460w OR 400W + 15% PL


This happens w/ the latest boost algorithms...with w-cooling, it will boost past the air-cooled 'selected' frequency / spike higher for longer until it gets reined in. This isn't necessarily momentary max boost but spread across the whole bench.

I ran into this with both my w-cooled 3090 and 6900XT...spent days trying to figure out why I couldn't run my previous (air-cooled) profiles until I noticed that adding a bin or two_ less_ boost compared to the stable air-cooled profiles still got higher scores (same voltages).


----------



## nyk20z3

Finally a block for the 6900XT Strix -






Phanteks Innovative Computer Hardware Design







www.phanteks.com


----------



## CS9K

J7SC said:


> This happens w/ the latest boost algorithms...with w-cooling, it will boost past the air-cooled 'selected' frequency / spike higher for longer until it gets reigned in. This isn't necessarily momentary max boost but spread across the whole bench.
> 
> I ran into this with both my w-cooled 3090 and 6900XT...spent days trying to figure out why I couldn't run my previous (air-cooled) profiles until I noticed that adding a bin or two_ less_ boost compared to the stable air-cooled profiles still got higher scores (same voltages).


Well said! I observed something similar, but hadn't been able to eloquently put into words my observations. I wish this post, among many others, were part of a big FAQ.


----------



## alceryes

I just picked up the reference RX 6900 XT. I'm loving this card so far!









Time Spy RX 6900 XT - 17614.png


PNG Image



1drv.ms





Time Spy score is at stock. It seems pretty good and card is super quiet. Case has great airflow. I'd like to increase the power but was wondering how much headroom I've got before the fans start getting noisy. Any idea how far it'll go on air while keeping fans below 60%-ish?


----------



## lDevilDriverl

nyk20z3 said:


> Finally a block for the 6900XT Strix -
> 
> 
> 
> 
> 
> 
> Phanteks Innovative Computer Hardware Design
> 
> 
> 
> 
> 
> 
> 
> www.phanteks.com


I ordered the EK one for the same 6900xt Strix (T16G). It should be delivered on 17 September. We can compare our WB performance. I know it depends on the whole loop and the thermal compounds, but it will be interesting to check.
I'll be using KPX thermal paste and Thermalright Extreme Odyssey pads.


----------



## robiatti

Cjsxt said:


> EastCoast. Nice one bro. I found this earlier 🤦
> View attachment 2524106


Looks like fluid breakdown to me. I had some EK Pastel blue do the same to my loop, however for me it happened over 2 days. After that, and the headache of getting that crap removed, I have decided on clear fluid from here on out. Sure, it looks great, but it just isn't worth the hassle.


----------



## LtMatt

21.9.1 available. Radeon™ Software Adrenalin 21.9.1 Release Notes | AMD 

@ZealotKi11er, @jonRock1992, and @lawson67 have been nominated to provide Timespy/Firestrike results.


----------



## lestatdk

LtMatt said:


> 21.9.1 available. Radeon™ Software Adrenalin 21.9.1 Release Notes | AMD
> 
> @ZealotKi11er, @jonRock1992, and @lawson67 have been nominated to provide Timespy/Firestrike results.


@lawson67 will get a better result regardless, so I want results from the others before I dare try it out 😅


----------



## jonRock1992

LtMatt said:


> 21.9.1 available. Radeon™ Software Adrenalin 21.9.1 Release Notes | AMD
> 
> @ZealotKi11er, @jonRock1992, and @lawson67 have been nominated to provide Timespy/Firestrike results.


I'll try it out later today, but I don't have my hopes up lol. The last driver gave me worse performance. So if those changes are carried over to the new driver, then I should theoretically still get less performance. I don't really know much about driver development. I'm just speculating.


----------



## lestatdk

jonRock1992 said:


> I'll try it out later today, but I don't have my hopes up lol. The last driver gave me worse performance. So if those changes are carried over to the new driver, then I should theoretically still get less performance. I don't really know much about driver development. I'm just speculating.


You and me both. I had to roll back to 7.2 to get the best performance.


----------



## LtMatt

jonRock1992 said:


> I'll try it out later today, but I don't have my hopes up lol. The last driver gave me worse performance. So if those changes are carried over to the new driver, then I should theoretically still get less performance. I don't really know much about driver development. I'm just speculating.


21.7.2 may have been peak Timespy tbf, superb driver that one.


----------



## ZealotKi11er

jonRock1992 said:


> I'll try it out later today, but I don't have my hopes up lol. The last driver gave me worse performance. So if those changes are carried over to the new driver, then I should theoretically still get less performance. I don't really know much about driver development. I'm just speculating.


But now u can one click OC.


----------



## cfranko

jonRock1992 said:


> I haven't had this issue with my bykski waterblock, and I'm using liquid metal. I just tightened all of the screws as tight as I could get them without stripping them. I think my temps got a little worse over time with the original air cooler though, but I'm not 100% sure. With the Bykski block and liquid metal, I haven't seen my hotspot temp go above 68C at 428W with an overclock. My loop is dedicated to my GPU though, so I don't have to worry about the CPU interfering with temps. I'm just using a barrow pump/res at full blast with a 29mm thick 360mm rad using noctua a12x25's at 1300RPM.
> 
> This is the waterblock that I'm using: Bykski RX 6900XT GPU Water Block For Powercolor RX 6900XT 6800XT Red Devil / Red Dragon , Graphic Card Cooler A-PC6900XT-X
> 
> This is the rad that I'm using: Amazon.com: Black Ice Nemesis 360GTS Radiator : Electronics
> 
> This is the pump/res that I'm using: Amazon.com: Barrow PWM Speed Control Mini Pump/Reservoir Combo, Black : Electronics
> 
> And this is the liquid metal that I'm using. I think it works better than conductonaut: Amazon.com: Thermalright Liquid Metal Thermal Paste, 79 W/mK High Performance, Silver King Heatsink Paste, CPU for All Coolers, 1 Grams with Cleanser and Spreader : Electronics


I also have a Bykski 6900 XT block, did you put thermal pads on the backplate? I didn’t.


----------



## jonRock1992

cfranko said:


> I also have a Bykski 6900 XT block, did you put thermal pads on the backplate? I didn’t.


Nope. I just followed the instructions exactly as bykski said to install it. I just used liquid metal as the thermal interface material, and I'm using the pads that came with it. I was thinking about getting thermal putty to put between the GPU and backplate, but I don't think that it will do anything useful.


----------



## The EX1

So any 21.9.1 driver results?


----------



## jonRock1992

Same as the previous driver. Rolled back to 21.8.1


----------



## EastCoast

Is anyone of the opinion that 21.9.1 is faster than 21.7.2?


----------



## CS9K

EastCoast said:


> Is anyone of the opinion that 21.9.1 is faster then 21.7.2?


I went to 21.9.1 from 21.7.2. Aside from Adrenalin "helpfully" trying to overclock my CPU (why is that in the Adrenalin control panel, ffs), performance and clock speeds are more or less a wash between the two driver versions. I'm keeping it since 21.7.2 had the "crash when using HEVC encoding" bug, which was fixed in 21.8.x


----------



## kairi_zeroblade

what's with the CPU Auto overclock option now?? I tried 21.9.1..I am just scratching my arse right now with these new options I am seeing..


----------



## jonRock1992

kairi_zeroblade said:


> what's with the CPU Auto overclock option now?? I tried 21.9.1..I am just scratching my arse right now with these new options I am seeing..


It looks like they are doing Ryzen Master stuff in the GPU driver control panel now. I'm not a fan. I went back to 21.8.1 anyways.


----------



## The EX1

kairi_zeroblade said:


> what's with the CPU Auto overclock option now?? I tried 21.9.1..I am just scratching my arse right now with these new options I am seeing..


 Firestrike scores are the same for me with 21.9.1 compared to the last WHQL driver. Auto OC on the GPU came up with the same settings that Auto OC did in previous drivers as well.

One thing I did notice with this driver is better ultrawide support while benching. Instead of the image being stretched, it is now displayed at the correct resolution and with black bars around the video if I am not benching at 5120 x 1440.


----------



## lestatdk

I tried upgrading. And scores are the same. So will stay on this for now.

I do not see any CPU tuning stuff in the panel. I'm not complaining; I don't want it messing around with my CPU 😅


----------



## LtMatt

If you disable the start-up Task Scheduler task of Ryzen SDK, that CPU part will be removed from the Tuning menu screen.


----------



## J7SC

...apart from 'fast memory' options, what are the drawbacks of just using MSI AB after uninstalling AMD software (other than GPU driver files) ?


----------



## lestatdk

J7SC said:


> ...apart from 'fast memory' options, what are the drawbacks of just using MSI AB after uninstalling AMD software (other than GPU driver files) ?


That's the only drawback as far as I can tell. I recently tried running AB with only the AMD driver installed and it worked great. Except for no way to enable the fast timing


----------



## J7SC

lestatdk said:


> That's the only drawback as far as I can tell. I recently tried running AB with only the AMD driver installed and it worked great. Except for no way to enable the fast timing


...Tx  
There must be some sort of Win registry entry for fast timing; something to dig into when the weather outside turns bad...


----------



## jonRock1992

lestatdk said:


> That's the only drawback as far as I can tell. I recently tried running AB with only the AMD driver installed and it worked great. Except for no way to enable the fast timing


I really wish there was a way to enable the fast timing option from mpt. That would be amazing.


----------



## LtMatt

J7SC said:


> ...apart from 'fast memory' options, what are the drawbacks of just using MSI AB after uninstalling AMD software (other than GPU driver files) ?


You have no idea which options within Radeon Software are enabled by default (if any), despite the application not being installed.

If you want my advice, ditch MSI AB and use Radeon Software. 

Controversial i know, but look at my 3DMark scores for my clock speeds.


----------



## lestatdk

LtMatt said:


> You have no idea which options within Radeon Software are enabled by default (if any), despite the application not being installed.
> 
> If you want my advice, ditch MSI AB and use Radeon Software.
> 
> Controversial i know, but look at my 3DMark scores for my clock speeds.


What annoys me is the Radeon sw sometimes reverting to default even without my system having a crash. Rarely happens, but it does happen. AB is just so much simpler to work with.


----------



## CS9K

lestatdk said:


> What annoys me is the Radeon sw sometimes reverting to default even without my system having a crash. Rarely happens, but it does happen. AB is just so much simpler to work with.


I've had this happen twice over the months: Once when I was a weeee bit too ambitious with my GPU undervolt, and once when my CPU's fclk settings weren't 100% stable. Fixing both stopped the settings-reset.

I am only one data point, but it may be worth checking your settings and stability again, both for GPU and CPU.


----------



## CS9K

LtMatt said:


> If you want my advice, ditch MSI AB and use Radeon Software.


I second this. MSI Afterburner _works_ with RDNA2 GPUs, but it's never completely played well with the drivers, nor is it particularly optimized for modern AMD GPUs.

Using the full driver install with most of the extra "features" disabled in the settings works just fine for me and effectively all of the RDNA2 owners I know, some running stock and some overclocked like mine.


----------



## lestatdk

After updating to the 21.9.1 driver the Radeon software resets my power tuning to 0% after rebooting. All the other settings remain as I saved them before rebooting. *** ?










Not the end of the world I guess I just bump up my MPT and leave it at 0 from now on. Still weird.


----------



## 99belle99

lestatdk said:


> After updating to the 21.9.1 driver the Radeon software resets my power tuning to 0% after rebooting. All the other settings remain as I saved them before rebooting. *** ?
> 
> View attachment 2524882
> 
> 
> Not the end of the world I guess I just bump up my MPT and leave it at 0 from now on. Still weird.


That is weird. During install I ticked the option to reset the driver but keep my settings, which it did. So even after reboots my settings, including the power limit, don't reset like yours do. I don't know what is going on with your card.


----------



## lestatdk

99belle99 said:


> That is weird. During install I ticked the option to reset the driver but keep my settings, which it did. So even after reboots my settings, including the power limit, don't reset like yours do. I don't know what is going on with your card.


Decided to go back to 21.7.2 and everything is working fine again  guess I'll just stay here for a while LOL


----------



## ptt1982

Oohhhh... Deathloop looks good. Anyone here tried 6900xt FSR Ultra Quality, maxed out settings including RT? Bang4Buck did a quick video of it on YT. Looks like it stays above 60fps most of the time and is actually faster than a 3090. The new drivers scare me, though. Not sure if the wattage reading is off in AB only, or if the actual draw really is much higher.

@LtMatt
Why wasn't I nominated to do the new driver testing! Well okay, I've been playing games and working so I haven't had the time, I know I know I get it.


----------



## LtMatt

ptt1982 said:


> Oohhhh... Deathloop looks good. Anyone here tried 6900xt FSR Ultra Quality, maxed out settings including RT? Bang4Buck did a quick video of it on YT. Looks like it stays most of the time 60fps+ and is actually faster than 3090. The new drivers scare me, though. Not sure if the wattage is off on AB only or if the actual wattage is actually much higher.
> 
> @LtMatt
> Why wasn't I nominated to do the new driver testing! Well okay, I've been playing games and working so I haven't had the time, I know I know I get it.


Lol, you were due a vacation. That said, most users are off with COVID so expect a tag for the next driver release.


----------



## tolis626

Hey everyone. Been checking out the new driver and I noticed this in HWiNFO64. It shows ASIC power maxing out at ~450W, which is nuts considering that I have a 330W PL in MPT and +0% added in Radeon Software. TGP maxes out at the stated 330W. Temps aren't worse than they should be, so I'm not too concerned, and my PSU hasn't blown up yet (YET), so I guess it's just the reading? I dunno. I'd appreciate some feedback.


----------



## kratosatlante

New 6900 XT ASRock Formula, tested with settings from [email protected]: 600W / 600A / 65A, 2200 fCLK, 2200 fCLK boost, 1260 SOC. Hotspot is a little high at 106C peak; other temps are similar to the ASRock Phantom (96C peak, dropped to 89C after a thermal paste change).
The GPU is hungry for watts: it peaks at 533W and easily consumes 430W.


----------



## kratosatlante

tolis626 said:


> Hey everyone. Been checking out the new driver and I noticed this in HWiNFO64. It shows ASIC power maxing out at ~450W, which is nuts considering that I have a 330W PL in MPT and +0% added in Radeon Software. TGP maxes out at the stated 330W. Temps aren't worse than they should be, so I'm not too concerned, and my PSU hasn't blown up yet (YET), so I guess it's just the reading? I dunno. I'd appreciate some feedback.
> View attachment 2525141


Same at stock: 325W if I remember right. With TDC 420 and 55A it gets to 501W ASIC power; maxed at 600/600/65 it hits 533W, but barely the same temps.


----------



## cfranko

I just finished my custom loop with 5900x and 6900 xt and 2 30mm 360 radiators. The GPU sits at 55 edge and 70 hotspot at 350 watts power limit. Is the GPU temp a bit high?


----------



## Oversemper

cfranko said:


> I just finished my custom loop with 5900x and 6900 xt and 2 30mm 360 radiators. The GPU sits at 55 edge and 70 hotspot at 350 watts power limit. Is the GPU temp a bit high?


Congrats! 5800 + 6900xt at 400 watts in one loop with one 60mm 360 radiator; hotspot (aka junction) touches the upper 70s in stress testing. But I am on LM:
[Official] AMD Radeon RX 6900 XT Owner's Club
I think that your numbers are good.


----------



## cfranko

Oversemper said:


> Congrats! 5800 + 6900xt 400watt in one loop with one 60mm 360 radiator, hotspot (aka junction) in stress-testing touches upper 70th. But I am on a LM:
> [Official] AMD Radeon RX 6900 XT Owner's Club
> I think that your numbers are good.


I have thermal paste, not LM. Happy to hear my temps are fine. I was expecting edge temperatures around the mid 40s, but mid 50s is good enough I guess.


----------



## Neoki

cfranko said:


> I just finished my custom loop with 5900x and 6900 xt and 2 30mm 360 radiators. The GPU sits at 55 edge and 70 hotspot at 350 watts power limit. Is the GPU temp a bit high?


Man, I can only dream of a delta that sweet. I've repasted my Gigabyte Waterforce Extreme twice, and replaced the pads last go-around; still have a 35C delta no matter what.

2x480,1x240,1x120 rads fed by 2xXD5 Corsair Pumps.


----------



## ZealotKi11er

Neoki said:


> Man I can only dream of that kind of sweet delta. I've repasted my gigabyte waterforce extreme twice, and replaced the pads last go-around, still have a 35c delta no matter what.
> 
> 2x480,1x240,1x120 rads fed by 2xXD5 Corsair Pumps.


You could have a very high leakage part. What is your stock clock?


----------



## Neoki

ZealotKi11er said:


> You could have a very high leakage part. What is your stock clock?


Stock is 2544, max I can push without GT2 crash is 2750 @ 1175mv / 380watts (seems to be the sweet spot for temps/performance).


----------



## jhatfie

Just picked up an XFX MERC319 the other day. Very pleased so far with the performance. Without touching MPT, on stock air cooling with the normal bios in a 70F room, upping the power limit 15%, 2700mhz max frequency, 1050mv voltage, fast timing and 107% memory, it broke a 23k graphics score in Timespy.


----------



## tolis626

jhatfie said:


> Just picked up a XFX MERC319 the other day. Very pleased so far with the performance. Not messing with MPT, stock air cooling, normal bios in a 70F room and upping power limit up 15%, 2700mhz max frequency, 1050mv voltage, fast timing and 107% memory and it broke 23k graphics score in Timespy.
> 
> View attachment 2525234
> 
> 
> View attachment 2525237


God damn it, seems like everyone and their dog in this forum has a 6900XT that clocks better than mine. Whatever did I do wrong to deserve this? Did I piss off the PCMR gods? Was it blasphemy that I put a 3x120mm rad AIO on a 3800X? Was it greed that I sold my 5700XT for a profit to fund the 6900XT purchase? I need answers!

Seriously though, that's a sick system man. Very similar to mine, but more streamlined. Mine is a bit of a mess when it comes to cables, so I'm not showing it with the side panel open. Also, I just realized that your (probably) P500A is identical inside to my Evolv X. Anyway, wanted to comment on the GPU. I was always curious how the XFX card would look in a normal system, what with it not having RGB. And I have to say, it looks kinda good. Still, if it did have RGB, I would've gone for one of these.


----------



## lestatdk

tolis626 said:


> God damn it, seems like everyone and their dog in this forum has a 6900XT that clocks better than mine. Whatever did I do wrong to deserve this? Did I piss off the PCMR gods? Was it blasphemy that I put a 3x120mm rad AIO on a 3800X? Was it greed that I sold my 5700XT for a profit to fund the 6900XT purchase? I need answers!
> 
> Seriously though, that's a sick system man. Very similar to mine, but more streamlined. Mine is a bit of a mess when it comes to cables, so I'm not showing it with the side panel open. Also, I just realized that your (probably) P500A is identical inside to my Evolv X. Anyway, wanted to comment on the GPU. I was always curious how the XFX card would look in a normal system, what with it not having RGB. And I have to say, it looks kinda good. Still, if it did have RGB, I would've gone for one of these.


If it's any comfort my card probably sucks more than yours 

It might improve a little bit, because I encountered a massive blunder when we modified the water block, so I'll have to take it off and do some work on it.

With my luck it'll still OC like cr*p though


----------



## jhatfie

tolis626 said:


> God damn it, seems like everyone and their dog in this forum has a 6900XT that clocks better than mine. Whatever did I do wrong to deserve this? Did I piss off the PCMR gods? Was it blasphemy that I put a 3x120mm rad AIO on a 3800X? Was it greed that I sold my 5700XT for a profit to fund the 6900XT purchase? I need answers!
> 
> Seriously though, that's a sick system man. Very similar to mine, but more streamlined. Mine is a bit of a mess when it comes to cables, so I'm not showing it with the side panel open. Also, I just realized that your (probably) P500A is identical inside to my Evolv X. Anyway, wanted to comment on the GPU. I was always curious how the XFX card would look in a normal system, what with it not having RGB. And I have to say, it looks kinda good. Still, if it did have RGB, I would've gone for one of these.


I actually had a Reference model 6900XT that I traded for a while back. It would not overclock at all, so I sold it. I had a Nitro+ 6900XT that was a decent OC'er that I moved to my son's rig when I picked up a 3090FE to mess about with. But this XFX is a beast in comparison to my other two 6900XT's. So much so, I sold my 3090 as I do not mine and the performance of the 6900XT is great for my gaming needs. I do wish the XFX had RGB, but even without it, still looks pretty good. I may change around my color scheme to better match.


----------



## tolis626

lestatdk said:


> If it's any comfort my card probably sucks more than yours
> 
> It might improve a little bit , because I encountered a massive blunder when we modified the water block so will have to take it off and do some work to it.
> 
> With my luck it'll still OC like cr*p though


Well, my card will basically only go up to 2600MHz stable for 24/7 gaming and TimeSpy, so I can't imagine yours being much worse. I have yet to run TimeSpy successfully over 2600MHz and, while I can game at 2650-2700MHz, it will sometimes overboost momentarily and crash. I keep hoping that when I repaste it, it will improve because of the lower temperatures, but I can't see it clocking much higher, if at all. It sucks, I'm telling you!


jhatfie said:


> I actually had a Reference model 6900XT that I traded for a while back. It would not overclock at all, so I sold it. I had a Nitro+ 6900XT that was a decent OC'er that I moved to my son's rig when I picked up a 3090FE to mess about with. But this XFX is a beast in comparison to my other two 6900XT's. So much so, I sold my 3090 as I do not mine and the performance of the 6900XT is great for my gaming needs. I do wish the XFX had RGB, but even without it, still looks pretty good. I may change around my color scheme to better match.


Well, I'd love to have the cash on hand to try different cards, but sadly I don't. Although I wouldn't go NVidia, I've sworn off of their products a long time ago.

As for the XFX card, I think they missed on a lot of sales by not including RGB. I don't even mean having more lighting or turning it into a rainbow puke circus. No, no, no, same lit up parts, just with the ability to change the colors. As much as I despise how companies and techtubers show off RGB parts, what with rainbow puke all over the place, having the ability to change colors is actually great. I don't use any effects, just a solid color. For the time being I've settled on purple. Kind of by accident, in fact, I was going for blue and misclicked but I liked it purple and it stayed that way. But if tomorrow I want it to be yellow, I can do it and that's the beauty of it.

Here's a picture of my current rig. Excuse the cable mess and the poor picture quality, but I only have my phone on hand to take photos with.


----------



## kratosatlante

6900 XT ASRock Formula, stock clock 2594 MHz


----------



## ptt1982

kratosatlante said:


> 6900xt asrock formula stock clock 2594 mhz


I believe that's actually not the stock clock; that's the clock Adrenalin Tuning shows when you click "Manual." Stock is when you have it on "Preset" (two choices left of "Manual"). Nice card btw!


----------



## cfranko

You guys are lucky. My 6900 xt on a custom loop crashes at 2620 max frequency in Timespy; 2610 is stable. I can get a 23,000 graphics score but nothing more than that.


----------



## cfranko

I get worse scores at a 600W power limit than at 400W, and my cooling is good. Why does a higher power limit give me worse scores?


----------



## lestatdk

Pretty similar numbers we have. Also on a custom loop, but the waterblock needs to be modified a bit since I overlooked something very important when I first adapted it to this card.










Small chance it might improve, but who knows. With my luck it'll probably not change a thing  

max frequency in timespy 2630 and max in firestrike extreme 2655


----------



## cfranko

Is there any software-side way to unlock the core voltage of the GPU, I wonder? I know an EVC can unlock it but that is too complicated.


----------



## CS9K

cfranko said:


> Is there any software sided way to unlock the core voltage of the gpu I wonder, I know EVC can unlock it but that is too complicated


There is not, currently. The great folks over on Igor's Lab, hellm and others, have been exploring options, but so far there is no way to modify the bios on RDNA2 GPUs, which is one thing, among many, required to change voltage. Unfortunately, flashing a bios file from one RX 6900 XT model to another does not work either: each bios checks the hardware it is running on and will go into a 'safe' mode if the bios and GPU do not match.


----------



## J7SC

CS9K said:


> There is not, currently. The great folks over on Igor's Lab, Hellm and others, have been exploring options, but so far there is no way to modify the bios on RDNA2 GPU's, which is one thing, among many, that is required to change voltage. Unfortunately, flashing a bios file from one RX 6900 XT GPU model to another, does not work either, as each bios file checks the hardware that it is running on, and will go into a 'safe' mode if the bios and GPU do not match.


While more parameter-change options would be welcome, I am glad for the options MPT _does have already_, i.e. raising the PL (W) and TDC (A); those are the most important in my mind, especially with 3x8-pin and good w-cooling.

GPU voltage limit options would be nice, but if anything I undervolt a bit. The only thing I really still miss is the ability to bump VRAM beyond 2150 without sending the GPU into safe mode, as the VRAM has about 110 MHz left in the tank.


----------



## cfranko

J7SC said:


> While more parameter change options would be welcome, I am glad for the options MPT _does have already_, ie. raising the PL W and TDC A, those are the most important in my mind, especially with a 3x8 pin and good w-cooling.
> 
> GPU voltage limit options would be nice, but if anything, I undervolt a bit. The only thing I really still do miss is the ability to bump VRAM beyond 2150 w/o sending the GPU into safe mode as VRAM has about 110 MHz left in the tank.


My custom loop cools 430W easily, which is the wattage this GPU uses at maximum with the default maximum vcore. If vcore were unlocked I would have way more thermal headroom to overclock, so I think vcore is pretty important.
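For what it's worth, a rough back-of-the-envelope using the usual CMOS dynamic-power approximation (P scales with f and V squared). The helper function and the example numbers here are mine, purely illustrative; real cards also have leakage and memory/SoC rails that don't scale this way.

```python
# Rough dynamic-power scaling estimate for a GPU core: P ~ f * V^2.
# Illustrative only; ignores static/leakage power and other rails.

def scale_power(p_watts, f_old_mhz, f_new_mhz, v_old, v_new):
    """Estimate new core power from the classic CMOS approximation."""
    return p_watts * (f_new_mhz / f_old_mhz) * (v_new / v_old) ** 2

# Example: a 330 W core at 2500 MHz / 1.150 V pushed to 2700 MHz / 1.200 V
est = scale_power(330, 2500, 2700, 1.150, 1.200)
print(f"Estimated core power: {est:.0f} W")
```

By that estimate an ~8% clock bump plus ~4% more voltage already costs close to 60W extra, which is why an unlocked vcore eats thermal headroom so fast.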


----------



## CS9K

J7SC said:


> While more parameter change options would be welcome, I am glad for the options MPT _does have already_, ie. raising the PL W and TDC A, those are the most important in my mind, especially with a 3x8 pin and good w-cooling.
> 
> GPU voltage limit options would be nice, but if anything, I undervolt a bit. The only thing I really still do miss is the ability to bump VRAM beyond 2150 w/o sending the GPU into safe mode as VRAM has about 110 MHz left in the tank.


I agree! I too am _very_ thankful that hellm and others have made MPT what it is today. I can't complain about being able to turn my reference RX 6900 XT loose with a water block; more than can be said for Nvidia cards without some janky bios flashing.

I had an RX 5600 XT for my home-office machine for most of last year, and got to play around with bios modding. I have a great deal of experience tuning Samsung B-die main system memory on a few different systems, so I took what I knew and had a go at working out the timing relations and modifying the timings on the RX 5600 XT. While I couldn't get much more out of the memory on that GPU (due to temps, thanks to Gigabyte's mediocre cooler), I would LOVE to have bios modding ability to take another go at tuning GPU memory timings on my RX 6900 XT. Knowing that some/most RX 6900 XTs have a good deal of headroom in memory speed alone (past 2150MHz), I'm sure those of us with water cooling could push tREFI a decent bit, and maybe get tCL and some of the secondary and tertiary timings tightened up too... AFTER maxing out (unlocked) memory speeds, that is.

We'll get there... in time :3


----------



## gtz

Forgot to post my 6900XT results in this thread. Purchased a Nitro+ 6900XT and finished my build a few weeks ago. These new drivers are very impressive. With the power limit maxed, a fan curve, memory at 2150, and the power draw adjusted to 320 in MPT, the 6900XT gets around 22,500 in Time Spy. This card will eventually be watercooled and transferred to my other system with dual 420mm rads (build in progress). Right now I am just trying to keep the hot spot under 90, but I feel once I watercool it I should easily break 24k in Time Spy.


----------



## EastCoast

jhatfie said:


> I actually had a Reference model 6900XT that I traded for a while back. It would not overclock at all, so I sold it. I had a Nitro+ 6900XT that was a decent OC'er that I moved to my son's rig when I picked up a 3090FE to mess about with. But this XFX is a beast in comparison to my other two 6900XT's. So much so, I sold my 3090 as I do not mine and the performance of the 6900XT is great for my gaming needs. I do wish the XFX had RGB, but even without it, still looks pretty good. I may change around my color scheme to better match.


Is this the ultra variant?


----------



## CS9K

Welp, 21.9.1 was a bloody tragedy when I tried to get my VR on last night. Giving 21.8.2 a try now that it's gone WHQL... just want to be able to game and record without the HEVC-recording driver-crash-bug from 21.7.2.


----------



## Neoki

CS9K said:


> Welp, 21.9.1 was a bloody tragedy when I tried to get my VR on last night. Giving 21.8.2 a try now that it's gone WHQL... just want to be able to game and record without the HEVC-recording driver-crash-bug from 21.7.2.
> View attachment 2525430


That might explain why I was completely unstable 2 nights ago when doing some game recording in Star Citizen. Going to try 21.8.2 as well.


----------



## UnFou02

Hello, I have a 6900 XT. I think I have a good one and I want to overclock the VRAM beyond 2150 MHz; the problem is that when I go above 2150 the card drops to 500 MHz on the GPU clock. Do you know if we can override this limit?
On the VRAM I was able to climb to 2260 MHz stable without overvolting, but with the core clock stuck at 500 MHz.

To give a little idea of what I managed to do, it is still stock for cooling, just MPT with slightly overclocked dclk, fclk, soc and vclk and a 350W PL.
With drivers 21.8.2
I scored 21,108 in Time Spy


----------



## cfranko

UnFou02 said:


> Hello, I have a 6900 XT. I think I have a good one and I want to overclock the VRAM beyond 2150 MHz; the problem is that when I go above 2150 the card drops to 500 MHz on the GPU clock. Do you know if we can override this limit?
> On the VRAM I was able to climb to 2260 MHz stable without overvolting, but with the core clock stuck at 500 MHz.
> 
> To give a little idea of what I managed to do, it is still stock for cooling, just MPT with slightly overclocked dclk, fclk, soc and vclk and a 350W PL.
> With drivers 21.8.2
> I scored 21,108 in Time Spy
> View attachment 2525487


The card goes into safe mode when you try to exceed 2150 MHz and there is no way to bypass it, if I remember correctly.


----------



## UnFou02

cfranko said:


> The card goes into safe mode when you try to exceed 2150 MHz and there is no way to bypass it, if I remember correctly


Tonight I will try to set the minimum frequency to 2400 MHz in MPT to see if that can override the safe mode. We'll see.


----------



## lestatdk

UnFou02 said:


> Tonight I will try to set the minimum frequency to 2400 MHz in MPT to see if we can override the safe mode, I will see


Can't be done. Not yet anyway. Hope there will be a fix some day


----------



## kairi_zeroblade

Anyone noticed the new drivers' low fps dipping a bit too much (compared to 21.7.x)? The GPU also uses more power than usual. Could be just me, though I am running the same exact OC and settings in Wattman.


----------



## UnFou02

kairi_zeroblade said:


> Anyone noticed the new drivers' low fps dipping a bit too much (when compare to 21.7.x)?? also GPU uses more power than usual..could be just me, though I am running the same exact OC and settings in Wattman..


I haven't seen it consume more than usual, nor any loss of fps in the games I currently play.


----------



## gtz

UnFou02 said:


> Hello, I have a 6900 XT. I think I have a good one and I want to overclock the VRAM beyond 2150 MHz; the problem is that when I go above 2150 the card drops to 500 MHz on the GPU clock. Do you know if we can override this limit?
> On the VRAM I was able to climb to 2260 MHz stable without overvolting, but with the core clock stuck at 500 MHz.
> 
> To give a little idea of what I managed to do, it is still stock for cooling, just MPT with slightly overclocked dclk, fclk, soc and vclk and a 350W PL.
> With drivers 21.8.2
> I scored 21,108 in Time Spy
> View attachment 2525487


Just curious, what are your timings for your quad 16GB sticks? I've never really seen four 16GB sticks above 3600; I max out at 3666 with my four 16GB dual-rank B-die sticks. I can only assume you are running 1:1 with the FCLK as well.


----------



## gtz

kairi_zeroblade said:


> Anyone noticed the new drivers' low fps dipping a bit too much (when compare to 21.7.x)?? also GPU uses more power than usual..could be just me, though I am running the same exact OC and settings in Wattman..


The latest ones have weird stuttering in games. Not bad, but it is noticeable; I rolled back a driver and the stutters went away. I think they conflict with my monitor; another driver did the same and did not play nice with it. 3DMark does not notice anything and still scores similarly to the other driver, so it is not all about benchmarks.


----------



## kairi_zeroblade

gtz said:


> The latest ones have weird stuttering in games. Not bad but it is noticable, I rolled back a driver and the stutters went away. I think they conflict with my monitor. Another driver did the same and did not play nice with my monitor. 3DMark does not notice anything and still score similar to the other driver, so it is not all about benchmarks.


Yeah, stuttering and drops for me are everywhere. Never had them on my previous driver install (stayed on 21.7.x); just noticed them on my 21.9.1 install.


----------



## UnFou02

gtz said:


> Just curious what are your timings for your quad 16gb sticks? Never really seen 4 16gb sticks above 3600. I max out at 3666 with my 4 16gb dual rank b die sticks. I can only assume you are running 1:1 with the FCLK as well.


So, my RAM is a little bit complex: I have two different kits, a 2x16GB kit with Micron E-die chips in dual rank and a 2x16GB kit with Micron B-die chips in single rank.
As for the timings, stock is 3600 18-22-22-42; they now run at 3800 16-20-20-36 (36 or 38, I don't remember anymore).
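Quick sanity check on why the 3800 profile wins (the helper function is mine; the inputs are just the two profiles above): first-word CAS latency in nanoseconds is CL divided by the memory clock, i.e. CL * 2000 / MT/s.

```python
# First-word CAS latency in ns: CL cycles / (data rate / 2).
# DDR transfers twice per clock, so the clock in MHz is MT/s / 2.

def cas_ns(mt_s, cl):
    """Return the CAS delay in nanoseconds for a DDR kit."""
    return cl * 2000 / mt_s

print(f"3600 CL18: {cas_ns(3600, 18):.2f} ns")  # 10.00 ns
print(f"3800 CL16: {cas_ns(3800, 16):.2f} ns")  # 8.42 ns
```

DDR4-3800 also means a 1900 MHz memory clock, which would indeed be 1:1 with FCLK 1900.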


----------



## EastCoast

Can someone tell me which brand offers the best performance for the 6900 XT, silicon lottery aside?

It seems to me that this generation the Merc 319 Black (not the Ultra) is the best card, offering the lowest temps and highest clocks.
Then come:
-Strix... dropped a peg due to the higher price
-Nitro SE (Special Edition)
-Taichi

Then there are the XTXH variants that come with 3 PCIe connectors, which cost more and are usually watercooled.


----------



## J7SC

EastCoast said:


> Can someone tell me which brand offers the best performance for the 6900 XT, silicon lottery aside?
> 
> It seems to me that this generation the Merc 319 Black (not the Ultra) is the best card, offering the lowest temps and highest clocks.
> Then come:
> -Strix... dropped a peg due to the higher price
> -Nitro SE (Special Edition)
> -Taichi
> 
> Then there are the XTXH variants that come with 3 PCIe connectors, which cost more and are usually watercooled.


...Some regular (non-XTXH) cards also come with 3x8-pin PCIe, and with air cooling. I have a Gigabyte 6900XT OC and below are its stock results from day 1 after the first hard benching (no MPT, air cooling). Converted it to w-cooling just the same, to go for an extra 100+W PL.


----------



## EastCoast

@J7SC 


> Some regular (non-xtxh) also come with 3x8 pin pcie


Which ones? 



> and also air-cooling


I would think that non xtxh would be air cooled


----------



## tolis626

EastCoast said:


> @J7SC
> 
> Which ones?
> 
> 
> I would think that non xtxh would be air cooled


Well, the Nitro+ SE doesn't come with 3x8-pin, but it does come with 2x8 + 1x6 pin connectors. That said, mine is a poor overclocker, and when you crank the power limit and thus the fan speed it gets quite noisy without providing amazing temperatures. At stock it's quiet, but again, not really cool.


----------



## cfranko

EastCoast said:


> Which ones?


I have an ASRock Phantom Gaming XTX card, normally air cooled, which has 3x8-pin on it; I installed a waterblock. However, the 3x8-pin is useless because the silicon on this card is terrible and it doesn't pull that much power. 2700 MHz on this card crashes in games.


----------



## J7SC

EastCoast said:


> @J7SC
> 
> Which ones?
> (...)


...in my post above (Gigab OC)


----------



## EastCoast

Thanks...
Hmm, it appears the XTX variants aren't that attractive at their price point. I assume those are the ones with 3 PCIe connectors. They aren't that much better than the XT due to cooling and OC potential, and they just consume more power for the price... that's the gist of it, I assume. Unless you plan on getting a waterblock for those GPUs.

So, it's better to just go WB XTXH, or save a bit of coin and get the regular XT air cooled.


----------



## J7SC

EastCoast said:


> Thanks...
> Hmm, it appears that the xtx variants aren't that attractive at the price point. I assume those are the ones with a 3 pcie connectors. And, aren't that much better then xt do to cooling, OC potential and just consumers more power, price...at the gist of it...I assume. Unless you plan on getting a waterblock for those gpus.
> 
> So, its better to just go WB xtxh. Or save a bit of coin and get the regular xt air cooled.


...per the HWInfo tab posted above already, I couldn't be happier with the performance, not least as I got mine below the (old) MSRP; it went up by $200 a couple of weeks later  Also as posted above, I w-cooled it anyway (like all my GPUs) for increased PL.

I found this model attractive as it has the same PCB / 3x8-pin / dual bios as the factory w-cooled XTXH but for a lot less $s... it is my daily work-machine slogger and is mounted right next to a w-cooled 3090 Strix I tend to use more for certain games, i.e. those with ray tracing... best of both worlds for work-and-play machines.


----------



## Oversemper

Since a lot of people here complain about how badly their 6900 clocks, why not do the same?
This is the absolute best I've achieved with my Liquid Devil (non-Ultimate edition):








Draws 390 watts at maximum; junction never exceeds 75C (repasted with LM). A frequency of 2640/50 already crashes.
Tried lowering GFX and SoC voltages; it only introduces instability. It needs MORE voltage to survive higher clocks.


----------



## chispy

For those interested in FC H2O blocks for the ASRock OCF 6900xtx: Bykski just made one and it is available. I received mine today; testing later. No imperfections (unlike EK), a nice heavy build, and the backplate is very thick, making contact with the hot spots on the back. Extra thermal pads are also included, plus a screwdriver. I like it; we'll see how it performs later.


109.51US $ 15% OFF|Bykski GPU Block For Asrock Radeon RX6900XT OC Formula 16G , With Backplate GPU Water Cooling Cooler, A AR6900XTOCF X|Fans & Cooling| - AliExpress


----------



## rodac

EastCoast said:


> Can someone tell me what brand offers the best performance for the 6900xt. Besides the silicon lottery?
> 
> It seems to me that this generation the Merc 319 Black (not the ultra) is the best card. Offering the lowest temps and highest clocks.
> Then comes:
> -Strix...dropped a peg do to the higher price
> -Nitro SE (Special Edition)
> -Tiachi
> 
> Then there is the xtxh variants that come with 3 pcie connectors. Which cost more and usually are watercooled.


Go for Sapphire; their Trixx resolution enhancement is very good, like AMD FSR but usable in all games.


----------



## cfranko

Oversemper said:


> Since a lot of people here complain how bad their 6900 clocks why not me do the same?
> This is the absolute best I've achieved with my Liquid Devil (non-ultimate edition):
> View attachment 2525582
> 
> Draws 390 watts at maximum, junction never exceeds 75 C (repasted with LM), freq. of 2640/50 already crashes.
> Tried lowering GFX and SoC voltages, only introduces instability. It needs MORE voltage to survive higher clock.


That is exactly the same as my card. The maximum I can do is 2630 MHz on the frequency; 2635 crashes. I also get 23,000 graphics points.


----------



## ZealotKi11er

cfranko said:


> That is the exact same as my card. The maximum I can do is 2630 MHz on the frequency, 2635 crashes. I also get 23.000 graphics points.


Still much higher than stock which is like 23xx.


----------



## cfranko

ZealotKi11er said:


> Still much higher than stock which is like 23xx.


Mine boosts to 2450 sometimes at stock


----------



## ZealotKi11er

cfranko said:


> Mine boosts to 2450 sometimes at stock


In TimeSpy GT2?


----------



## cfranko

ZealotKi11er said:


> In TimeSpy GT2?


Not in GT2, only in games it boosts that high.


----------



## CS9K

21.9.2 is out! That took 8 days, heh. I suppose they knew 21.9.1 was a _hot_ mess.

I'll give 21.9.2 a proper flogging and report in later today.


----------



## lestatdk

Wow, that was quick. Just uninstalled 21.9.1 because it was crap.


----------



## Maulet//*//

thanks CS9K!


----------



## Nicklas0912

I promised to tell which card I bought: I got the RX 6900 XT Strix LC T16GB with the XTXH chip for 1300 Euro here in Denmark, 2525 MHz stock boost. Will do some overclocking on it tomorrow.

MSRP for a reference RX 6900 XT here in Denmark is 1049 Euro, so I'm quite happy with the price since it's the top model.


----------



## rodac

Nicklas0912 said:


> I promish to tell wich card I bought, and I got the RX 6900 XT Strix LC T16GB with the XTXH chip for 1300 Euro here in Denmark, 2525Mhz stock boost, will do some overclocking on it tomrrow.
> 
> MSRP for a ref RX 6900 XT here in Denmark is 1049 Euro, so im quite happy with the price since is the top model.


very good price


----------



## CS9K

21.9.2 _appears_ to behave itself with VR Chat now, using my Vive Pro + Vive Wireless. I'll give it more thorough testing with OBS and other stuff running this week.


----------



## OC-NightHawk

I'm trying hard to reach 20000 with my Ryzen 5950X and Gigabyte RX 6900 XT Xtreme Waterforce. I'm less than 100 points away, and I haven't done anything with More Power Tool, overclocked my RAM, or done anything more than PBO on the CPU. AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,Micro-Star International Co., Ltd. PRESTIGE X570 CREATION (MS-7C36) (3dmark.com) It seems this is the limit, though, without more power.


----------



## gtz

OC-NightHawk said:


> I'm trying hard to reach 20000 with my Ryzen 5950X and Gigabyte RX 6900 XT Xtreme Waterforce. I'm less then 100 points away and only have not done anything with More Power Tool or overclocked my RAM or done anything more then PBO on the CPU. AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,Micro-Star International Co., Ltd. PRESTIGE X570 CREATION (MS-7C36) (3dmark.com) It seems this is the limit though without more power.


Disable hyperthreading on the 5950X; that should net you 16000+ on the CPU test and help you reach a 20k overall score.

Also, MPT is fairly easy, and since you are already on a waterblock you should be able to achieve 23k+ on the graphics score.
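For context on why the CPU test matters here: to my understanding of UL's technical guide, the Time Spy overall score is a weighted harmonic mean of the Graphics and CPU scores with weights of roughly 0.85/0.15 (treat the exact weights as an assumption). A quick sketch:

```python
# Hedged sketch of the Time Spy overall score: a weighted harmonic mean of
# the Graphics and CPU sub-scores. The 0.85/0.15 weights are my reading of
# UL's technical guide, not a guaranteed spec.

def time_spy_overall(graphics: float, cpu: float,
                     w_gfx: float = 0.85, w_cpu: float = 0.15) -> float:
    """Weighted harmonic mean of the two sub-scores."""
    return 1.0 / (w_gfx / graphics + w_cpu / cpu)

# A ~21000 graphics score still needs a healthy CPU score to clear 20k overall:
print(round(time_spy_overall(21000, 12000)))  # ~18876
print(round(time_spy_overall(21000, 16000)))  # ~20060
```

This is why a 16k CPU score pushes the same graphics result past the 20k line.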


----------



## OC-NightHawk

gtz said:


> Disable Hyperthreading on the 5950X, should net you 16000+ on the CPU. Should help you reach 20k overall score.
> 
> Also MPT is fairly easy, since you are already on waterblock you should be able to easily archive 23k + on the graphics score.


I updated my driver to version 21.8.2, and when I tested it my card was able to maintain its speed a little better while also pushing the memory speed up. I got a little past 21000. AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,Micro-Star International Co., Ltd. PRESTIGE X570 CREATION (MS-7C36) (3dmark.com) 

Is there a way to disable hyperthreading without rebooting, or is it something that has to be done from the BIOS?


----------



## cfranko

gtz said:


> Disable Hyperthreading on the 5950X, should net you 16000+ on the CPU. Should help you reach 20k overall score.
> 
> Also MPT is fairly easy, since you are already on waterblock you should be able to easily archive 23k + on the graphics score.


I didn’t disable hyperthreading on my 5900X because I didn’t know that I had to disable it to reach 16K. Do I keep my Curve optimizer enabled when disabling hyperthreading?


----------



## gtz

cfranko said:


> I didn’t disable hyperthreading on my 5900X because I didn’t know that I had to disable it to reach 16K. Do I keep my Curve optimizer enabled when disabling hyperthreading?


Don't disable it on a 5900X; regular Time Spy only tanks with 32 threads or more.


----------



## cfranko

gtz said:


> Don't disable on a 5900X, regular time spy only tanks itself with 32 threads or more.


Then how do people get 16K with a 5900X? My memory is tuned and I have Curve Optimizer, but I still can't get past a 15,100 physics score.


----------



## lestatdk

cfranko said:


> Then how do people get 16K with a 5900X? My memory is tuned and I have curve optimizer, still can’t get past 15.100 physics score.


My score improved by 1500 points or so by going to 3733 MHz with tight timings instead of 4000 MHz. My CPU will not do 1933-2000 FCLK without WHEA errors, and I can't boot at 1900.

Are your MCLK, FCLK and UCLK in sync? What are your timings?

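For anyone new to the "in sync" question: on Zen 3 with DDR4, 1:1:1 means FCLK and UCLK match the memory clock, which is half the DDR transfer rate. A minimal sketch (helper name is mine, not from any tool):

```python
# Sketch of the 1:1:1 coupled-clock relationship on Zen 3 with DDR4:
# MCLK is half the DDR transfer rate, and "in sync" means FCLK and UCLK
# run at the same frequency as MCLK.

def coupled_clocks(ddr_rate_mts: int) -> dict:
    """Return the MCLK/FCLK/UCLK values for a 1:1:1 configuration."""
    mclk = ddr_rate_mts // 2
    return {"mclk": mclk, "fclk": mclk, "uclk": mclk}

print(coupled_clocks(3800))  # {'mclk': 1900, 'fclk': 1900, 'uclk': 1900}
print(coupled_clocks(3733))  # 3733 -> 1866 (integer division)
```

So DDR4-3800 wants 1900 FCLK, and dropping to 3733 relaxes the fabric to ~1866, which is why it sidesteps the WHEA errors above.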

----------



## koji

Repasted my ASRock 6900XT Phantom D with Gelid yesterday; worked great. There was almost no paste on that thing stock... was able to shave off 9°C...











Stock application:










I mean, you'd think they'd at least properly paste a €1600 card in the factory...


----------



## kairi_zeroblade

cfranko said:


> Then how do people get 16K with a 5900X? My memory is tuned and I have curve optimizer, still can’t get past 15.100 physics score.


A static OC to 4.6 GHz and above (instead of PBO). Tried it on mine; I got almost 16k, 15.8xx-ish..

EDIT:
Finally the stuttering is gone!! *** AMD.. just installed the new driver that's out, and it seems to give a decent 3DMark score as well.. Cheers!!


----------



## gtz

cfranko said:


> Then how do people get 16K with a 5900X? My memory is tuned and I have curve optimizer, still can’t get past 15.100 physics score.


You need an all-core overclock of 4.6 and above. On my first 5900X I had 3x 360 mm rads for the CPU (the same setup cooled a 9980XE) and OC'd all-core to 4.725 at 1.31 vcore, scoring around 15,800-15,900. I also had single-rank B-die running 3800 CL14. 

I'm currently building my second water cooling case just to chase benchmarks again. 

Dual 420 mm rads just for the CPU.


----------



## cfranko

lestatdk said:


> My score improved 1500 pts or so, by going to 3733 MHz with tight timing instead of 4000 MHz. My CPU will not do 1933-2000 fclk without WHEA errors. And I can't boot at 1900.
> 
> Is your mclk,fclk and uclk in sync ? What are your timings ?


Yeah, I run 1900 FCLK and it's stable. RAM frequency is 3800. I'm gonna try an all-core overclock today.

Timings:


http://imgur.com/a/hClwrEy


----------



## cfranko

koji said:


> Repasted my Asrock 6900XT Phantom D with Gelid yesterday, worked great, there was almost no paste on that thing stock... Was able to shave 9°C...
> 
> View attachment 2525739
> 
> 
> 
> Stock application:
> 
> View attachment 2525740
> 
> 
> I mean, you'd think they'd atleast properly paste a €1600 card in the factory...


Those are incredible temps for the Phantom D. I also have a Phantom D, and with the air cooler I would see a 105°C hotspot temperature at 330 watts, repasted with MX-4. Now I have a waterblock on it and temps are much better. I guess the air cooler had some issue with the mounting pressure on the die, so it was not making proper contact.


----------



## cfranko

gtz said:


> You need an all core overclock of 4.6 and above. My first 5900X I had 3 360mm rads for the CPU (same setup cooled a 9980XE) and OC'd all core to 4.725 at 1.31 vcore and scored around 15,800-15,900. I also had single rank b die running 3800 CL14.
> 
> I'm currently building my second water cooling case just to chase benchmarks again.
> 
> Dual 420mm rads just for the CPU.
> 
> View attachment 2525742


The 5th core is terrible on my CPU. 4.725 all-core may not be stable just because of the 5th core. Is it possible to do a per-core overclock?


----------



## gtz

cfranko said:


> The 5th core is terrible on my cpu. 4.725 all core may not be stable just because of the 5 th core. Is it possible to do a per core overclock?


No, just per CCX. Drop that CCX to 4.65 and leave the good CCX at 4.7. Ryzen becomes extremely inefficient past 4.5-4.6. The best overclocker I had was the 5800X; it could do 4.8, and I didn't know what I had until I sold it. It did 2000 FCLK for benches as well.


----------



## cfranko

gtz said:


> No just CCX. Drop that CCX to 4.65 and leave the good CCX at 4.7. Ryzen becomes extremely inefficient past 4.5-4.6. The best overclocker I had was the 5800X, that could do 4.8, did not know what I had until I sold it. Did 2000FCLK to run benches as well.l


My CPU gives WHEA errors at anything above 1900 FCLK, which is pretty disappointing. 2033 FCLK seems somewhat stable, because it can do Prime95 large FFTs for 2-3 minutes without WHEA errors, but eventually it throws them. With some Infinity Fabric voltage tuning I may get 2033 stable, but I have to look into that.


----------



## lestatdk

cfranko said:


> Yeah I run 1900 FCLK and the fclk is stable. Ram freq is 3800. I am gonna try all core overclock today.
> 
> Timings:
> 
> 
> http://imgur.com/a/hClwrEy


Your TRFC seems a bit high. Did you use the DRAM calculator ?

Here's my timings










It lowered my latency in Aida64 from 73 to 54 or so. Also boosted my CPU score in TS by a lot.


----------



## cfranko

lestatdk said:


> Your TRFC seems a bit high. Did you use the DRAM calculator ?
> 
> Here's my timings
> 
> View attachment 2525744
> 
> 
> It lowered my latency in Aida64 from 73 to 54 or so. Also boosted my CPU score in TS by a lot.


I didn't use DRAM Calculator; these are my own timings. The tRFC is high because I have 4 DIMMs of Micron Rev. E die, and it absolutely hates low tRFC.
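A small aside on why the "right" tRFC differs between kits and frequencies: the DRAM die needs a fixed refresh *time* in nanoseconds, while the BIOS value is entered in memory-clock cycles, so the cycle count has to scale with frequency. A minimal conversion sketch (helper name is mine):

```python
# tRFC is configured in memory-clock cycles, but the DRAM actually needs a
# fixed time in nanoseconds. For DDR4, one memory-clock cycle lasts
# 2000 / (transfer rate in MT/s) nanoseconds.

def trfc_ns(trfc_cycles: int, ddr_rate_mts: int) -> float:
    """Convert a tRFC cycle count at a given DDR4 rate into nanoseconds."""
    return trfc_cycles * 2000.0 / ddr_rate_mts

# e.g. tRFC 650 at DDR4-3800 is ~342 ns of refresh time; dies that need a
# larger time budget force a higher cycle count at the same frequency.
print(round(trfc_ns(650, 3800), 1))  # 342.1
```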


----------



## koji

cfranko said:


> Those are incredible temps for the Phantom D, I also have a Phantom D and with the air cooler I would see 105 hotspot temperature at 330 watts repasted with MX-4. Now I have a waterblock on it and temps are much better. I guess the air cooler had some issue with the pressure on the die I so it was not making proper contact


Yeah, pretty happy with the results. I've liked the card since the day I got it, but it's been a constant struggle balancing fan noise vs framerate... Those temperatures are after 5 loops of Time Spy graphics test 2, btw.

Those 3000 RPM fans on the Phantom are bonkers...


----------



## lestatdk

cfranko said:


> I didn’t use DRAM Calculator, these are my own timings. The tRFC is high because I have 4 dimms of Micron Rev. E Die and it absolutely hates low tRFC


Ah, OK. Yeah, that makes sense. 4 sticks are not easy to run. Mine are Samsung B-die and I only have 2 sticks, so it's a lot easier.


----------



## cfranko

lestatdk said:


> Ah, OK. Yeah that makes sense. 4 sticks are not easy to run. Mine are Samsung B -die and I only have 2 sticks so a lot easier.


Your tRCDRD, tRCDWR and tRP are high for B-die; it should be able to do 3733 14-14-14.


----------



## lestatdk

cfranko said:


> You tRCDRD and tRCDWR and tRP is high for B Die it should be able to do 3733 14-14-14


Unfortunately, that's as low as it wants to go. It's not the best-binned kit; it's 16-19-19-39 at 4000. The better kits were unavailable at the time 😕


----------



## Nicklas0912

First Time Spy with my new 6900 XT. That is a fine score for stock settings, no OC..









I scored 20 262 in Time Spy


AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## J7SC

cfranko said:


> You tRCDRD and tRCDWR and tRP is high for B Die it should be able to do 3733 14-14-14





lestatdk said:


> Unfortunately as low as it wants to go. It's not the best binned kit , it's 16-19-19-39 at 4000 . The better kits were unavailable at the time 😕


RAM optimization and related testing (+ more testing) can be both tedious and irritating. It certainly helps, though, if you can work with a good set of Samsung B-die (4x8 DIMMs below). IMO though, TS/TSE et al. will reward you even if you're not absolutely maxed on RAM and are running a fixed OC (or dynamic OC per some mobos). BTW, ignore the '2000/4000' on the right; I was just testing whether it would run at all and need a rainy weekend to fully test / lock it in.


----------



## CS9K

I've made a few posts myself over in the X570 Aorus owner's thread regarding memory tuning and timings, if anyone is interested in another data point.









(Gigabyte X570 AORUS Owners Thread)


what do you guys set your soc LLC to? with ryzen 5000. My motherboard defaults to soc 1.2 with LLC auto. I am stable with 1.15 LLC auto. Now testing 1.125 with soc LLC medium.




www.overclock.net














----------



## J7SC

CS9K said:


> I've made a few posts myself over in the X570 Aorus owner's thread regarding memory tuning and timings, if anyone is interested in another data point.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> (Gigabyte X570 AORUS Owners Thread)
> 
> 
> what do you guys set your soc LLC to? with ryzen 5000. My motherboard defaults to soc 1.2 with LLC auto. I am stable with 1.15 LLC auto. Now testing 1.125 with soc LLC medium.
> 
> 
> 
> 
> www.overclock.net
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 


Good stuff! Re. the Zen timings sheet, I should have mentioned in my post before that I'm using an Asus X570 CH8 (WiFi and 'Dark').


----------



## jonRock1992

gtz said:


> No just CCX. Drop that CCX to 4.65 and leave the good CCX at 4.7. Ryzen becomes extremely inefficient past 4.5-4.6. The best overclocker I had was the 5800X, that could do 4.8, did not know what I had until I sold it. Did 2000FCLK to run benches as well.l


The 5800X that I'm using now is also stable at 4.8 GHz all-core at 1.375V. I'm also running it at 2000MHz FCLK 24/7 with 16-16-16-16-32-48 RAM timings. Seems we had a similarly binned CPU. At one point before a BIOS update I was running 2067MHz FCLK 24/7. But after the BIOS update, it made it unstable at 2067MHz FCLK, and reverting the BIOS didn't help. Not sure what happened, but I was pretty pissed about it.


----------



## gtz

jonRock1992 said:


> The 5800X that I'm using now is also stable at 4.8 GHz all-core at 1.375V. I'm also running it at 2000MHz FCLK 24/7 with 16-16-16-16-32-48 RAM timings. Seems we had a similarly binned CPU. At one point before a BIOS update I was running 2067MHz FCLK 24/7. But after the BIOS update, it made it unstable at 2067MHz FCLK, and reverting the BIOS didn't help. Not sure what happened, but I was pretty pissed about it.


Yeah it was solid, I miss it. I bought it on launch (everything else was already sold out) and assumed all chips would act the same. Little did I know I had a great sample.


----------



## jonRock1992

gtz said:


> Yeah it was solid, I miss it. I bought it on launch (everything else was already sold out) and assumed all chips would act the same. Little did I know I had a great sample.


I got mine on launch day too! I was the first person to get one from my local micro center lol.


----------



## Nicklas0912

Anyone know if the "AMD more power tool" works for Asus Strix 6900 XT? just want to raise TDP.


----------



## J7SC

Nicklas0912 said:


> Anyone know if the "AMD more power tool" works for Asus Strix 6900 XT? just want to raise TDP.


...I don't have that Asus card, but I think MPT works on pretty much any Big Navi, as the values are actually written into the Windows registry. BTW, once you load a different/new AMD driver you might have to re-apply your MPT settings, so it's best to save your favourite MPT profile.
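For what it's worth, a sketch of the registry location this refers to (the value name and key layout below are my understanding of how MPT works, not official documentation), which also explains why settings need re-applying after a driver install:

```python
# Hedged sketch: MPT stores a binary "PP_PhmSoftPowerPlayTable" value under
# the display-adapter class key in the Windows registry (value name and
# layout are my understanding of MPT, not official docs). A clean driver
# install gets a fresh adapter subkey, so the table has to be re-applied.

DISPLAY_CLASS_GUID = "{4d36e968-e325-11ce-bfc1-08002be10318}"  # standard display class

def mpt_key_path(adapter_index: int) -> str:
    """Build the HKLM-relative registry path for a given adapter subkey."""
    return (r"SYSTEM\CurrentControlSet\Control\Class"
            + "\\" + DISPLAY_CLASS_GUID + "\\" + f"{adapter_index:04d}")

print(mpt_key_path(0))
# On Windows you would open it with:
#   winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, mpt_key_path(0))
```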


----------



## 99belle99

Got this score on a reference card with stock cooler.

I scored 19 959 in Time Spy


----------



## yns44

(screenshot: gyazo.com)





6900XT Nitro+ SE
Repasted with Gelid GC Extreme

In-game in "The Ascent", the junction temperature just instantly jumps to 97-99°C, making the fans spin like crazy...

I've always felt that this card is running way too hot.


----------



## jhatfie

EastCoast said:


> Is this the ultra variant?


Nope. Just the regular black model.


----------



## EastCoast

jhatfie said:


> Nope. Just the regular black model.


Very interesting, then. I take it you are happy with it. Do you wish it had more RGB puke, though?


----------



## marcoschaap

Nicklas0912 said:


> Anyone know if the "AMD more power tool" works for Asus Strix 6900 XT? just want to raise TDP.


To my knowledge it works for all 6000-series GPUs. It's not AIB dependent.


----------



## jhatfie

EastCoast said:


> very interesting then. I take it you are happy with it. Do you wish it had more rbg puke though?


Oh yeah, the card is fantastic. Do I wish I had the choice to change the light colors? Sure. But it's not that big a deal.


----------



## ptt1982

Results from testing 21.9.2 drivers vs 21.8.1 using MPT 1.3.7 beta10:

-TS/Doom Eternal Edge/Junction 2-4C lower
-TS passes with Core Clock +5mhz
-TS GPU scores around +75
-No stability issues in: TS, Doom, Nier Replicant, Nioh 2

Above all tested within an hour, room/ambient temp the same.
Incremental, roughly 0.3% more performance with slightly lower temps.
Plus, support for more games, so I'll take it.

Happy to also report that edge/junction delta nowadays stays under 30C even at 400W. Maybe it's the drivers, maybe it's the GPU holder re-placement, maybe it's the weather, dunno, but I'll take it. Last time my temps were creeping higher after two weeks, but now after two months they are much lower and stay that way. So it's less testing and more: launch a game --> turn graphics to 4K60 Max HDR --> enjoy games!!
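A trivial sketch of the delta check described above (the 30°C threshold is this thread's rule of thumb for a healthy mount, not an AMD spec): a widening edge-to-junction gap at the same wattage usually points to pump-out or a mounting problem.

```python
# Edge-to-junction delta as a paste-health proxy. The 30C threshold is a
# forum rule of thumb (my assumption as a default), not an AMD specification.

def paste_looks_ok(edge_c: float, junction_c: float,
                   max_delta: float = 30.0) -> bool:
    """True if the junction-minus-edge delta is within the threshold."""
    return (junction_c - edge_c) <= max_delta

print(paste_looks_ok(68, 95))   # True  (27C delta, healthy mount)
print(paste_looks_ok(70, 110))  # False (40C delta -> consider remount/repaste)
```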


----------



## Nicklas0912

I'm now starting to do some overclocking and have already modified the BIOS. I raised the TDP from a total of 381 W to 425 W, and that did a lot. Still in the testing phase; I'm aiming for 25,000.


----------



## jonRock1992

ptt1982 said:


> Results from testing 21.9.2 drivers vs 21.8.1 using MPT 1.3.7 beta10:
> 
> -TS/Doom Eternal Edge/Junction 2-4C lower
> -TS passes with Core Clock +5mhz
> -TS GPU scores around +75
> -No stability issues in: TS, Doom, Nier Replicant, Nioh 2
> 
> Above all tested within an hour, room/ambient temp the same.
> Incremental, roughly 0.3% more performance with slightly lower temps.
> Plus, support for more games, so I'll take it.
> 
> Happy to also report that edge/junction delta nowadays stays under 30C even at 400W. Maybe it's the drivers, maybe it's the GPU holder re-placement, maybe it's the weather, dunno, but I'll take it. Last time my temps were creeping higher after two weeks, but now after two months they are much lower and stay that way. So it's less testing and more: launch a game --> turn graphics to 4K60 Max HDR --> enjoy games!!


Great news! Hopefully I can get +5MHz as well. Then I can run 2700/2800 in Time Spy and maybe get that 25k GPU score. Might test it out tonight.


----------



## LtMatt

ptt1982 said:


> Results from testing 21.9.2 drivers vs 21.8.1 using MPT 1.3.7 beta10:
> 
> -TS/Doom Eternal Edge/Junction 2-4C lower
> -TS passes with Core Clock +5mhz
> -TS GPU scores around +75
> -No stability issues in: TS, Doom, Nier Replicant, Nioh 2
> 
> Above all tested within an hour, room/ambient temp the same.
> Incremental, roughly 0.3% more performance with slightly lower temps.
> Plus, support for more games, so I'll take it.
> 
> Happy to also report that edge/junction delta nowadays stays under 30C even at 400W. Maybe it's the drivers, maybe it's the GPU holder re-placement, maybe it's the weather, dunno, but I'll take it. Last time my temps were creeping higher after two weeks, but now after two months they are much lower and stay that way. So it's less testing and more: launch a game --> turn graphics to 4K60 Max HDR --> enjoy games!!


**** like this is why you were pre-nominated for testing.


----------



## cfranko

21.9.2 is a really weird driver; my power consumption jumps everywhere. I have a 400-watt power limit set, and it suddenly jumps to 360 watts, then drops to 250 watts, then jumps back up to 350 watts. Interesting. Temps are a tiny bit lower because of that.


----------



## yns44

yns44 said:


> (screenshot: gyazo.com)
> 
> 
> 
> 
> 
> 6900XT Nitro+ SE
> Repasted with Gelid GC Extreme
> 
> In-Game "The Ascent" the junction Temperature just instantly jumps to 97-99°C making the fans spin like stupid...
> 
> Always felt that this card is running way too hot



Does anybody know if I should RMA the card?


----------



## OC-NightHawk

yns44 said:


> anybody knows if I should RMA the card?


If you RMA it for cooling, they could say you messed up by improperly reinstalling the parts. They could also send you money as a refund, and then you have to buy back in at whatever the going price is now.

You're in for a penny, in for a pound. From reading other people's posts about doing this, it sounds tricky. If it only started doing that after you repasted, then you should try remounting the heatsink and fan again. You might not have adequate pressure.


----------



## EastCoast

yns44 said:


> anybody knows if I should RMA the card?


You definitely messed it up. The thermal compound you used is worse than what they put on there. You did not put enough thermal compound on the die. You did not reinstall using the criss-cross method. 

Did you put the bracket on the die first? Or did you try to mount the heatsink back on the card and then put the bracket on?

Try again. Take a picture of the die. I bet you have thermal pump-out because you didn't use enough thermal paste.

Like you would a layer of icing on a cake, use more thermal compound.

Make sure the thermal pads are on correctly.

Reattach and make sure you're making good contact before you start putting screws back in; you should be able to pick the card up by the heatsink and it should not detach from the video card itself.

Put the bracket on first. Put all four screws in but do not tighten them: two turns each. Then, in a criss-cross motion, start counting the number of turns per screw.

Then install the remaining screws.
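The criss-cross tightening sequence described above can be sketched as a small schedule generator (the corner numbering 0 1 / 2 3 is mine, purely for illustration): each pass tightens diagonal opposites a fixed number of turns so the cold plate comes down evenly.

```python
# Criss-cross mounting schedule: corners numbered 0 1 (top) / 2 3 (bottom).
# Each pass visits the diagonal pairs (0,3) then (1,2), a few turns per
# screw, so clamping pressure builds evenly across the die.

def crisscross_passes(turns_per_pass: int = 3, passes: int = 3):
    """Return a list of (pass_number, screw_index, turns) steps."""
    order = [0, 3, 1, 2]  # diagonal pairs: (0,3) then (1,2)
    schedule = []
    for p in range(passes):
        for screw in order:
            schedule.append((p + 1, screw, turns_per_pass))
    return schedule

# First pass: 3 turns on screw 0, then its diagonal 3, then 1, then 2.
print(crisscross_passes()[:4])  # [(1, 0, 3), (1, 3, 3), (1, 1, 3), (1, 2, 3)]
```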


----------



## yns44

EastCoast said:


> You definitely messed it up. The thermal compound you used is worse than what they put on there. - *That's literally the first time I'm reading this - are you sure? Generally the stock paste is of pretty bad quality.*
> You did not put enough thermal compound on the die. - *I did, 100%.*
> You did not reinstall using the criss-cross method. - *That I didn't do, will give it a try.*
> 
> Did you put the bracket on the die first? Or did you try to mount the heatsink back on the card and then put the bracket on?
> - *What do you mean exactly? I removed the X-clamp, then the cooler, then the paste. Then I applied paste to the die, put the sink back onto the die and tightened the X-clamp on the back of the card.*
> 
> Try again. Take a picture of the die. I bet you have thermal pump-out because you didn't use enough thermal paste. - *Will take a picture for you, but that's literally impossible. I am not a noob when it comes to this stuff.*
> 
> Like you would a layer of icing on a cake, use more thermal compound. - *xD*
> 
> Make sure the thermal pads are on correctly.
> 
> Reattach and make sure you're making good contact before you start putting screws back in; you should be able to pick the card up by the heatsink and it should not detach from the video card itself.
> 
> Put the bracket on first. Put all four screws in but do not tighten them: two turns each. Then, in a criss-cross motion, start counting the number of turns per screw. - *I don't see why this should be the magic trick? But I'll give it a try before RMAing this card.*
> 
> Then install the remaining screws.


PS:
*Temps before repasting were much, much worse, which is why I think there's got to be something wrong with the card.*


----------



## EastCoast

yns44 said:


> PS:
> *Temps before repasting were much much worse, which is why I think there gotta be something wrong with the card.*


1. On the Sapphire Nitro+ cards they use pretty good paste. I've used several Nitro+ variants and can confirm their paste is only slightly worse than Thermal Grizzly Kryonaut.
Kryonaut's thermal conductivity: 12.5 W/mK. I know of no other thermal paste that beats that. 
Gelid GC Extreme: 8.5 W/mK, which I know is much weaker. 
At a guess, whatever they are using is 9+ W/mK, ballparking it based on the difference in temps I've seen. So I'm pretty sure the thermal compound they used is better. 

2. You said you put enough, but I'm talking about leaving 2 layers. The 1st layer is your base layer that covers the entire die. Then you put another dollop on top of that and smooth it out on top. 

3. Yes, I am talking about the X bracket that actually tightens the GPU die to the heatsink. Typically it's the last thing you unscrew in disassembly and the 1st thing you screw in for assembly. But you have to use a criss-cross pattern. You install all 4 screws by giving each 2-3 turns. Either 2 or 3, but the screw's base must not touch the X bracket (aka X-clamp, as you call it). Once all 4 screws are in, you start turning each screw while counting the turns. At this point you can do up to 3 for each screw, then cross over to the opposite corner screw and do the same. This is very important. Trust me, that heatsink is finicky.

4. Again, put the X bracket back on 1st. Once completed, put the other screws on. I've found this method works best for the Nitro+. 

Personally, I would only use TG Kryonaut for thermal paste, as I know it makes all the difference in keeping temps low. But since GC Extreme is all you have, use what you've got. However, I cannot guarantee what temps you'll get as a result of the remount attempt. You inadvertently bought a thermal paste that is, IMO, thermally worse than stock. I do believe the stock thermal paste may have failed prematurely, which caused the higher-than-normal temps, so the stock paste needed to be replaced anyway. No fault of yours.


----------



## CS9K

EastCoast said:


> 1. On the Sapphire Nitro+ cards they use pretty good paste. I've used several Nitro+ variants and can confirm that their paste is slightly worst then using Thermal Grizzly Kryonaut.
> Thermal Conductivity: 12.5 W/mk. I know of no other thermal paste that beats that.
> Gelid GC Extreme: 8.5 W/mk. Which I know is much weaker.
> At a guess whatever they are using is 9+ W/mk ball parking it based on the difference in temps I've seen. So, I'm pretty sure that the thermal compound they used is better.
> 
> 2. You said you put enough. But I'm talking about leaving 2 layers. 1st layer is your base layer that covers the entire die. Then you put another dollop on top of that and smooth it out on top.
> 
> 3. Yes, I am talking about the X bracket that actually tightens the gpu die to the heatsink. Typically it's the last thing you unscrew in disassembly and the 1st thing you screw for assembly. But you have to use crisscross pattern. You install all 4 screws by giving them each 2-3 turns each. Either 2 or 3 but the screw's base must not touch the X bracket (aka X Clamp as you call it). Once all 3 screws are on then you start the process of turning the screw counting each turn. At this point you can do up to 3 for each screw then cross over to the other corner screw and do the same. This is very important. Trust me that heatsink is finicky.
> 
> 4. Again, put the X Bracket back on 1st. Once completed put the other screws on. I've found that this method works best for the Nitro+.
> 
> Personally, I would only use TGN for thermal paste. As I know it makes all the difference in keeping temps low. But since GCGE is all you have use what you got. However, I cannot guarantee what temps you are going to get as a result of the remount attempt. You inadvertently bought thermal pastes that is, IMO, thermal conductively worst then stock. I do believe that the stock thermal paste may have failed prematurely which caused the higher then normal temps. So the stock paste needed to be replaced anyway. No fault of yours.


I was the one suggesting that users use non-oil-based pastes like GC-Extreme, Hydronaut, EK Ectotherm, et al. 

1. Don't put too much faith in raw W/mK numbers. Performance is VERY close between high-end pastes, so close that performance doesn't scale with raw W/mK numbers _at all_.

Gelid GC-Extreme _will not spread nor flow like 'normal', thinner pastes!_ In my experience, GC-Extreme is VERY sensitive to application. If you're able, pull the card apart and try again. In regards to point #2:

2. I would not put two layers. Put enough to leave a thick enough layer to cover the die without being see-through, but spread it thin and even with the included spreader, until it is _just_ thick enough to not see the die through, no thicker. GC-Extreme will dry out over time and I am uncertain how it would behave if two layers were applied.

If you want the _full_ walkthrough with application, many pages back I made a post on the matter that you can find through search. You can't apply GC-Extreme like one would a thinner, oil-based paste like Kryonaut. I recommend GC-Extreme because it won't pump out near as quickly as oil-based-emulsion pastes like Kryonaut do; thus, in my experience, application is the ticket between instant-hotspot-spikes and a normal 20-25C Delta-T.
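To put a rough number on the point that raw W/mK barely matters next to application quality: the conductive resistance of the paste layer is R = t/(kA). Assuming a ~50 µm bond line over a ~520 mm² Navi 21 die (both figures are my assumptions for illustration, not measurements), the 8.5 vs 12.5 W/mK gap is worth barely a degree at 300 W, while a doubled bond line doubles the whole drop:

```python
# Temperature drop across a paste layer: delta_T = P * t / (k * A).
# Bond-line thickness (50 um) and die area (520 mm^2) are illustrative
# assumptions; the point is that k matters far less than application.

def paste_delta_t(power_w: float, k_w_per_mk: float,
                  thickness_m: float = 50e-6, area_m2: float = 520e-6) -> float:
    """Conductive temperature drop (C) across the paste layer."""
    r = thickness_m / (k_w_per_mk * area_m2)  # thermal resistance, K/W
    return power_w * r

print(round(paste_delta_t(300, 8.5), 2))   # ~3.39 C across the paste
print(round(paste_delta_t(300, 12.5), 2))  # ~2.31 C -- about 1 C better
```

An uneven or too-thick application changes `thickness_m` directly, which swamps the ~1°C the "better" paste buys on the spec sheet.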


----------



## yns44

CS9K said:


> I was the one suggesting that users use non-oil-based pastes like GC-Extreme, Hydronaut, EK Ectotherm, et al.
> 
> 1. Don't put too much faith in raw W/mK numbers. Performance is VERY close between high-end pastes, so close that performance doesn't scale with raw W/mK numbers _at all_.
> 
> Gelid GC-Extreme _will not spread nor flow like 'normal', thinner pastes!_ In my experience, GC-Extreme is VERY sensitive to application. If you're able, pull the card apart and try again. In regards to point #2:
> 
> 2. I would not put two layers. Put enough to leave a thick enough layer to cover the die without being see-through, but spread it thin and even with the included spreader, until it is _just_ thick enough to not see the die through, no thicker. GC-Extreme will dry out over time and I am uncertain how it would behave if two layers were applied.
> 
> If you want the _full_ walkthrough with application, many pages back I made a post on the matter that you can find through search. You can't apply GC-Extreme like one would a thinner, oil-based paste like Kryonaut. I recommend GC-Extreme because it won't pump out near as quickly as oil-based-emulsion pastes like Kryonaut do; thus, in my experience, application is the ticket between instant-hotspot-spikes and a normal 20-25C Delta-T.


Thanks. Will take it apart in a bit and try again.
I agree that applying GC extreme is not the smoothest thing to do.


----------



## CS9K

yns44 said:


> Thanks. Will take it apart in a bit and try again.
> I agree that applying GC extreme is not the smoothest thing to do.


Slow and steady. I use the convex side of the spreader myself, as I feel it is easier to manage excess with. Start at the center and spread outward, rotating the GPU as you spread. Once it is spread over the entire die, focus on making the layer as even as feasible, removing excess past the edge of the die with the concave side of the spreader. You shouldn't be able to see the die through the paste, but the layer shouldn't be any thicker than that. If the overall layer is going to be uneven at all, make sure there is more paste in the center of the die than at the edges. 

I wanted to make a video of what I am describing, but I forgot to take footage when I last pasted my RX 6900 XT. The next GPU that I paste, I will make a walkthrough video and add commentary.


----------



## yns44

CS9K said:


> Slow and steady. I use the convex side of the spreader, myself, as I feel that it is easier to manage excess with. Start at the center, and spread outward, rotate the GPU around as you spread. Once it is spread over the entire die, then focus on making the spread as evenly as feasible, removing excess with the concave side of the spreader as you spread excess off of the edge of the die. You shouldn't be able to see the die through the paste, but the layer shouldn't be any thicker than that. If the overall layer is going to be un-even at all, make sure there is more paste in the center of the die and not at the edges.
> 
> I wanted to make a video of what I am describing, but I forgot to take footage when I last pasted my RX 6900 XT. The next GPU that I paste, I will make a walkthrough video and add commentary.



Just checked my temps again playing "The Ascent":








(Gyazo screenshot: gyazo.com)





My delta is totally fine actually.

Funny thing is this game is constantly pushing 100% utilization on my GPU. Not even Warzone achieves this amount of power draw.

Regardless, 260 W with Gelid GC Extreme shouldn't result in ~100°C hotspot temps IMO,
especially since I lowered max frequency, voltage, and power draw.
With default settings the hotspot temp is instantly at 110°C at least...
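For anyone wanting to sanity-check their own delta-T, here's a quick sketch that parses a sensor log and computes hotspot minus edge. The column names are hypothetical; match them to whatever your logging tool (HWiNFO or similar) actually exports:

```python
# Sketch: compute the edge-to-hotspot delta from a CSV sensor log.
# Column names below are made up; adjust to your logger's headers.
import csv
from io import StringIO

def hotspot_deltas(csv_text, edge_col="GPU Temperature [°C]",
                   hot_col="GPU Hot Spot [°C]"):
    """Return a list of (edge, hotspot, delta) tuples from a CSV log."""
    rows = csv.DictReader(StringIO(csv_text))
    return [(float(r[edge_col]), float(r[hot_col]),
             float(r[hot_col]) - float(r[edge_col])) for r in rows]

sample = """GPU Temperature [°C],GPU Hot Spot [°C]
78,101
80,104
"""

for edge, hot, delta in hotspot_deltas(sample):
    print(f"edge={edge:.0f}C hotspot={hot:.0f}C delta={delta:.0f}C")
```

A steady delta in the 15-25C range is what most people in this thread report as normal; a delta that keeps climbing under load is the mount/paste red flag.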


----------



## EastCoast

CS9K said:


> I was the one suggesting that users use non-oil-based pastes like GC-Extreme, Hydronaut, EK Ectotherm, et al.
> 
> 1. Don't put too much faith in raw W/mK numbers. Performance is VERY close between high-end pastes, so close that performance doesn't scale with raw W/mK numbers _at all_.
> 
> Gelid GC-Extreme _will not spread nor flow like 'normal', thinner pastes!_ In my experience, GC-Extreme is VERY sensitive to application. If you're able, pull the card apart and try again. In regards to point #2:
> 
> 2. I would not put two layers. Put enough to leave a thick enough layer to cover the die without being see-through, but spread it thin and even with the included spreader, until it is _just_ thick enough to not see the die through, no thicker. GC-Extreme will dry out over time and I am uncertain how it would behave if two layers were applied.
> 
> If you want the _full_ walkthrough with application, many pages back I made a post on the matter that you can find through search. You can't apply GC-Extreme like one would a thinner, oil-based paste like Kryonaut. I recommend GC-Extreme because it won't pump out near as quickly as oil-based-emulsion pastes like Kryonaut do; thus, in my experience, application is the ticket between instant-hotspot-spikes and a normal 20-25C Delta-T.


1. I've tried a few thermal pastes myself, and I've found differences in temps between them. In the case of GCE, I'm sorry, I've found TGK (Thermal Grizzly Kryonaut) to be better. I'm only providing my experience.
2. Have you noticed how much thermal paste is on the die from some AIBs? Not to say all AIBs do this, but it's usually a very thick layer on the die, and for good reason. Some AIBs use thinner pastes, while others use denser variants.

Now it doesn't say what thermal conductivity they are; it just reinforces the notion that using a thicker layer isn't harmful. For me, using a thicker layer prevents thermal pump-out (TPO) for well over a year's use, versus a few weeks to a few months using thinner layers of TP. Again, this is my personal experience, with the caveat that I OC every time I use the card(s), so I don't run stock frequencies.

I've had no problem putting on a 2nd layer of GCE. It's a bit more involved, but not impossible. If he's not able to spread the 2nd layer he could simply leave it as a dollop as mentioned earlier.

I've found TPO with both thin and dense thermal pastes, be they oil-based or otherwise. Thin or dense, non-oil-based TP doesn't prevent TPO.

Be that as it may, I've provided my feedback on the situation.


----------



## CS9K

EastCoast said:


> 1. I've tried a few thermal pastes myself. And I've found differences in temps between thermal paste. In the case of GCE, I'm sorry, I've found TGK to be better. I'm only providing my experience.
> 2. Have you noticed how much thermal paste is on the die from some AIBs? It's usually a very thick layer. And for good reason. Some AIBs use thinner paste. While other AIBs use a denser variants.
> Now it doesn't say what thermal conductivity they are. It just reinforces the notion that using a thicker layer isn't harmful.
> 
> I've had no problem putting on a 2nd layer of GCE. It's a bit more involved, but not impossible. If he's not able to spread the 2nd layer he could simply leave it as a dollop as mentioned earlier.
> 
> I've found TPO on both thin and dense thermal paste. Be it oil based or otherwise. Denser, non oil base TP doesn't prevent TPO.
> 
> Be it as it may, I've provided my feedback on the situation.


I don't disagree with you on the numbers, they are what they are.

I think I may have miscommunicated when referring to thickness, as I used it to describe both the viscosity of the paste, as well as the thickness of the layer of the paste to be applied.

On viscosity: non-oil-base pastes pump out more-slowly than the less-viscous and/or oil base pastes, when used on bare-die applications like GPU's. (With the metal heat spreader on modern CPU's, the opposite is true: Oil base pastes perform better on average and don't generally pump out any differently than a more-viscous paste would).

On layer thickness: Depending on the thickness of the thermal pads used in @yns44 's GPU, they may need to apply a thicker layer on top of the GPU die. It does indeed sound like it is a contact issue based on what we know.

I did forget to ask: @yns44 is your GPU in this orientation? Reference RDNA2 GPU's can not be in this orientation due to the design of the heat pipe. I did not think any AMD AIB GPU had the same issue, but I do not know for sure. If your GPU is in this orientation, change it, and re-test your temperatures.


----------



## yns44

CS9K said:


> I don't disagree with you on the numbers, they are what they are.
> 
> I think I may have miscommunicated when referring to thickness, as I used it to describe both the viscosity of the paste, as well as the thickness of the layer of the paste to be applied.
> 
> On viscosity: non-oil-base pastes pump out more-slowly than the less-viscous and/or oil base pastes, when used on bare-die applications like GPU's. (With the metal heat spreader on modern CPU's, the opposite is true: Oil base pastes perform better on average and don't generally pump out any differently than a more-viscous paste would).
> 
> On layer thickness: Depending on the thickness of the thermal pads used in @yns44 's GPU, they may need to apply a thicker layer on top of the GPU die. It does indeed sound like it is a contact issue based on what we know.
> 
> I did forget to ask: @yns44 is your GPU in this orientation? Reference RDNA2 GPU's can not be in this orientation due to the design of the heat pipe. I did not think any AMD AIB GPU had the same issue, but I do not know for sure. If your GPU is in this orientation, change it, and re-test your temperatures.
> View attachment 2526041


It's in the classic ATX orientation (fans pointing down toward the ground, backplate facing the sky).

1. Could you give me your opinion to my post #3541?
2. Did a Timespy: I scored 20 193 in Time Spy
Temps peaking at 101°C Hotspot but generally looks like this during the benchmark:









Keep in mind: my GPU is at -10% undervolt / -75 mV / -75 MHz core.

I just can't believe those temps are real.

Will do a Time Spy run now with the default Sapphire settings (inb4 house going to burn)


///



99belle99 said:


> Got this score on a reference card with stock cooler.
> 
> I scored 19 959 in Time Spy


So the guy in post #3522 gets, in Time Spy:

- an average 220 MHz extra on core
- 11°C lower temps

during the Time Spy benchmark, on a reference card with the stock cooler?
Ya okay brother.

What kind of disaster card is parasiting inside my case?


----------



## yns44

I scored 20 011 in Time Spy


Intel Core i9-11900K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com
2nd Time Spy run on default settings, as expected with lower results + peak temperature at

(hold it)

115°C (one hundred FIFTEEN, BTW).

What a ****show of a card.


----------



## CS9K

Oof, reference RX 6900 XT's on the stock cooler barely run _that_ hot. Hmm, if a re-paste doesn't remedy it, I'm not sure what to say other than it may be worth sending the card in for RMA if that is feasible in Germany (not up to speed on "warranty void if sticker removed" legality there). 

Before you pull the cooler off, grab a torch and peek under the heatsink assembly, and see if it is uneven at any angle around the core. Ensure a tiny bit of heatsink compound has oozed out from under the heatsink assembly _all the way around the GPU die._

If you do pull the cooler off of the GPU again, be _absolutely_ certain that the thermal pads are in-place, and nothing is preventing the GPU die from contacting the cooler.


----------



## yns44

CS9K said:


> Oof, reference RX 6900 XT's on the stock cooler barely run _that_ hot. Hmm, if a re-paste doesn't remedy it, I'm not sure what to say other than it may be worth sending the card in for RMA if that is feasible in Germany (not up to speed on "warranty void if sticker removed" legality there).
> 
> Before you pull the cooler off, grab a torch and peek under the heatsink assembly, and see if it is uneven at any angle around the core. Ensure a tiny bit of heatsink compound has oozed out from under the heatsink assembly _all the way around the GPU die._
> 
> If you do pull the cooler off of the GPU again, be _absolutely_ certain that the thermal pads are in-place, and nothing is preventing the GPU die from contacting the cooler.


-8°C (hotspot) on my undervolt settings after the repaste + correctly applying the X-clamp (crossing pattern).

It's actually quite unbelievable that the card out of the box is so terribly bad at temps that you have to re-paste it like you're some sort of surgeon.

I really have to say, in this regard as well as the driver UI and settings, I do miss Nvidia...


----------



## EastCoast

yns44 said:


> -8°C (hotspot) on my undervolt settings after the repaste + correctly applying the X-clamp (crossing pattern).
> 
> It's actually quite unbelievable that the card out of the box is so terribly bad at temps that you have to re-paste it like you're some sort of surgeon.
> 
> I really have to say, in this regard as well as the driver UI and settings, I do miss Nvidia...


Are you getting the same Current and Junction Temps at idle/desktop?
Or
Are you seeing a 1-10C higher junction temps at idle/desktop?


----------



## yns44

EastCoast said:


> Are you getting the same Current and Junction Temps at idle/desktop?
> Or
> Are you seeing a 1-10C higher junction temps at idle/desktop?


Exactly and constantly a 4°C difference (junction higher).


----------



## EastCoast

yns44 said:


> exactly and constantly 4°C difference (higher junction)


What I've found is that you get the best temps (in games) when both current/junction temps are the same at idle. You can get a better pressure mount if you do the washer mod.
M3 6mmx3mmx1mm should work for you. If you decide to try it at some future point. In the meantime enjoy!!
😊


----------



## yns44

EastCoast said:


> What I've found is that you get the best temps (in games) when both current/junction temps are the same at idle. You can get a better pressure mount if you do the washer mod.
> M3 6mmx3mmx1mm should work for you. If you decide to try it at some future point. In the meantime enjoy!!
> 😊


I was thinking of putting a waterblock on it - what you think?


----------



## lestatdk

yns44 said:


> I was thinking of putting a waterblock on it - what you think?


Might be worth a try.
My card was horrible with the stock cooler. It was suffering thermal shutdowns with the hotspot hitting 118C when I overclocked it.
I have a modified Alphacool block on it now, and my hotspot under the same load is now 62C, delta is 14C, and my idle temps are close to room temperature.
It did not clock higher after the block, but my temps are now almost 60C lower!

I used Thermal Grizzly Kryonaut paste spread thin across the die.


----------



## lDevilDriverl

yns44 said:


> anybody knows if I should RMA the card?


You should check how well the thermal compound was applied. It looks like you didn't remount the air cooler perfectly. Dismount the cooler, take a few photos of the GPU chip (die) and of the cold plate where the thermal compound sits, repaste once again, and check how well the paste spread on both the die and the cooler. Also, I would recommend using KPX paste.


----------



## lDevilDriverl

yns44 said:


> I was thinking of putting a waterblock on it - what you think?


An EK waterblock will be the best cooling solution, paired with an Alphacool 755 V3 pump and SR2 radiators.


----------



## Nicklas0912

Is there a way to change the voltage? And is it worth it? Mine is stock at 1200 mV.


----------



## lDevilDriverl

yns44 said:


> Just checked my temps again playing "The Ascent":
> 
> 
> 
> 
> 
> 
> 
> 
> (Gyazo screenshot: gyazo.com)
> 
> 
> 
> 
> 
> My delta is totally fine actually.
> 
> Funny thing is this game is constantly pushing 100% utilization on my GPU. Not even warzone is achieving this amount of power draw.
> 
> Regardless, 260 W with Gelid GC Extreme shouldn't result in ~100°C hotspot temps IMO,
> especially since I lowered max frequency, voltage, and power draw.
> With default settings the hotspot temp is instantly at 110°C at least...


It could just be bad contact between the die and the cooling solution, or a problem with how the thermal compound was applied.


----------



## lDevilDriverl

Nicklas0912 said:


> Is there a way to change the voltage? And is it worth it? Mine is stock at 1200 mV.


The best way to find a good frequency/voltage ratio is to start from 1.05 V and increase the frequency after each run of 3DMark Time Spy (4K) or Fire Strike Extreme/Ultra. When 3DMark crashes, you can add +25 mV to vcore and push further.
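The stepping procedure described above can be sketched as a toy loop. This is illustrative only: `is_stable` is a made-up stand-in for an actual benchmark run, and its numbers model nothing real; it only shows the control flow (raise clocks until a crash, then bump vcore and retry):

```python
# Illustrative model of the tuning procedure (NOT a real tuner):
# raise frequency until the benchmark "crashes", then add +25 mV
# to vcore and keep going until voltage headroom runs out.
def tune(start_mv=1050, max_mv=1175, start_mhz=2500, step_mhz=25):
    def is_stable(mhz, mv):
        # Toy stability model (pure assumption): each +1 mV buys ~2 MHz.
        return mhz <= 2500 + (mv - 1050) * 2

    mv, mhz = start_mv, start_mhz
    while True:
        if is_stable(mhz + step_mhz, mv):
            mhz += step_mhz        # run passed: try a higher clock
        elif mv + 25 <= max_mv:
            mv += 25               # run "crashed": add +25 mV, retry
        else:
            return mhz, mv         # out of voltage headroom, stop

best_mhz, best_mv = tune()
print(f"settled at {best_mhz} MHz @ {best_mv} mV")
```

On real hardware you would of course be moving the Wattman/MPT sliders by hand between runs; the point is just that it's a simple hill-climb with voltage as the fallback knob.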


----------



## lestatdk

Nicklas0912 said:


> Is there a way to change the voltage? And is it worth it? Mine is stock at 1200 mV.


If you want to go higher you'd need something like an EVC2 hw mod


----------



## cfranko

@yns44 I had the same issue with the air cooler of the ASRock Phantom Gaming 6900 XT. At 280 watts the edge temperature reached 80c and the hotspot was around 105c (280 watts is the stock power limit, btw). I repasted the card with MX-4 but nothing changed. The card would thermal shutdown in Time Spy because of the 118c hotspot.


http://imgur.com/a/Wwlw5oX

This is what I saw after I opened the card, after it was repasted with MX-4. Eventually I couldn't work out what the problem was, so I bought a waterblock and built a custom loop, and now my temps are perfect. Good luck with fixing that air cooler; maybe the washer mod or liquid metal may work, but I can't think of any better solution.


----------



## EastCoast

yns44 said:


> I was thinking of putting a waterblock on it - what you think?


Yes, do it if you can.


----------



## jonRock1992

I'd just like to report back and say that 21.9.2 is a pretty good driver. I also got +5MHz on the core clock over 21.8.1. I was finally, for the first time, able to do a 2700MHz min / 2800MHz max core clock in Time Spy. My OCD is now satisfied lol. Still can't get a 25K GPU score, but I'm close to it. Temps also seem a little lower and games a little less stuttery. I'll be sticking with this driver.


----------



## LtMatt

jonRock1992 said:


> I'd just like to report back and say that 21.9.2 is a pretty good driver. I also got +5MHz on the core clock over 21.8.1. I was finally, for the first time, able to do a 2700MHz min / 2800MHz max core clock in Time Spy. My OCD is now satisfied lol. Still can't get a 25K GPU score, but I'm close to it. Temps also seem a little lower and games a little less stuttery. I'll be sticking with this driver.


God damnit Jon you had me going there for a minute when you said you had a clock boost, only to kick me in the nads in the second half of the post by saying you still have not breached 25k.


----------



## jfrob75

I have the same experience with 21.9.2 driver. If only I could get my max graphics score and max cpu score in the same run, then my overall score would be over 24K. As it is my best 2 scores are as follows:
This is with driver version 21.9.2
This is with driver version 21.7.2

As a side note, have you guys looked at the Time Spy hall of fame for a single GPU? What I find interesting is that all of the 3090's in the top 10 are using something other than air or water for cooling. If you discount those, then the 6900XT is king for TS. A remarkable achievement IMHO.


----------



## lestatdk

I'm not impressed with 21.9.2 yet. I might try and roll back to 21.7.2 yet again.
That was the best for me so far wrt TS and performance in general.


----------



## jonRock1992

LtMatt said:


> God damnit Jon you had me going there for a minute when you said you had a clock boost, only to kick me in the nads in the second half of the post by saying you still have not breached 25k.


I can't for the life of me figure out how I got that 24957 GPU score in this run: I scored 21 808 in Time Spy
I should be able to get a higher score now that I can push a higher core clock, but I'm like 100 points less now. And it's not the driver's fault because I've been getting 100 points less with all drivers. I believe it has to do with the latest update for my dark hero motherboard, or my windows installation. Oh well. Maybe when I do a fresh Windows 11 install I'll get that 25k 🤣


----------



## 99belle99

yns44 said:


> So the guy in post #3522 gets, in Time Spy:
> 
> - an average 220 MHz extra on core
> - 11°C lower temps
> 
> during the Time Spy benchmark, on a reference card with the stock cooler?
> Ya okay brother.
> 
> What kind of disaster card is parasiting inside my case?


I did get that score with a reference card and stock cooler but I used MPT to increase the clocks. I ran that run in the early hours of the morning and it was chilly enough here in Ireland at the time. I will be able to get a higher score when I push it further but I'm waiting till the weather gets cooler.

My average clocks were only 2500MHz and when I got the card last spring I was able to get a bit over 2600MHz average clocks but score was lower since the drivers weren't as good as now.


----------



## Wilsonj471

Looking for some suggestions from all you experienced tuners. I've been experimenting a little with MPT on my reference 6900XT: I have the power limit set at 325 watts and have left all the other settings stock (TDC limits 320A/55A). In Wattman I have the slider at +4 in power tuning. My concern is seeing power consumption of 425 watts in max ASIC power in HWiNFO during benchmarks. They aren't typically long pulls that high while I'm watching, but some last a few seconds north of 400 watts. From reading through a lot of these posts, that doesn't appear to be an issue power-delivery-wise if you can cool the card, which my custom loop appears to be doing well: max temp 64c (during 400-watt pulls) with averages in the mid 40's during Time Spy runs, as you can see in my attachment.

Can I give this reference card more power, or am I already playing with fire with spikes up around 400? During Time Spy runs I typically see 300-375 watts as the average. My Time Spy score is currently just over 22k, and I'm looking for advice to push it up without making a mistake out of ignorance. Also, besides power, if there are other settings I should be tweaking I'd be open to suggestions. I keep looking at the Frequency tab in MPT: the top left line says min 500MHz / max 2660MHz GFX, and that appears to be the exact spot my card fails during benchmarks, right at a 2660MHz spike the test shuts down whenever I have the max slider over 2650MHz in Wattman. Full specs below for reference.

Specs:
MSI X570 MEG Unify board
R9 5900x PBO Curve all -5 peak pull 180watts- water blocked
AMD RX6900xt reference card - Water Blocked - Settings below in image.
Be Quiet! Dark Power Pro 11 850watt- has 16 gauge pcie 8pin power leads and I have two separate leads feeding the card.
3-360mm X45mm radiators being pushed at @193 L on the digital display by a D5 pump


----------



## cfranko

21.9.2 driver used all the power limit I gave the GPU in Timespy. I saw it pull 500 watts for the first time with this driver. Is this normal?


----------



## Wilsonj471

Wilsonj471 said:


> Looking for some suggestions from all your experienced Tuners. I've been experimenting a little with MPT on my reference 6900xt and have the power limit set at 325watts and have left all the other settings stock TDC limits 320a/55a. In Wattman I have slider to +4 in power tuning. My concern is during benchmarks seeing power consumption of 425watts in asic power max in HWinfo. They aren't typically long pulls that high while I am watching but some last a few seconds north of 400watts. By reading threw alot of these posts that doesn't appear to be an issue power delivery wise if you can cool the card. Which my custom loops appears to be doing well. Max Temp 64c ( during 400watt pulls) with averages in the mid 40's during Time spy runs as you can see in my attachment.
> 
> Can I give this Reference card more power or am I am playing with fire already with spikes up around 400. During Time Spy runs I see typically 300-375watts as average. Currently timespy score just over 22k and am looking for advice to push it up without making mistake out of ignorance. Also instead of power if there are other settings I should be tweaking would be open to suggestions. I keep looking at Frequency tab in MPT and top left line says min 500mhz max 2660mhz GFX and that appears to be the exact spot my card fails during benchmark right at 2660 GHZ spike shuts down test when I have the max slider over 2650mhz in wattman. Full specs below for reference
> 
> Specs:
> MSI X570 MEG Unify board
> R9 5900x PBO Curve all -5 peak pull 180watts- water blocked
> AMD RX6900xt reference card - Water Blocked - Settings below in image.
> Be Quiet! Dark Power Pro 11 850watt- has 16 gauge pcie 8pin power leads and I have two separate leads feeding the card.
> 3-360mm X45mm radiators being pushed at @193 L on the digital display by a D5 pump
> 
> View attachment 2526130
> View attachment 2526131










Sorry, wrong screenshot originally.


----------



## ptt1982

cfranko said:


> 21.9.2 driver used all the power limit I gave the GPU in Timespy. I saw it pull 500 watts for the first time with this driver. Is this normal?


This is a very good question! 

My Red Devil has the same strange power behaviour. After a Doom Eternal session (4K60 vsynced, ray tracing on, ultra nightmare details), HWiNFO64 reported a max 410W peak with a TDP limit of 391W via MPT. With the same MPT settings, Nier Replicant's max peak has already gone up to 440W. Doom Eternal should for sure require more power; its temps are even 10C higher than what Nier Replicant reaches, which suggests this could be a reporting error.

At the same time I'm speculating that this is not a reporting error, but actually new reporting behavior that came with the last two drivers. (I'm not sure if the driver controls the total wattage output that the software would then read.)

Perhaps the new drivers can simply detect more accurately the 2ms, 5ms and 10ms spikes which were already detected by youtubers who took measurements from the card or wall directly. If so, the drivers might provide a more detailed, essentially a more real reflection of how high the cards can spike up to.

I'm convinced there's someone smarter on this forum who can comment on the above speculation; I might be completely off the mark!


----------



## CS9K

cc @ptt1982 @Wilsonj471 @cfranko

Re: 21.9.1 ASIC Power usage

Yep, I'm seeing it too on 21.9.1









Y'all scared the crap out of me for a second there. Thankfully, I _feel_ like what we're seeing is just a different measurement being reported as the ASIC power, versus what used to be reported. In my experience, GPU ASIC Power, TGP Power, and GPU PPT were all three effectively the same peak and average measurements in past driver editions. However, GPU PPT, GPU Core Power, and other measurements all look nearly identical to prior measurements. Likewise, a Time Spy run at identical OC numbers came in roughly the same as before.

I scored 21 033 in Time Spy <- 21.9.1; 2715MHz, 2150MHz+Fast, 420W Limit
I scored 20 955 in Time Spy <- 21.9.2; 2715MHz, 2150MHz+Fast, 410W Limit

(lost a tiny bit of score due to 10W lower power limit)

TL;DR: I don't think anything is wrong...? *But!* do recall that RDNA2 GPU's, even with stock settings, hammer the power supply with much higher peaks than even the 3090 does. Peaks that have only been measured by a 'scope, as they're too short in duration for a multimeter or onboard sensor to reliably read... at least that's what we thought. _A_ theory is that ASIC Power may be showing peak instantaneous current over the prior sample period, or peak instantaneous current overall, that normally only a scope would see. I dunno if that's true, but all the numbers I see (other than GPU ASIC Power), and the game performance numbers I see, are on par with prior driver packages.
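That peak-vs-average theory is easy to illustrate: the same underlying samples produce very different "reported" numbers depending on whether each sensor window is averaged or peak-held. A toy sketch with made-up wattage samples (this is not how the driver actually works, just the arithmetic of the two reporting modes):

```python
# Sketch: average vs peak-hold reporting over the same sample stream.
# If a driver switches from window averages to window peaks, the
# "reported" power jumps without the GPU drawing any more energy.
def report(samples, window, mode):
    """Aggregate samples into per-window values, by mean or by peak."""
    chunks = [samples[i:i + window] for i in range(0, len(samples), window)]
    agg = max if mode == "peak" else (lambda c: sum(c) / len(c))
    return [agg(c) for c in chunks]

# Hypothetical 1 ms samples: steady ~300 W with brief transient spikes
samples = [300, 305, 298, 460, 302, 299, 310, 495, 301, 300]

print(report(samples, 5, "mean"))  # averaged reporting smooths the spikes
print(report(samples, 5, "peak"))  # peak-hold reporting surfaces them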


----------



## ptt1982

yns44 said:


> I was thinking of putting a waterblock on it - what you think?


Do it! I fixed my Red Devil by putting a waterblock on it. 

And I agree, one should not need surgical precision when repasting and repadding. After doing around 25 remounts and spending 100 USD on thermal pads and paste, I took the nuclear option of doing a full Corsair custom loop (I always wanted to challenge myself with the project). Got the clocks and power limits higher, less heat, and most of all, less tuning. Cost me a lot, but the total cost is still under an air-cooled 3090, and I got the CPU under water now as well.

While the temps are not nearly as good as most people here have with their custom loops, my junction never goes past 88C in the absolute worst-case scenario, and most of the time stays under 72C when playing games vsynced 4K60 max settings. I've got the TDP limit at 391W, but the card pushes (even with the previous drivers) around 20W over that. So at around 410W the edge stays at around 45C-55C. For example, Doom Eternal with everything maxed out including RT at 4K vsynced 60fps spikes to 72C junction, and the edge peaks at 52C. Nier Replicant with the same settings: edge 47C and junction 62C.

Strangely, I have more peace of mind with a custom loop than I ever had with the air cooled PC. That might change if there's a leak. If you buy a full custom setup, buy LeakShield, that will absolutely guarantee the peace of mind, and that also will be in my next custom loop setup.


----------



## CS9K

CS9K said:


> TL;DR: I don't think anything is wrong...? *But!* do recall that RDNA2 GPU's, even with stock settings, hammer the power supply for much higher peaks than even the 3090 does. Peaks that have only been measured by a 'scope, as they're too short in duration for a multimeter or onboard sensor to reliably read...


Just found the Igor's Lab article I was thinking of when I wrote this. GPU ASIC Power numbers seem to scale appropriately with the averages between his and my numbers. Neat!


















Is AMD getting the crown back? Radeon RX 6900 XT 16 GB review with benchmarks and a deeper technical analysis | Page 15 | igor'sLAB

AMD still has one left, and that's what we have today. They want to take the competitive Radeon RX 6800 XT 16 GB one step further and so Dr. Lisa Su pulled a last ace out of her sleeve some time ago…

www.igorslab.de


----------



## ptt1982

CS9K said:


> Just found the Igor's Lab article I was thinking of when I wrote this. GPU ASIC Power numbers seem to scale appropriately with the averages between his and my numbers. Neat!
> 
> View attachment 2526172
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Is AMD getting the crown back? Radeon RX 6900 XT 16 GB review with benchmarks and a deeper technical analysis | Page 15 | igor'sLAB
> 
> 
> AMD still has one left, and that's what we have today. They want to take the competitive Radeon RX 6800 XT 16 GB one step further and so Dr. Lisa Su pulled a last ace out of her sleeve some time ago…
> 
> 
> 
> 
> www.igorslab.de


Thanks for this!

Based on this evidence, would you conclude that the drivers now have the ability to measure the <1ms peaks? And if so, would 1ms peaks be taxing to the PSU, triggering OCP etc.?


----------



## CS9K

ptt1982 said:


> Thanks for this!
> 
> Based on this evidence, would you conclude that the drivers now have the ability to measure the <1ms peaks? And if so, would 1ms peaks be taxing to the PSU, triggering OCP etc.?


I can't objectively conclude anything, personally, for I am but a humble _turbonerd_ that knows enough to get myself into trouble 

What I can say is that, yes, the hammering that the RX 6900 XT does puts more stress on, well.. everything, which is why people recommend a higher wattage power supply for high end RX 6900 XT systems (compared to the 3090)... and why I made a set of 16ga PCIe cables for my RX 6900 XT... if for nothing else than to feel better about my GPU hammering the crap out of my power supply.

I don't think GPU behavior itself has changed, I think we can just _see_ what was only visible to an oscilloscope before now.


----------



## cfranko

@CS9K @ptt1982 Tbh I downgraded to 21.8.1. I was not comfortable seeing 500 watts power consumption and an 80C hotspot with a waterblock, even if it was apparently only a change in how power is reported. In my case it was not only about the power consumption and how it's reported: the temps were actually higher because of the power, and my score was lower. I think this driver is worse than 21.8.1. It's interesting that my Corsair RM750 did not shut down when the GPU was pulling 500 and the CPU was pulling 120.


----------



## Neoki

So I could not get my gigabyte extreme waterforce to go above 2740mhz core/2130-2150 mem no matter what I did. I repasted twice, replaced pads once but deltas would just not improve beyond 35c from edge to hotspot. I am running 2x480, 1x240, 1x120 rads in my custom loop. So heat/liquid temps are not the issue and flow rate is healthy at 260 L/h.

Another thing that I really didn't like was the forced use of RGB Fusion to change the LEDs. This software is like malware once you install it. I ended up having to reinstall windows to get rid of it popping up on me even after uninstalling it.

I'm getting rid of the card and waiting for my pre-order of the XFX Zero to give this another try.


----------



## OC-NightHawk

Neoki said:


> So I could not get my gigabyte extreme waterforce to go above 2740mhz core/2130-2150 mem no matter what I did. I repasted twice, replaced pads once but deltas would just not improve beyond 35c from edge to hotspot. I am running 2x480, 1x240, 1x120 rads in my custom loop. So heat/liquid temps are not the issue and flow rate is healthy at 260 L/h.
> 
> Another thing that I really didn't like was the forced use of RGB Fusion to change the LEDs. This software is like malware once you install it. I ended up having to reinstall windows to get rid of it popping up on me even after uninstalling it.
> 
> I'm getting rid of the card and waiting for my pre-order of the XFX Zero to give this another try.


That is strange. I ran RGB Fusion one time and haven't had to run it since. It doesn't open and the card simply stays as I set it. I didn't have to replace the thermal paste at all and I'm hitting 2780MHz on the GPU and 2158MHz for the memory at around 300W. I have two 280mm and one 140mm radiator and the loop is also cooling a Ryzen 9 5950X. The GPU runs at 55C with a junction of 70C while at absolute max. I wouldn't expect the memory to go that much beyond 2150 if you get anything. It's just the way it is. What pump are you using for all your radiators?


----------



## Neoki

OC-NightHawk said:


> That is strange. I ran RGB Fusion one time and haven't had to run it since. It doesn't open and the card simply stays as I set it. I didn't have to replace the thermal paste at all and I'm hitting 2780MHz on the GPU and 2158MHz for the memory at around 300W. I have two 280mm and one 140mm radiator and the loop is also cooling a Ryzen 9 5950X. The GPU runs at 55C with a junction of 70C while at absolute max. I wouldn't expect the memory to go that much beyond 2150 if you get anything. It's just the way it is. What pump are you using for all your radiators?


Under load the card was around 52C but the hotspot was at 90C. It came with the same temps by default, and I could not get that hotspot any better no matter what I did with the repastes and repads.

I was running 2 Corsair XD5 pumps spaced throughout the loop. I'm redoing my loop in prep for the Zero when it comes, downgrading from HWLabs GTS radiators to the XSPC TX series so I can remove one pump. There was too much flow restriction in the previous GTS rad layout for a single pump: I was maxed at 140 L/h before adding the second, which took the loop to 260 L/h.

I'm just going to chalk it up to a really bad bin, which I was not expecting with an XTXH card.


----------



## OC-NightHawk

Neoki said:


> Under load card was around 52c but hotspot at 90c. It came by default with the same temps. I could not get that hotspot better no matter what did with the repastes and repads.
> 
> Was running 2 Corsair XD5 pumps spaced through out loop. I'm redoing my loop in prep for the Zero when it comes. Going to downgrade from HWLabs GTS's to XSPC TX series so I can remove 1 pump. Had too much flow restriction on previous GTS Rad layout for 1 pump, I was maxed at 140 L/h before adding the 2nd to take the loop to 260 L/h.
> 
> I'm just going to chalk it up to a really bad bin, which i was not expecting with an XTXH card.


I apologize if I missed it but what is your ambient room temperature?

I'm not surprised about the memory speed limit at all. JayzTwoCents ran into the same issue with the PowerColor Liquid Devil Ultimate. I doubt the XFX will do better regarding the memory.

2740MHz isn't bad if it's rock solid, but a 90C junction is concerning. There is a chance your next XTXH card will do better. My radiators are just the Corsair XR5 series with one Corsair XD5 pump. It's always possible you didn't win the silicon lottery, but reaching over 2700MHz is still better than many manage. Do you have a way to confirm the actual flow and fluid temperature as the fluid passes from component to component?

Did you use the MorePowerTool? If so what are your settings?


----------



## Neoki

OC-NightHawk said:


> I apologize if I missed it but what is your ambient room temperature?
> 
> I'm not surprised about the memory speed limit at all. Jays2Cents ran into the same issue with the Power Color Liquid Devil Ultimate. I doubt the XFX will do better regarding the memory.
> 
> 2740MHz isn't bad if it is rock solid. 90C junction is concerning. There is a chance your next XTXH card might do better. My radiators are just the Corsair XR5 series and one Corsair XD5. It's always a possibility that you didn't win the silicon lottery but reaching over 2700MHz is still better then many. Do you have a way to confirm the actual flow and fluid temperature as the fluid passes from component to component?
> 
> Did you use the MorePowerTool? If so what are your settings?


Ambient room temp is about 24C (I have HVAC, as I live in a hellishly hot part of the USA right now). The memory overclocking wasn't concerning me at all; I set it back to default at one point to see if I could push the core more. But that hotspot, I think, is just too big of an issue. Agreed, and hopefully the new card does at least a bit better on the temps front.

As for confirming flow comp to comp, the GPU was first in series behind the pump, followed by the CPU, then 2 rads, pump, 2 rads, pump, and back to the start. The flow meter was just behind the 2nd pump.


----------



## ptt1982

I found an interesting thing from my Corsair XD3 pump reservoir combo! 

Just for giggles (I had nothing else to do), I set the XD3 to 100% speed and did a Time Spy custom run of GT1 and GT2 twice; the peak GPU junction was 75C, with the edge at 51C. I then lowered the pump to 78% (the point where I can't hear it at all from the room next to my PC) and did the same GT1 and GT2 runs twice, resulting in peaks of 52C edge / 77C junction. Previously, with the pump at 70%, edge/junction was at 55C/85C after one run. The difference between 70% and 100% is edge -4C, junction -10C, which would indicate a slight flow problem; that's 3400rpm vs 4500rpm. I think 78%, which is 3900rpm, is the best balance. I don't care about the 1C/2C difference between 78% and 100%, but the 3C/8C difference between 70% and 78% is quite large. I like these new temps!!
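Those deltas are easier to see laid out side by side. A quick sketch (Python; the temps and rpm figures are just the ones from this post, hard-coded):

```python
# Peak temps (edge C, junction C) observed at each XD3 pump duty cycle.
# Values come from the runs described above; rpm figures are as reported.
observed = {
    70: {"rpm": 3400, "edge": 55, "junction": 85},
    78: {"rpm": 3900, "edge": 52, "junction": 77},
    100: {"rpm": 4500, "edge": 51, "junction": 75},
}

def delta(a: int, b: int) -> tuple[int, int]:
    """Return (edge, junction) temperature drop going from duty a to duty b."""
    return (observed[a]["edge"] - observed[b]["edge"],
            observed[a]["junction"] - observed[b]["junction"])

print(delta(70, 100))  # (4, 10): big gain from 70% -> 100%
print(delta(70, 78))   # (3, 8):  most of that gain arrives by 78%
print(delta(78, 100))  # (1, 2):  little left between 78% and 100%
```

The 78% setting captures nearly all of the cooling benefit, which is why the noise trade-off favors it.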


----------



## lestatdk

ptt1982 said:


> I found an interesting thing from my Corsair XD3 pump reservoir combo!
> 
> Just for giggles, (I had nothing else to do,) I put the XD3 to 100% speed and did TS custom run of GT1 and GT2 twice, and the peak junction GPU was 75C, whereas edge was 51C. Lowered the pump to 78% (the rate when I can't hear it at all from the other room where my PC is), and did the same GT1 and GT2 runs twice, resulting into peaks of 52C/77C. Previously, when the pump was at 70%, edge was at 55C/85C after one run. The difference between 70% and 100% is edge -4C junction -10C, which would indicate a slight flow problem, this is 3400rpm vs 4500rpm. I think that 78%, which is 3900rpm, is the best balance, I don't care about that 1C/2C difference, but that 3C/8C difference is quite large between 70% and 78%. I like these new temps!!


If flow weren't a problem, there would only be a small difference between 100% and 50% pump speed.


----------



## ptt1982

ptt1982 said:


> I found an interesting thing from my Corsair XD3 pump reservoir combo!
> 
> Just for giggles, (I had nothing else to do,) I put the XD3 to 100% speed and did TS custom run of GT1 and GT2 twice, and the peak junction GPU was 75C, whereas edge was 51C. Lowered the pump to 78% (the rate when I can't hear it at all from the other room where my PC is), and did the same GT1 and GT2 runs twice, resulting into peaks of 52C/77C. Previously, when the pump was at 70%, edge was at 55C/85C after one run. The difference between 70% and 100% is edge -4C junction -10C, which would indicate a slight flow problem, this is 3400rpm vs 4500rpm. I think that 78%, which is 3900rpm, is the best balance, I don't care about that 1C/2C difference, but that 3C/8C difference is quite large between 70% and 78%. I like these new temps!!


I had to retest this multiple times immediately, and it seems there is only a 2C difference even at 70% speed. I have no idea why, but my Time Spy junction peak temps have dropped 8-10C in the last three weeks.

This behavior started when I changed the GPU holder position; perhaps the extremely heavy waterblock was warping the GPU. That would explain why the temps were extremely good when I first assembled the loop and got worse over time. The second time, the temps were also very good in the beginning and got slightly worse over time. Then I changed the GPU holder position so the pressure is now even across the waterblock, and since then the temps have been going down slowly but surely.

On top of the GPU holder position, there can only be two other explanations: 1) drivers, 2) the temperature and humidity drop here in Tokyo. However, when I changed the GPU holder earlier and saw similar results (slowly declining temps), the temperature and humidity outside were always the same, 30-35C. Hmm!!

Well, it is only good news. I'd still like to know the answer!


----------



## wasprodker

So I got a PowerColor Red Devil Ultimate a couple days ago. I threw a waterblock on it yesterday and I've been benching for a bit.

Just writing down my numbers here since I'm unsure if it's a good card or not.
50C GPU temp, 72C hotspot on average.
With MPT it draws up to 450W, but it's usually sitting around 350-400W.

Around 2750 in Time Spy before crashing.
Can do 2850 in games all day.

Is it usual to have such a large variance in edge temp vs hotspot? And also such a big difference in clock speeds between benchmarks and games?
And finally, is it a good card OC-wise? I've been running Nvidia all my life so I'm not sure about the metrics I should aim for.

Time Spy: 23,900 GPU score.
One of my scores: I scored 21 467 in Time Spy


----------



## lestatdk

ptt1982 said:


> I had to retest this multiple times immediately, and it seems there is only 2C difference even at the 70% speed. I have no idea why, but my time spy junction peak temps have dropped 8-10C in the last three weeks.
> 
> This behavior did start when I changed the GPU holder position, perhaps the extremely heavy waterblock was warping the gpu. This would explain why the temps were extremely good when I first assembled the loop, and got worse over time. The second time the temps were also very good in the beginning, and got slightly worse over time. Then I changed the GPU holder position, and the pressure is now even across the waterblock, and since then the temps have been going down slowly and surely.
> 
> On top of the GPU holder position, there can be only two other explanations 1) drivers 2) the temperature and humidity drop here in Tokyo. However, when I changed the GPU holder earlier and saw similar results (slowly declining temps), the tempeture and humidity outside was always the same, 30-35C. Hmm!!
> 
> Well, it is only good news. I'd still like to know the answer!


----------



## kairi_zeroblade

wasprodker said:


> So i got a Powercolour Red devil Ultimate a couple days ago. I threw a waterblock on it yesterday and i've been benching for a bit.
> 
> Just writing down my numbers here since im unsure if its a good card or not.
> 50 C gputemp, 72 C Hotspot on average.
> With MPT it draws upto 450 W, but its usually sitting around 350-400 W
> 
> Around 2750 in timespy before crash
> Can do 2850 in games all day.
> 
> Is it usual to have such a large variance in Temp vs hotspot. And also such a big diff in clockspeeds between benchmarks and games?
> And finally, is it a good card OC wise? Ive been running Nvidia all my life so im not sure about the metrics i should aim for.
> 
> Timespy; 23 900 GPU-score
> One of my scores: I scored 21 467 in Time Spy


Disable your SMT.. it should net you around 18k in CPU score, pushing your total score upwards..


----------



## ptt1982

wasprodker said:


> So i got a Powercolour Red devil Ultimate a couple days ago. I threw a waterblock on it yesterday and i've been benching for a bit.
> 
> Just writing down my numbers here since im unsure if its a good card or not.
> 50 C gputemp, 72 C Hotspot on average.
> With MPT it draws upto 450 W, but its usually sitting around 350-400 W
> 
> Around 2750 in timespy before crash
> Can do 2850 in games all day.
> 
> Is it usual to have such a large variance in Temp vs hotspot. And also such a big diff in clockspeeds between benchmarks and games?
> And finally, is it a good card OC wise? Ive been running Nvidia all my life so im not sure about the metrics i should aim for.
> 
> Timespy; 23 900 GPU-score
> One of my scores: I scored 21 467 in Time Spy


Non-Ultimate Red Devil here. Your temps are normal and your graphics scores are decent, but I'd be careful about Time Spy clocks vs in-game clocks. I thought the very same thing when I was starting my benchmarks, but after longer sessions the card started crashing in just about every game when it was above the TS-stable clocks. Especially when you disable vsync and run stuff like Control with RT on at 1440p render resolution upscaled to 4K and maxed out, it most likely won't hold 2850MHz for more than a couple of hours without crashing.


----------



## lestatdk

I'll do some testing on 9.2 vs 7.2. I now have a dual-boot Windows setup so I can play around with things without messing up my games and work stuff. Will update later with results.


----------



## lestatdk

Wilsonj471 said:


> Looking for some suggestions from all you experienced tuners. I've been experimenting a little with MPT on my reference 6900 XT: I have the power limit set at 325W and have left all the other settings stock (TDC limits 320A/55A). In Wattman I have the slider at +4 in power tuning. My concern is seeing power consumption of 425W max ASIC power in HWiNFO during benchmarks. They aren't typically long pulls that high while I'm watching, but some last a few seconds north of 400W. From reading through a lot of these posts, that doesn't appear to be an issue power-delivery-wise if you can cool the card, which my custom loop appears to be doing well: max temp 64C (during 400W pulls) with averages in the mid 40s during Time Spy runs, as you can see in my attachment.
> 
> Can I give this reference card more power, or am I already playing with fire with spikes up around 400? During Time Spy runs I typically see 300-375W on average. My Time Spy score is currently just over 22k and I'm looking for advice to push it up without making a mistake out of ignorance. Also, beyond power, if there are other settings I should be tweaking I'd be open to suggestions. I keep looking at the Frequency tab in MPT; the top-left line says min 500MHz, max 2660MHz GFX, and that appears to be the exact spot where my card fails during benchmarks: a spike right at 2660MHz shuts down the test when I have the max slider over 2650MHz in Wattman. Full specs below for reference.
> 
> Specs:
> MSI X570 MEG Unify board
> R9 5900X, PBO curve all -5, peak pull 180W, waterblocked
> AMD RX6900xt reference card - Water Blocked - Settings below in image.
> Be Quiet! Dark Power Pro 11 850watt- has 16 gauge pcie 8pin power leads and I have two separate leads feeding the card.
> 3x 360mm x 45mm radiators being pushed at 193 L/h on the digital display by a D5 pump
> 
> View attachment 2526130
> View attachment 2526131


Try bringing up the minimum frequency. Have it around 200MHz below max or so and work from there. Maybe set it to 2400 initially.


----------



## lestatdk

Results are in: 7.2 vs 9.2 Time Spy. I had high hopes, but it turned out to be a dud in the end.

Same MPT settings and same Radeon settings, averaged over 3 runs, GPU score only. DDU used between driver installs; HWiNFO64 used for temps and power readings.

Time Spy GPU score difference: 17 points in favor of 9.2. A whopping 0.07%.

The hotspot-to-edge temp difference was the same. Ambient temp was a bit higher on the 7.2 runs, so I only look at the delta to factor that out.

Max ASIC power is interesting: the average was 493W with the 9.2 driver and 392W with the 7.2 driver.

Somehow the reporting of power has changed. Since the MPT values, GPU score, and temperatures are the same, it can only be down to reporting, since the card can't be using 100W more without heating up.

So there it is. Guess I can stay on 9.2 anyway.
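For scale, here's the arithmetic behind that 0.07% (the two GPU scores below are hypothetical, picked only to reproduce a 17-point gap in the right ballpark):

```python
def pct_diff(old: float, new: float) -> float:
    """Percentage change from the old score to the new one."""
    return (new - old) / old * 100

# Hypothetical Time Spy GPU scores with a 17-point gap, as in my runs.
score_72, score_92 = 24270, 24287
print(round(pct_diff(score_72, score_92), 2))  # 0.07 - well inside run-to-run variance
```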


----------



## OC-NightHawk

Neoki said:


> Ambient room temp is about 24c (I have HVAC as I live in a lovely hellish temps part of the USA right now). The memory overclocking wasn't concerning me at all, I defaulted them back at one point to see if I could push core more. But that hotspot I think is just too big of an issue. Agree and hopefully the new card does at least better on the temps front.
> 
> As for confirming flow comp to comp, the GPU was first in series behind the pump, following that the CPU, then 2 rads, pump, 2 rads, pump and restart. Flow meter was just behind the 2nd pump.


I'm in Texas currently so I can relate to the hellish heat. I was here only for the job, but that job has come and gone, and my current gig is remote-only with some travel; it doesn't matter where I live other than that there's an airport nearby. So as soon as I am free of the lease here, I am going to move someplace less oppressively hot.

The only thing I can think of for now, with the card you do have, is to try lowering the voltage just a bit and bringing your TDP down if you have it set high. Sometimes that will actually raise the speed ceiling a bit. Other than that, I have my fingers crossed that your next card is a better sample.


----------



## wasprodker

kairi_zeroblade said:


> disable your SMT..should net you around 18k in cpu score..pushing your total score upwards..



Thanks for the tip, but for me personally im not particularly interested in the total score


----------



## Neoki

OC-NightHawk said:


> I'm in Texas currently so I can relate with the hellish heat. I was here only for the job. However that job has come and gone and my current gig is remote only with some travel and it doesn't matter where I live for travel other then there is a airport nearby. So as soon as I am free of the lease here I am going to move someplace less oppressively hot.
> 
> The only thing I can think of for now with t he card you do have is to try lowering the voltage just a bit and bring your TDP down if you have it set high. Sometimes it will actually increase the speed ceiling just a bit more. Other then that I have my fingers crossed your next card is a better sample.


I'm in Texas looking to leave soon as well. Similar situation to you job wise.

I have packed up the card and will be shipping it back today. Hoping I get in on the first wave of shipments for the Zero, then I can hopefully tinker around again two weeks from now. Either way, this new monster machine I built has turned into more of a classic-car-in-the-garage project at the moment. My current 3770K/980 Ti is still holding on well while I wait.


----------



## OC-NightHawk

Neoki said:


> I'm in Texas looking to leave soon as well. Similar situation to you job wise.
> 
> I have packed up the card and will be shipping it back today. Hoping I get in on the first wave of shipments for the Zero, then I can hopefully tinker around again 2 weeks from now. Either way this new monster machine I built has turned more into a classic car project in the garage at the moment. My current 3770K/980ti is holding on well still while I waiting.


I have my fingers crossed for you. It took me months to get my Ryzen machine parts ordered. I can relate with the waiting.


----------



## jonRock1992

It seems the Red Devil cards are the only ones that benefit from the new driver? Lol. I've noticed mine is really sensitive to different drivers.


----------



## Roacoe717

Does anyone else have a problem with MSI Afterburner reading inconsistent wattage? At stock with power limit +15 it bounces back and forth between 350W and 250W. It never did that before.


----------



## cfranko

Roacoe717 said:


> Does anyone else have a problem with msi afterburner reading inconsistant wattage? Stock with power limit +15 it goes to 350w to 250w back and forth. Never did it before.


It’s a new behavior with the new 21.9.2 drivers. Same happens to me.


----------



## OC-NightHawk

I got my overall score up to 21763. I'm about 100 graphics points away from the 100th-place 6900 XT graphics score. AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,Micro-Star International Co., Ltd. PRESTIGE X570 CREATION (MS-7C36) (3dmark.com)


----------



## kratosatlante

Best I've gotten so far in SS 4K Optimized, with stock thermal paste.

The Formula is easier to disassemble than the 6900 XT Phantom. I swapped the stock paste for Thermal Grizzly Kryonaut Extreme (14 W/mK) and, so far so good, core and hotspot temps dropped 7-9C, with VRAM and VRM down a couple of degrees after adjusting the screws a little. I'll leave it for a week to see if the temps change; if it doesn't improve, I'll disassemble and try again with more paste, although I applied plenty. From what they recommend on overclockers forums, Gelid also works even though it's only 8.5 W/mK, and on the Phantom, Cooler Master MasterGel Maker (11 W/mK) dropped temps by more than 5C, so if the Kryonaut doesn't work out I'll use that. Undervolted, the two cards together do a beautiful 123 MH/s at 260W: VRAM at 58C on the Phantom with a 1.3V UV, 70C on the Formula at 1.3V (I'll see about pushing it further), core at 650mV on both.
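For what it's worth, the efficiency those undervolt numbers imply (a trivial calculation; the 123 MH/s and 260W figures are the ones quoted above):

```python
# Mining efficiency implied by the undervolt figures above:
# two cards doing 123 MH/s combined at 260W.
hashrate_mhs = 123.0
power_w = 260.0

efficiency = hashrate_mhs / power_w  # MH/s per watt
print(round(efficiency, 3))  # 0.473 MH/s per W
```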


----------



## CS9K

wasprodker said:


> Around 2750 in timespy before crash
> Can do 2850 in games all day.
> 
> Is it usual to have such a large variance in Temp vs hotspot. And also such a big diff in clockspeeds between benchmarks and games?
> And finally, is it a good card OC wise? Ive been running Nvidia all my life so im not sure about the metrics i should aim for.





ptt1982 said:


> Non-Ultimate Red Devil here. Your temps are normal, graphics scores are decent, but I'd be careful about the Timespy clocks vs games' clocks. _I thought the very same thing when I was starting to do my benchmarks, *but after longer sessions the card started crashing in about every game when it was above the TS clocks.*_ Especially when you disable vsync and run stuff like Control with RT on at 1440p render resolution upscaled to 4K and maxed out, it won't most likely hold more than a couple of hours at 2850mhz without crashing.


I had an almost identical experience, myself. 

TL;DR: Time Spy (and to a lesser extent, Port Royal) pushes your GPU harder than almost every game out there will. Find the clock speeds that you are fully stable at for both the Time Spy and Port Royal benchmarks. If you run those settings, you will be rock stable in virtually everything.

What I found is that, while less-demanding games _seem_ stable at clock speeds higher than the TS-stable speeds, you _are not actually fully stable, you just haven't crashed yet._

YMMV, of course, but my experience mirrors @ptt1982 's



Roacoe717 said:


> Does anyone else have a problem with msi afterburner reading inconsistant wattage? Stock with power limit +15 it goes to 350w to 250w back and forth. Never did it before.


MSI Afterburner does not play well with modern AMD drivers, especially the full-install package. I usually recommend that everyone delete all traces of MSI Afterburner before installing the full-package Adrenalin drivers; Adrenalin has everything one needs to overclock, and HWiNFO64 has everything one needs to monitor stats.


----------



## Roacoe717

I'll mess around and figure it out, but HWiNFO64 reported 418W on my reference card.


----------



## kairi_zeroblade

Roacoe717 said:


> I'll mess around and figure it but Hwinfo64 reported 418w on my refrence card.


This is a known issue with the drivers; it's been reported and AMD is aware. They said that from time to time, readings from the GPU may be inconsistent:

> Radeon performance metrics and logging features may intermittently report extremely high and incorrect memory clock values.


----------



## 6u4rdi4n

Speaking of power and power spikes. I'm currently using an EVGA SuperNova 850W G3 for my 9900K and 6900 XT. Should I get something with more power? I've been eyeing the Asus ROG Thor 1200W because it's built on Seasonic Prime with a few modifications and it comes with a set of individually sleeved cables, but I haven't really made up my mind if I should or not yet.


----------



## kairi_zeroblade

6u4rdi4n said:


> Speaking of power and power spikes. I'm currently using an EVGA SuperNova 850W G3 for my 9900K and 6900 XT. Should I get something with more power? I've been eyeing the Asus ROG Thor 1200W because it's built on Seasonic Prime with a few modifications and it comes with a set of individually sleeved cables, but I haven't really made up my mind if I should or not yet.


If you're not into chasing e-peen numbers, your 850W should be fine for most heavy gaming. Otherwise, a 1000W or more would be an investment at this point (a good one, I must say).


----------



## 6u4rdi4n

kairi_zeroblade said:


> if you're not into chasing E-Peen numbers your 850W should be fine for mostly heavy gaming..else a 1000w or more, would be an investment at this point (a good one, I must say)


What do you mean exactly by "E-Peen numbers"? Showing off a higher wattage (and expensive PSU) or overclocking my graphics card?


----------



## kairi_zeroblade

6u4rdi4n said:


> What do you mean exactly by "E-Peen numbers"? Showing off a higher wattage (and expensive PSU) or overclocking my graphics card?


Benchmarking and chasing world records or podium finishes on the hall of fame..


----------



## 6u4rdi4n

kairi_zeroblade said:


> Benchmarking and chasing world records or podium finishes on the hall of fame..


It's mostly 24/7 use and custom water cooling, but I do benchmark a bit and I take advantage of MPT to increase the power limit, so maybe a beefier PSU isn't a completely bad idea. I might still be somewhere in the top 100 9900K + 6900 XT Time Spy results.


----------



## kairi_zeroblade

6u4rdi4n said:


> It's mostly 24/7 use and custom water cooling, but I do benchmark a bit and I take advantage of MPT to increase the power limit so maybe a beefier PSU isn't a completely bad idea. Might still be somewhere in the top 100 9900K + 6900 XT Time Spy.


Well, you can invest in a beefier one for your use case.. I think the question of "whether I should buy a new 1xxxx PSU" should be self-explanatory.. 🤣


----------



## Martin778

Hi guys,
What would be the general advice on these two models: ASUS ROG Strix LC 6900 XT vs Sapphire Toxic 6900 XT? The Sapphire has a 360mm radiator vs 240mm on the ASUS; price-wise both are insanely overpriced (I think the Toxic is over €2k and the Strix around €1.9k).
Still hesitating between going back to AMD or trying to score a 3090 FE... or the 3080 Ti FE, which is available from scalpers at 1300-1400.


----------



## cfranko

Martin778 said:


> Hi Guys,
> What would be the general advice on these two models: ASUS ROG Strix LC6900XT vs 6900XT Sapphire Toxic? The sapphire has a 360mm vs 240mm on the ASUS, price wise both are insanely overpriced (I think the Toxic is over €2k and Strix around €1,9k).
> Still hesitating between going back to AMD or trying to score a 3090 FE....or the 3080Ti FE which is available from scalpers at 1300-1400.


If the 3080 Ti is 1300-1400, then I don't think it's worth paying 2000 for a 6900 XT.


----------



## lestatdk

cfranko said:


> If the 3080 Ti is 1300-1400 then I don't think it is worth paying 2000 for a 6900 xt


Agreed; if you can get a 3080 Ti for 1400, go for it.


----------



## ZealotKi11er

How is 1300-1400 for a 3080 Ti scalped? MSRP is 1200; that is not scalped. At that price, get the 3080 Ti.


----------



## J7SC

Martin778 said:


> Hi Guys,
> What would be the general advice on these two models: ASUS ROG Strix LC6900XT vs 6900XT Sapphire Toxic? The sapphire has a 360mm vs 240mm on the ASUS, price wise both are insanely overpriced (I think the Toxic is over €2k and Strix around €1,9k).
> Still hesitating between going back to AMD or trying to score a 3090 FE....or the 3080Ti FE which is available from scalpers at 1300-1400.


...quick FYI: there are two versions of the 'ASUS ROG Strix LC 6900 XT' - you want to make sure to grab the one with 'TOP' on the box.


----------



## weleh

EU people can check pccomponentes site, they have sick prices sometimes.

A friend of mine grabbed a Toxic EE for €1500 there.


----------



## CS9K

6u4rdi4n said:


> It's mostly 24/7 use and custom water cooling, but I do benchmark a bit and I take advantage of MPT to increase the power limit so maybe a beefier PSU isn't a completely bad idea. Might still be somewhere in the top 100 9900K + 6900 XT Time Spy.


With an EVGA G3 at 850W, you are perfectly fine with your current setup.
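A rough back-of-envelope supports that (all numbers here are assumptions pulled from this thread's worst cases, not measurements):

```python
# Hypothetical worst-case system power budget vs an 850W unit.
cpu_w = 250        # heavily overclocked 9900K, worst case
gpu_spike_w = 450  # MPT-raised 6900 XT transient, per reports in this thread
rest_w = 100       # board, drives, fans, pumps - a generous allowance

total_w = cpu_w + gpu_spike_w + rest_w
headroom = 850 - total_w
print(total_w, headroom)  # 800 50
```

Tight on paper, but CPU and GPU rarely peak at the same instant, and transient GPU spikes are far shorter than what trips a quality PSU's protections.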


----------



## EastCoast

Fujipoly trumps all.


----------



## Martin778

So I've read, but no one seems to give an answer on the thickness... some say 1.5mm front / 3mm back, others say 2mm front / 2mm back... never-ending story.


----------



## EastCoast

Martin778 said:


> So I've read but no one seems to give an answer on the thickness...some say 1.5 front 3mm back, others say 2mm front 2mm back...never ending story.


Lol, oh, I thought you were talking about a 6900 XT.
Wrong thread.


----------



## tolis626

6u4rdi4n said:


> Speaking of power and power spikes. I'm currently using an EVGA SuperNova 850W G3 for my 9900K and 6900 XT. Should I get something with more power? I've been eyeing the Asus ROG Thor 1200W because it's built on Seasonic Prime with a few modifications and it comes with a set of individually sleeved cables, but I haven't really made up my mind if I should or not yet.


Well, YMMV, but I have the exact same PSU as you do, and it's been rock solid. My 5900X is pushed a bit with PBO to 180W PPT (it benefits from more, but my damn AIO struggles with thermals, SMH) and I've pushed my 6900XT up to 380W. The PSU never even flinched; I don't think I've ever heard its fan ramp up to audible levels. I think you'll be fine. The only problem I've had with it is that it comes with 2 single 8-pin PCIe cables and 2 8+(6+2)-pin daisy-chain ones. Functionally that's fine, but having 4 cables routed to my GPU with one being unused was fugly. It was easily solved by getting a set of CableMod extensions, though. With that said, the cable clutter in the back of my Evolv X is unbelievable.


----------



## 6u4rdi4n

tolis626 said:


> Well, YMMV, but I have the exact same PSU as you do, and it's been rock solid. My 5900X is pushed a bit with PBO to 180W PPT (benefits from more, but my damn AIO struggles with thermals SMH) and I've pushed my 6900XT up to 380W. The PSU never even flinched, I don't think I've ever heard its fan ramp up to audible levels. I think you'll be fine. Only problem I've had with it is that it comes with 2 single 8-pin PCIe cables and 2 8+(6+2)-pin daisy chain ones. Functionally it's fine, but having 4 cables routed o my GPU with one being unused was fugly. It was easily solved by getting a set of CableMod extensions, though. With that said, the cable clutter in the back of my Evolv X is unbelievable.


My 9900K could probably pull 250W, lol. 

It has been holding up so far, but it feels like it's at its limit sometimes. I've had some weird restarts when pushing clocks, but with nothing showing up in logs and no complete shutdowns, it's a bit tricky to pinpoint. 

I've had it since November 2017, so I've gotten used to the noise from it, but I actually find it kind of loud. It became a lot better when I switched off ECO mode. I had a 750W of the same series a few years ago which was completely fine to use in ECO mode, but the fan profile of the 850W is a lot more aggressive. I have 9 Noctua NF-A12x25 fans for my cooling, running at fairly low RPM, so the PSU can easily be noticeable. 

The PCIe cables have been annoying me as well. Right now I only need 2x8, so it doesn't matter, but I've had other cards... I've been thinking about extensions, but there would just be way too much cable. I could get some replacement cables (for example, CableMod), but the total cost would be significant. That's one of the reasons I've been eyeing the ROG Thor, since it comes with a set of individually sleeved cables (24-pin + PCIe + CPU).

I don't know. Maybe I'm exaggerating to justify the upgrade.


----------



## tolis626

6u4rdi4n said:


> My 9900K could probably pull 250W, lol.
> 
> It has been holding up so far, but it feels like it's at its limit some times. Had some weird restarts when pushing clocks, but without anything showing up in logs and not having any completely shutdowns, it's a bit tricky to pinpoint.
> 
> I've had it since November 2017, so I've gotten used to the noise from it, but I actually find it kind of loud. It became a lot better when I switched off ECO mode. I had 750W of the same series a few years ago which was completely fine to use in ECO mode, but the fan profile of the 850W is a lot more aggressive. I have 9 Noctua NF-A12X25 for my cooling, running at fairly low RPM, so the PSU could easily be noticable.
> 
> The PCI-E cables have been annoying me as well. Right now I only need 2x8, so it doesn't matter, but I've had other cards... I've been thinking about extensions, but there would just be way too much cable. I could get some replacement cables (for example cablemod), but the total cost would be significant. That's one of the reasons I've been eyeing the ROG Thor, since it comes with a set of individually sleeved cables (24p+PCI-E+CPU).
> 
> I don't know. Maybe I'm exaggerating to justify the upgrade.


Oh, cut the BS. We all know that last part is the truth. And you know it too. 

I'm guilty of that too. Back in May I had no intention of getting an overpriced RDNA2 GPU, yet I still upgraded my 3800X to a 5900X while still rocking a 5700XT. I had absolutely no reason to upgrade, and even less of a reason to go with 12 cores, as I basically do no productivity stuff on my PC currently. But still, the itch was strong. That said, we are in the right place here, where all the sufferers of severe chronic upgradeitis gather together. 

As for the PSU, you can actually hear the fan? I can't hear mine at all over my case fans and AIO pump, which are already pretty quiet (just a soft whoosh). In my previous rig I had an EVGA 1300 G2 (4790K OC and 390X OC; I was planning to go multi-GPU, which never happened). Now THAT was loud. I never heard the fan ramp up, but at idle it was by far the loudest thing in my system.


----------



## 6u4rdi4n

tolis626 said:


> Oh cut the BS. We all know that that last part is the truth. And you know it too.
> 
> I'm guilty of that too. Back in May I had no intention of getting an overpriced RDNA2 GPU and I still upgraded my 3800x to a 5900X while still rocking a 5700XT. I had absolutely no reason to upgrade, and I had even less of a reason to go with 12 cores as I basically do no productivity stuff on my PC currently. But still, the itch was strong. That said, we are in the right place here. Where all the sufferers of severe chronic upgradeitis gather together.
> 
> As for the PSU, you can actually hear the fan? I can't hear it on mine, like at all, over my case fans and AIO pump, which are already pretty quiet (just a soft woosh). In my previous rig I had an EVGA 1300G2 (4790K OC and 390x OC, but was planning to go multi-GPU, which never happened). Now THAT was loud. I never heard the fan ramp up, but at idle it was by far the loudest thing in my system.


It's definitely audible. In ECO mode it was horrible. Sound and noise are very subjective, but compared to my 9 Noctua fans, it's "loud". I guess what annoys me the most is the change in sound profile as the fan ramps up under heavy load.

The upgraditis is a real challenge! I've been working hard to keep my 9900K. I still don't believe it's going to stay here as long as my 2600K did, but I'm going to try to hold on to it for a while longer. Wish I could say the same for my 6900 XT. It's a great card, and performance is good, but it's still not enough. I guess I might be one of those people who can never get enough graphics performance. Haha.


----------



## cfranko

Will using too thick washers crack my gpu die?


----------



## EastCoast

cfranko said:


> Will using too thick washers crack my gpu die?


Just don't tighten down as much. Count the number of turns. All you want is even contact between the die and the heatsink, not the tightest contact.
That is the GOAL.


----------



## 99belle99

I got my best ever score just now in TimeSpy. My CPU score is down on this run for some reason. I may run it again some time just to see. This is a reference card on stock cooler as well so pretty good considering.

I scored 20 056 in Time Spy


----------



## Stopthewar

Can you post your settings? Thx


----------



## 99belle99

MorePowerTool:

Power limit (W): 335
TDC Limits (A): 355 GFX and 55 Soc

That's all I changed in there. Then in Radeon settings I turned off zero fan mode, maxed out the memory to 2150 with fast timings, maxed out the power limit, and kept the minimum frequency at 500 and the maximum at 2650MHz, and that's pretty much it.


----------



## EastCoast

Oops, wrong thread.


----------



## Simzak

Does anyone have the MSI Gaming Z Trio 6900 XTXH card and is it worth buying for 1600USD? Will it be decent for overclocking if I don't plan on watercooling it?


----------



## lestatdk

Simzak said:


> Does anyone have the MSI Gaming Z Trio 6900 XTXH card and is it worth buying for 1600USD? Will it be decent for overclocking if I don't plan on watercooling it?


The PCB is the same as my Gaming X Trio. And with the stock cooler it has massive problems overheating. Also saw a reviewer having the same problem. 
Personally, I'd go with the Asus or Powercolor XTXH models instead


----------



## Simzak

lestatdk said:


> The PCB is the same as my Gaming X Trio. And with the stock cooler it has massive problems overheating. Also saw a reviewer having the same problem.
> Personally, I'd go with the Asus or Powercolor XTXH models instead


Thanks for the advice, all of the other XTXH cards are 500 dollars more expensive where I live though. Do you think the regular 6900 XT Red Devil would be a better choice for 100 dollars more?


----------



## 99belle99

Simzak said:


> Thanks for the advice, all of the other XTXH cards are 500 dollars more expensive where I live though. Do you think the regular 6900 XT Red Devil would be a better choice for 100 dollars more?


Where are you, New Zealand or Australia? I cannot make out that flag.


----------



## Simzak

99belle99 said:


> Where are you New Zealand or Australia? I cannot make out that flag.


I'm from New Zealand, just trying to decide between these cards, but I keep hearing mixed things about all of them and have absolutely no idea what to go for. I've been trying to decide for a few days now and I can't stop changing my mind. If someone could just tell me what to get I'd really appreciate it lol. I just want the best card to get 400+ fps on a 1080p 360Hz monitor.






- MSI Radeon RX 6900 XT Gaming Z Trio 16G — 16GB GDDR6, PCIe 4.0, max 2425 MHz, 3x DisplayPort, 1x HDMI, 850W or higher PSU (www.extremepc.co.nz)
- Gigabyte Radeon RX 6900 XT Aorus Master — 16GB GDDR6, PCIe 4.0, up to 2365 MHz, 3x fans, 2x DisplayPort, 2x HDMI, 322mm length, 3x 8-pin power, 850W or higher PSU (www.extremepc.co.nz)
- PowerColor Red Devil RX 6900 XT (AXRX 6900XT 16GBD6-3DHE/OC) — 16GB GDDR6, PCIe 4.0, up to 2340 MHz, 3-slot, 3x DisplayPort, 1x HDMI, 320mm length, 3x 8-pin power, 900W or higher PSU recommended (www.pbtech.co.nz)
- ASUS TUF Gaming RX 6900 XT OC Edition (TUF-RX6900XT-O16G-GAMING) — max 2340 MHz, 1x HDMI 2.1, 3x DisplayPort 1.4a, 5120 stream processors, PCIe 4.0, 320mm length, recommended 850W PSU (www.extremepc.co.nz)
- Sapphire Nitro+ RX 6900 XT SE Gaming OC (11308-03-20G) — 16GB GDDR6, PCIe 4.0, up to 2365 MHz, 2.7-slot, 3x DisplayPort, 1x HDMI, 2x 8-pin + 1x 6-pin power, 850W or higher PSU recommended (www.pbtech.co.nz)


----------



## Scorpion667

Snagged a 6900 XT Toxic EE after someone whispered in my ear that it gets more frames than a 3090 in COD at 1080p. I can confirm this to be the case.

Using a [email protected] with tweaked RAM, this 6900 XT core does 2875 in PR and 2855 in TS, while the memory sweet spot is 2160. In games, 2730 min/2830 max is rock solid and passes the TS and PR stress tests. Below I compare the 6900 XT to my 3080 Ti in TS, TS Extreme and PR.

Time Spy 
3080TI: 20014 overall, 22006 graphics
6900 XT: 22298 overall, 25222 graphics

Time Spy Extreme
3080TI: 9765 overall, 11163 graphics
6900 XT: 10242 overall, 11836 graphics

Port Royal
3080TI: 14708
6900 XT: 12627
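To put those deltas in percentage terms, here's a quick sketch (the scores are just the overall numbers quoted above, positive meaning the 6900 XT is ahead):

```python
# Relative deltas between the 6900 XT and 3080 Ti overall scores
# quoted above (positive = 6900 XT ahead).
scores = {
    "Time Spy": (22298, 20014),
    "Time Spy Extreme": (10242, 9765),
    "Port Royal": (12627, 14708),
}

def delta_pct(amd, nv):
    """Percentage difference of the AMD score relative to the Nvidia score."""
    return 100 * (amd - nv) / nv

for bench, (amd, nv) in scores.items():
    print(f"{bench}: {delta_pct(amd, nv):+.1f}%")
# Time Spy: +11.4%, Time Spy Extreme: +4.9%, Port Royal: -14.1%
```

So roughly an 11% raster lead in Time Spy that shrinks at Extreme settings and flips to a 14% deficit once ray tracing (Port Royal) enters the picture.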


----------



## ZealotKi11er

Scorpion667 said:


> Snagged a 6900 XT Toxic EE as someone whispered in my ear that it gets more frames than 3090 in COD at 1080p. I can confirm this to be the case.
> 
> Using a [email protected] /w tweaked RAM, this 6900 XT core does 2875 in PR and 2855 in TS while memory sweet spot is 2160. In games 2730 min/2830 max is rock solid and passes TS and PR stress tests. Below I compare the 6900 XT to my 3080TI in TS, TS Extreme and PR.
> 
> Time Spy
> 3080TI: 20014 overall, 22006 graphics
> 6900 XT: 22298 overall, 25222 graphics
> 
> Time Spy Extreme
> 3080TI: 9765 overall, 11163 graphics
> 6900 XT: 10242 overall, 11836 graphics
> 
> Port Royal
> 3080TI: 14708
> 6900 XT: 12627



What is the stock clock of your 6900 XT? 2855MHz is very high in TS.


----------



## Scorpion667

ZealotKi11er said:


> What is the stock clock of your 6900 XT? 2855MHz is very high in TS.



On the perf vBIOS the default boost clock is 2525. This Toxic EE (binned) model has a one-click OC to 2730.

It's not pretty but temps (including RAM, VRM) are great!


----------



## EastCoast

Simzak said:


> Does anyone have the MSI Gaming Z Trio 6900 XTXH card and is it worth buying for 1600USD? Will it be decent for overclocking if I don't plan on watercooling it?


An XTXH for that price seems like a steal at these inflated prices.

Perhaps 30 days from now it might be $1200, if there is stock.


----------



## Simzak

EastCoast said:


> Xtxh for that price seems like a steal at these inflated prices.
> 
> Perhaps 30 days later it might be $1200 if there is stock.


Yeah, the XTXH card has me interested, but if what @lestatdk said about the Gaming X Trio is true for the Z Trio and there's no thermal headroom, what's the point, right? Could it maybe undervolt and still OC better than the other cards?


----------



## EastCoast

Simzak said:


> Yeah the xtxh card has me interested but if what @lestatdk said about the Gaming X Trio is true for the Z Trio and there's no thermal headroom what's the point right? Could it maybe undervolt and still OC better than the other cards?


The Z is better. You should get between 2500 and 26xx MHz, and around 2800 MHz on water, IMO. I've seen reviews of the X, and assumptions made that the Z is the same, but no review of the Z itself.

6900xt owners what is your best daily GPU Frequency on air?

Edit:
Just looked at TPU's PCB pics of the Z vs the X, and it all looks like the X to me. I'm not sure who started that rumor, but it appears to originate with Buildzoid's breakdown of the PCB from... drum roll... TPU's website.

To be honest, I hadn't seen the Z in full production until a month ago. I read announcements about it roughly 4 months ago, and I never found any benchmark reviews or a teardown of the Z at all, probably due to the lack of availability of that card.

So, unless I actually see a teardown of the Z, not the X claiming to be the Z, I would take that rumor with a grain of salt.

Another thing: this is still air cooled, not water cooled. No matter which XTXH you buy air cooled, you can do better when it's water cooled; it's the nature of how this works. However, you can improve thermals using the washer mod with a better thermal compound like Thermal Grizzly Kryonaut and Fujipoly thermal pads.


----------



## lestatdk

There isn't a water block for the Gaming Z. There's only one for the 6800 Gaming X model. I bought one of those blocks and modified it for my 6900 Gaming X. The PCBs of the 6800 XT Gaming X and the 6900 Gaming X are very similar; the only differences are the extra power connector, a VRM, and a few extra components. It's fairly simple to modify the block, but it does require some work.

Unless you're prepared for that, you won't get that Z under water.

And unless it is, there's no chance you'll get even a decent OC compared to the other models.

I'm willing to bet the Z and X are the exact same PCB with only a better-binned GPU. Why would they hardly change anything from the 6800 X to the 6900 X only to completely modify things for the Z? It doesn't make any sense.


----------



## lestatdk

My GPU is horrible for OC. I get nowhere near the scores others get, even now under water. But compared to the stock cooler, it no longer hits the 118C shutdown temp when OC'ing.

Temperatures with the modified water block are amazing, but my OC is not better. At all. The junction is now 55-62C under max OC compared to 118C, and the delta is now 10-15C where it was almost 50C stock.


----------



## lestatdk

This was the review I mentioned. They had the same problems I had with high junction temps and delta. They asked for a new sample and it had the same problem. So now you have 3 examples of horrible temperature performance. Willing to bet the Z will experience similar problems?









MSI RX 6900 XT Gaming X Trio Review - KitGuru
"Launched last month, today we take a look at MSI's flagship RX 6900 XT graphics card - the Gaming X"
www.kitguru.net


----------



## lestatdk

Quote from the article:

"I did try re-pasting and re-mounting the cooler, but that didn’t change the results."

Also this:

"We have been in contact with MSI about our findings and were sent a second card to test, but that behaved exactly the same as our first sample. It is worth bearing in mind that the maximum junction temperature we saw was 102C, and AMD says up to 110C is in spec. Of course, that doesn’t change the fact that the Gaming X Trio has the highest junction temperature of any 6900 XT we have tested so far."

That 102C is with the stock MPT settings. If you raise them and start pushing, that 118C shutdown comes at you fast! My max clock was around 2640 in TS with raised MPT limits and the stock cooler. This hasn't improved since the water block, only the temps.


I rest my case. You have been warned


----------



## dwwillia

With the EK AIO watercooling kit, I was able to achieve 2700mhz overclock that is stable, but it took increasing max power using the MPT tool to get there. I have a question though - does anyone know where I could purchase a screw kit for the 6900xt reference design? I am thinking of moving to an ITX build and need to go back to air to accommodate that.


----------



## weleh

lestatdk said:


> The PCB is the same as my Gaming X Trio. And with the stock cooler it has massive problems overheating. Also saw a reviewer having the same problem.
> Personally, I'd go with the Asus or Powercolor XTXH models instead


It's not.

The Gaming Z Trio 6900XT has the best PCB of all the 6900XTs.
Yes, even better than the OCF.

It has a lot of input filtering, and pretty much the whole PCB is populated, which should in theory help with overclocking.

The highest-binned card is the Toxic Extreme Edition (XTXH), at 2730 MHz guaranteed, followed by the Toxic Limited Edition at 2660 MHz (non-XTXH).


----------



## lestatdk

weleh said:


> It's not.
> 
> The Gaming Z Trio 6900XT has the best PCB out of all 6900XT's.
> Yes, even better than the OCF.
> 
> Has a lot of input filtering and pretty much the whole PCB is populated which should in theory help with overclocking.
> 
> The highest binned card is the Toxic Extreme Edition (XTXH), at 2730 Mhz guaranteed followed by the Toxic Limited Edition at 2660 Mhz (non XTXH)


Do you have any information on the layout differences from the Gaming X, or are you just guessing? There's a ton of filtering on my PCB as well.


----------



## lestatdk

Since the pictures on TPU show the two PCBs being identical, I assume you have some other information to prove that wrong? Asking out of curiosity, since my card had massive problems with the stock configuration, and as shown in the review, they had the same problems.
I'd like to see a bench result of one of those Gaming Z cards, or just some sort of hard evidence they are any different except for the XTXH GPU...


----------



## weleh

Any air-cooled 6900XT is trash.
There's no good air-cooled 6900XT for thermals.

Some are more horrible than others, but the general consensus is they are all trash.

I have no more information than BZ's breakdown. In terms of input filtering and memory filtering, the Z/X (if they are the same PCB) is the best of them all. That's not to say other 6900XTs don't have better VRMs, because they do, but from the point of view of PCB population, MSI should be the best.

I find it funny people still recommend PowerColor: trash PCB, trash VRM... Also, getting an XTXH card on air is useless.
Might as well save some bucks, get a reference card and put it under water. You won't break world records, but you'll save $1000 doing so.


----------



## lestatdk

Here's the Gaming Z on the Caseking website.

Here's the PCB of the Gaming X.

Can we agree they are the same?


Also, even on water with superb temperatures, the card is still not very good for OC.

A 23.5k Time Spy GPU score is the highest I can get, even with the water block and a junction max of 62C or so.

To put the Z on water you'd need to buy the 6800XT block and modify like I did.


Also, I agree with you on buying a reference card and putting it on water. That was my original plan, but at the time I was unable to get one. I tried for months and eventually gave in and bought this, much to my regret. Until I got the water block on it, I was ready to sell it at a loss.


----------



## lestatdk

Here's the caseking link for reference






MSI Radeon RX 6900 XT Gaming Z Trio 16G, 16384 MB GDDR6


High-end gaming graphics card, Custom design Radeon RX 6900 XT, max. 2,425 MHz boost clock speed, 16 GB GDDR6 VRAM with 128 MB AMD Infinity Cache, 3x DisplayPort 1.4a / 1x HDMI 2.1, Triple Fan cooling design with RGB-LED lighting




www.caseking.de


----------



## kairi_zeroblade

weleh said:


> I find it funny people still recommend powercolor, trash PCB, trash VRM...Also getting a XTXH card on air is useless.
> Might as well save some bucks, get a reference card and put it on water. You won't break world records but you'll save 1000$ doing so.


Ouch... I will throw mine out tomorrow and get a Sapphire one. So far I have no complaints about PowerColor; they offer a good warranty policy here. In months of owning a Red Devil, I never had a single issue while gaming, even at my benchmarking clocks. The maximum air temps I got were only 58C while gaming (overclocked, with a custom fan profile). But yeah, to satisfy the statement above I will "try" to throw mine out tomorrow...


----------



## lestatdk

kairi_zeroblade said:


> ouch..I will throw mine tomorrow, and get a Sapphire one..so far I have no complains on Powercolor..they offer a good warranty policy here, from months of owning a Red Devil, I never had a single issue while gaming with even my benchmarking clocks..maximum air temps I got were only 58c while gaming (overclocked and a custom fan profile)..but yeah to satisfy the statement above I will "try" to throw mine tomorrow..


Don't worry man. I'll take that card off your hands for a reasonable price. Say 100USD ? Just trying to help a brother out here


----------



## EastCoast

I still don't see a Z teardown or review. And showing a partially covered card to say it's the same is simply silly, to say the least. Even if they are the same, it still proves once again that everything is based on the X, even for the 6800/6900 XT Trio X. LOL

And who in their right mind would buy an XTXH on air for the sole purpose of making the top 10 in Time Spy instead of just getting a water-cooled variant? Then complain that the card is trash when they can't. It makes about as much sense as trying to base a Z Trio's performance on 6800 XT/6900 XT X Trio reviews, so the illogical approach from that angle can at least be explained. Anyone with common sense would understand there is a hard limit with an unlocked card like that on *air*. That goes for any two cards of the same make with one air cooled and the other water cooled. For crying out loud.

There is nothing wrong with the air-cooled variants of the XTXH in general, although there is a hierarchy from best to least. You buy one because:
-It costs the same as or slightly more than a regular AIB 6900 XT
-It's the only thing in your area at the time and you want to upgrade
-If you would buy a 6900 XT anyway, the XTXH would be the best buy of them all


----------



## lestatdk

Yeah, whatever. Maybe you didn't even look at the PCBs to compare.

Let's agree to disagree.
Someone buy the card and see how it goes. I'm done giving MSI my money, for sure. Lesson learned: buy the card I want (I wanted the Red Devil) instead of forcing the issue and buying what was available.


----------



## weleh

Mate, having a good PCB isn't synonymous with being a super overclocker. However, a good PCB will obviously let you get those extra tens of MHz that a worse PCB can't.

Also, PCB quality has nothing to do with cooling performance. It's a shame, because the Trio cooler on Ampere is pretty decent (except memory junction, but that's a backplate issue).

Either way you still rely on silicon quality. That's why buying a pre-binned card like the Sapphire Toxic models is your best bet for a guaranteed decent clock speed, while buying a lesser-binned card isn't.

We can't forget these cards are rated for something like 2300 MHz, and here we are crying because our cards can't do 2600+, when on Nvidia you need chilled water to get an overclock higher than 150 MHz on the core, and to think about 400 or 600 MHz like some 6900 XTs do, you'd be looking at LN2.


----------



## lestatdk

True. Maybe we are getting a bit spoiled at times  I just want "more powah"


----------



## weleh

Yea we're all spoiled, that's for sure.


----------



## coelacanth

weleh said:


> Any 6900XT air cooled is trash.
> There's no good 6900XT air cooled for thermals
> 
> Some are more horrible than others but general consensus is they are all trash.
> 
> SNIP


I have to disagree. The air cooling on the XFX 6900XT Speedster Merc 319 seems very good.


----------



## 6u4rdi4n

coelacanth said:


> I have to disagree. The air cooling on the XFX 6900XT Speedster Merc 319 seems very good.


Agreed. One of the best air coolers I've had and perfectly adequate for gaming and other "regular" use. Long benchmarking sessions and pushing clocks, not so much, unless you have some form of ear protection.


----------



## Simzak

lestatdk said:


> Yeah , whatever. Maybe you didn't even look at the PCBs to compare.
> 
> Let's agree to disagree.
> Someone buy the card and see how it goes. I'm done with giving MSI my money for sure. Lesson learned is to buy the card I want ( wanted the red devil) , instead of forcing the issue and buying what was available


I saw that the Gaming Z Trio is 39 grams lighter than the X Trio on MSI's website. Maybe they have changed something? I think I'm going to stay away from MSI this time, as returning products in NZ isn't as easy as RMAing in the US, and I definitely don't have money for another 6900 XT haha.


----------



## Simzak

Didn't someone say that the junction temperature limit on the XTXH cards is 95C, not 110C like on the regular XTX cards? Seeing that the Gaming X Trio runs at 100 degrees stock, could the Gaming Z Trio potentially throttle instantly at stock clocks?


----------



## EastCoast

With all the bickering going back and forth I decided to grab one and see for myself. MSI 6900xt Z Trio

To be transparent: I am not a fan of Time Spy. Its DirectX 12 implementation is designed for Nvidia hardware, so pushing the limits of your AMD GPU generates more power consumption and heat for no real benefit. This is due to its heavy use of a "type" of asynchronous compute (IMO not async compute at all) that relies heavily on context switching instead of pure parallelism, a.k.a. the asynchronous parallelism AMD is good at. But I digress.


So here you go.


----------



## cfranko

EastCoast said:


> With all the bickering going back and forth I decided to grab one and see for myself. MSI 6900xt Z Trio
> 
> 
> To be transparent: I am not a fan of Time Spy. Its DirectX 12 implementation is designed for Nvidia hardware, so pushing the limits of your AMD GPU generates more power consumption and heat for no real benefit. This is due to its heavy use of a "type" of asynchronous compute (IMO not async compute at all) that relies heavily on context switching instead of pure parallelism, a.k.a. the asynchronous parallelism AMD is good at. But I digress.
> 
> 
> So here you go.


What was the power limit and what was the maximum hotspot temp?


----------



## EastCoast

@cfranko


----------



## cfranko

EastCoast said:


> @cfranko


The delta is large, but that is because the edge temperature is very good, not because the hotspot is terrible, imo.


----------



## EastCoast

@cfranko








Valhalla


Now bear in mind this is an unlocked GPU; you have to show restraint with GPU clock rates. 2500-2600 MHz seems doable depending on the game; beyond that you need to water cool. If you have an air-restrictive PC case or don't have good airflow in general, I wouldn't recommend an XTXH at all, not just the Z Trio.

I'm not sure what happened in that review, but at a guess, their ambient temps and lack of airflow may have contributed to the higher-than-normal temps on the Trio X.


----------



## weleh

So in other words, **** cooler. Just like I said, 6900XT air cooled are all ****.


----------



## EastCoast

weleh said:


> So in other words....


Still in shock and awe that someone would actually buy one to test it out, aren't you?
Your post is nothing more than a troll response after I showed proof that the air-cooled version is just fine.
No, an air-cooled XTXH won't cool as well as a water-cooled version. Your "valuable input" of common knowledge is shockingly obtuse.


----------



## Simzak

EastCoast said:


> This is nothing more then a troll response showing proof that the air cool version is just fine.
> No, an air cooled XTXH won't cool as well as a water cooled version. Your "valuable input" in common knowledge is shockingly obtuse.


I really appreciate you testing this card. Last question from me: I should definitely get this card, right? The Gaming Z Trio is the cheapest 6900 XT in New Zealand right now. I'm not trying to push any sort of crazy OC; I just don't want it to hit 100C and thermal throttle while I'm playing. I'm pretty much only going to play at 1080p, and my mesh case has lots of airflow with 9 fans locked at 100%. I couldn't care less about noise.


----------



## 99belle99

I just take the side of my case off while I game so there is plenty of air going through my system. I know I could just get three fans as intakes as there is space for them but I just prefer my method.


----------



## EastCoast

Simzak said:


> I really appreciate you testing this card. Last question from me, I should definitely get this card right? Gaming Z Trio is the cheapest 6900 XT in New Zealand right now. Not trying to push any sort of crazy OC I just don't want it to hit 100c and thermal throttle while I'm playing. Only gonna play at 1080p pretty much and my mesh case has lots of airflow with 9 fans locked at 100% I couldn't care less about noise.


Yes, get it. Like you said, you're not trying to set world records. You understand the limitations of having an unlocked GPU with an air-cooled setup, and it's the cheapest in your area.

Just make sure you have good case fans pushing air in and out of your PC case.

And stay away from Time Spy. With an air-cooled XTXH setup it becomes nothing more than a power virus, and it's a completely inaccurate representation of how games use DirectX 12 with proper asynchronous compute. IMO.


----------



## L!ME

I can confirm the ref LC has different memory chips.

They are running stock, with no modding, at 2410ft2.


----------



## 99belle99

Where did you manage to pick one of those up from?


----------



## ZealotKi11er

Just because they run 2400MHz (which I have done) doesn't mean they score better in TS vs 2150-2200MHz cards. The timings are probably much looser.


----------



## EastCoast

Stay away from Time Spy. Its crap version of async compute will only lower clock rates with higher power draw on Radeon GPUs. If you have to rely on VRAM timings to get a better score because higher frequencies yield lower results, that should tell you something is wrong with the benchmark.


----------



## lestatdk

Simzak said:


> I really appreciate you testing this card. Last question from me, I should definitely get this card right? Gaming Z Trio is the cheapest 6900 XT in New Zealand right now. Not trying to push any sort of crazy OC I just don't want it to hit 100c and thermal throttle while I'm playing. Only gonna play at 1080p pretty much and my mesh case has lots of airflow with 9 fans locked at 100% I couldn't care less about noise.


To be fair, my card hit the shutdown temp after I messed with MPT.
If you don't use MPT, or use very conservative values, it should be OK.


----------



## lestatdk

Had to scroll back a bit to find any FS benches from before I got the water block:


----------



## ZealotKi11er

EastCoast said:


> Stay away from Time Spy. Its crap version of async compute will only lower clock rates with higher power draw on Radeon GPUs. If you have to rely on VRAM timings to get a better score because higher frequencies yield lower results, that should tell you something is wrong with the benchmark.


I have an XTX and an XTXH LC. The XTX can do 2600, the XTXH can do 2800.
The XTXH at 2600 with vRAM at 2300-2400 does not score any better than the XTX at 2600 with vRAM at 2150.
Even if it does a bit better, it's nothing like what the vRAM clock speed implies. I have not really had much time to test games and 4K.


----------



## EastCoast

ZealotKi11er said:


> I have an XTX and an XTXH LC. The XTX can do 2600, the XTXH can do 2800.
> The XTXH at 2600 with vRAM at 2300-2400 does not score any better than the XTX at 2600 with vRAM at 2150.
> Even if it does a bit better, it's nothing like what the vRAM clock speed implies. I have not really had much time to test games and 4K.


There have been consistent updates to 3DMark ever since the 6000 series was released. Usually Radeon cards get better scores using old versions. I don't think TPU keeps older versions any more; you have to go to FileHorse, MajorGeeks, or FileHippo.

But yeah, Time Spy is trash for Radeon cards.


----------



## EastCoast

lestatdk said:


> Had to scroll back a bit to find any FS benches from before I got the water block:
> 
> View attachment 2526879


Yup, seems about right. Here is my 2nd attempt.

*Simzak
You order it yet?*


----------



## kairi_zeroblade

Nice comparison, I am overwhelmed that in my sleep things turned out this way..even @EastCoast bought the GPU himself to try..that was fast..haha..


----------



## Henry Owens

gtz said:


> No, just CCX. Drop that CCX to 4.65 and leave the good CCX at 4.7. Ryzen becomes extremely inefficient past 4.5-4.6. The best overclocker I had was the 5800X, which could do 4.8; I did not know what I had until I sold it. Did 2000 FCLK to run benches as well.


Same here. I returned a pretty golden 5800X to Amazon, before I knew what I had, to buy a 5900X.


----------



## Simzak

EastCoast said:


> Yup seems about right. Here is my 2nd attempt
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *Simzak
> You order it yet? *


Not yet, waiting for my 2080 to sell zzzz.
Also, I have a Corsair RM750x PSU; from what I've read it's not going to be enough. Should I throw a 1000W PSU into the order or give the RM750x a try first?


----------



## EastCoast

Simzak said:


> Not yet waiting for my 2080 to sell zzzz
> Also I have a Corsair RM750x PSU, from what I've read it's not going to be enough. Should I throw in a 1000w PSU to the order or give the RM750x a try first?


Most definitely. The potential problem is that once these MSI Z cards dry up, you may see some other AIB flood the market for a while; that seems to be how this is going. I've never seen an air-cooled Toxic for sale, nor the ASRock XTXH variant either. But I didn't see the Trio Z until recently either.

I have seen a lot of the XFX Merc 319s out there. The black Ultimate (XTXH) I've seen scalped to the moon, though.


----------



## L!ME

@99belle99 
In Germany you can buy IT Here


https://www.mindfactory.de/product_info.php/16GB-Powercolor-Radeon-RX-6900XT-LC-Edition-DDR6-Wasserkuehlung--bulk-v_1425329.html


----------



## weleh

L!ME said:


> I can confirm the ref LC has different memory chips
> View attachment 2526877
> 
> 
> They are running Stock with No modding at 2410ft2


Yes, they run 18 Gbps chips instead of 16.
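For what that's worth in raw numbers: on the 6900 XT's 256-bit memory bus, the peak theoretical bandwidth works out like this (a quick sketch of the standard GDDR6 arithmetic, not manufacturer figures):

```python
# Peak theoretical GDDR6 bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
BUS_WIDTH_BITS = 256  # RX 6900 XT memory bus width

def bandwidth_gb_s(gbps_per_pin):
    """Peak bandwidth in GB/s for a given per-pin data rate."""
    return gbps_per_pin * BUS_WIDTH_BITS / 8

print(bandwidth_gb_s(16))  # 512.0 GB/s — standard 16 Gbps chips
print(bandwidth_gb_s(18))  # 576.0 GB/s — the LC's 18 Gbps chips
```

So the 18 Gbps chips buy roughly 12.5% more peak bandwidth, before any overclocking or timing differences.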


----------



## cfranko

weleh said:


> Yes they run 18gbps chips instead of 16


Does 18gbps have an effect on hashrate?


----------



## weleh

cfranko said:


> Does 18gbps have an effect on hashrate?


It should.
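For rough context (my own back-of-the-envelope numbers, not a claim from anyone in this thread): Ethash is largely memory-bandwidth bound, and peak GDDR6 bandwidth scales linearly with the per-pin rate on the 6900 XT's 256-bit bus.

```python
# Illustrative math only: peak GDDR6 bandwidth = per-pin rate * bus width / 8.
def gddr6_bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int = 256) -> float:
    """Theoretical peak bandwidth in GB/s for a given per-pin rate."""
    return gbps_per_pin * bus_width_bits / 8

bw_16 = gddr6_bandwidth_gbs(16)  # stock 6900 XT memory
bw_18 = gddr6_bandwidth_gbs(18)  # the LC edition's faster chips
print(f"{bw_16:.0f} GB/s vs {bw_18:.0f} GB/s, {bw_18 / bw_16 - 1:.1%} uplift")
```

That ceiling won't translate one-to-one into hashrate, since actual results also depend on the memory clocks and timings the miner ends up running.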


----------



## ptt1982

weleh said:


> Yea we're all spoiled, that's for sure.


Someone completely spoiled here!💪💩


----------



## ZealotKi11er

cfranko said:


> Does 18gbps have an effect on hashrate?


When I did a quick test it was worse.


----------



## Koldan

Greetings, friends!
I've had a problem with my Radeon 6900 XT for a while now; I own the XFX Merc 319 Ultra version.
Sometimes during gaming I get a black screen with a buzzing sound on the audio output (speakers or headphones).
I'm forced to reboot after that; no logs or anything.
It happens in Cyberpunk 2077 and Marvel's Avengers (graphically intensive games that can actually push it at my 3440x1440 resolution).
Has anyone had problems like that who can help me out?
I swapped the PSU (running 1250W atm), mobo and CPU (used to run a 5800X + X570, now I run an Asus Z590-A + 10700K). It doesn't seem to help, though I am more stable on Intel.


----------



## kairi_zeroblade

Koldan said:


> Greetings, friends!
> I have a problem with radeon 6900xt for a while at this point, i own xfx merc 319 ultra version.
> Sometimes during gaming i get black screen with buzzing sound on audio output (speakers) or headphones.
> I am forced to reboot after that, no logs or anything.
> Happening on Cyberpunk 2077, marvel avengers (graphically intensive games that can push it actually in my 3440x1440 resolution).
> Any1 had problems like that, can help me out?
> I swapped psu (running 1250w atm), mobo and cpu (used to run 5800x +x570, now i run asus z590-a + 10700k). Doesnt seem to help, tho i am more stable on intel.


That's more likely a system-related crash rather than just the GPU. A GPU crash alone would just exit the game for me; no hard crash from Windows resulting in a random reboot or what you have just described.


----------



## Koldan

Yeah, I'd agree if I hadn't replaced just about every part except my RAM (which passed memtest with no problems on the previous mobo+CPU). I can replace the RAM of course, but I'm not sure it's the cause.
Since this is an overclocking forum, I was wondering: can the GPU boost too high and crash the whole system? (I think Nvidia had a problem like that at the 3080 launch.)
AMD advertises something like 2250, but the card routinely runs 2450+ on stock settings.


----------



## ZealotKi11er

You can lower the max clock and see if it helps.


----------



## lestatdk

Koldan said:


> Yeah, I'd agree if i didnt replace like every part except my ram (which passed memtest no problems on previous mobo+cpu). I can replace ram ofc but not sure its a thing.
> Since its overclocking forum, i was wondering, can it be that gpu boost too high and crashes whole system? (i think nvidia had problem like that on 3080 launch).
> Amd advertises like 2250 or something but card routinely runs 2450+ on stock settings.


Try running the RAM at 2133 without XMP and see if the problem is still there.


----------



## Koldan

Are RAM clocks important for GPU stability? I am running the mostly default 3200 XMP for my Kingston HyperX FURY Black [HX432C18FBK2/32].
memtest86 ran for 5 hours straight and was pristine, so I figured it shouldn't be my memory.
But at this point it's only my RAM, GPU and hard drives that I haven't replaced yet.
I don't really want to start an RMA or anything on my fancy 6900 XT until I know for sure, especially in current times.


----------



## lestatdk

Koldan said:


> Are ram clocks important for gpu stability? i am running most "default" 3200 xmp for my Kingston HyperX FURY Black [HX432C18FBK2/32] .
> memtest86 running for 5 hours straight were pricsine so i kinda thought it shouldnt be my memory.
> But at this point its only my ram, gpu and hard drives that i didnt replace yet.
> Dont really wanna start rma or something on my fancy 6900xt until i know it is for sure, especially in current times.


Yes, they are very important. Also, don't use memtest86 for stability testing. Use testmem5 instead on the "extreme" profile.


----------



## lestatdk

Memory Testing with TestMem5 TM5 with custom configs

Hello everybody, I am just making a very light tutorial with a collection of custom config files and a DOWNLOAD LINK for TM5 v0.12 anta777 absolut config. *Official* Intel DDR4 24/7 Memory Stability Thread. None of the work is mine, but it seems like a pretty good and fast testing app.

www.overclock.net


----------



## Koldan

Unfortunately, TestMem5 didn't show any errors.
I found a video with exactly the same issue as I have, though:


----------



## J7SC

Koldan said:


> Yeah, I'd agree if i didnt replace like every part except my ram (which passed memtest no problems on previous mobo+cpu). I can replace ram ofc but not sure its a thing.
> Since its overclocking forum, i was wondering, can it be that gpu boost too high and crashes whole system? (i think nvidia had problem like that on 3080 launch).
> Amd advertises like 2250 or something but card routinely runs 2450+ on stock settings.





lestatdk said:


> Try running the RAM at 2133 without XMP and see if the problem is still there.


...I had that exact issue happen with an older X99 system and 'hyped & tight' 4x8 GB DDR4 RAM (3333+)...loosening timings just a bit solved that problem.


----------



## EastCoast

*6900xt Owners*

What are your daily air-cooled GPU clock rates for gaming?


----------



## The EX1

EastCoast said:


> *6900xt Owners*
> 
> What is your daily air cooled gpu clock rates for gaming?


On my regular XTX Red Devil card it was 2450-2500 with a slight under volt to 1150 and minus 5% on the power limit to keep it from hitting 110C on hotspot. This was my daily driver profile until I put an EK block on it. At stock, it would run in the high 2300s and would occasionally cross 2400mhz.

The factory thermal paste application was terrible. There wasn't nearly enough on my card and the cooler wasn't mounted with very much pressure either.


----------



## Maracus

EastCoast said:


> *6900xt Owners*
> 
> What is your daily air cooled gpu clock rates for gaming?


MSI Gaming X, stock 2539MHz.

Undervolted to 1070mV; temps are OK depending on the game and the draw, usually @ 2200-2400rpm fan speed.

Tried repasting it with Thermal Grizzly Extreme; didn't really achieve much. Pity there's no waterblock for these, I wanted to see how far it would OC.


----------



## Scorpion667

Does anyone know how to lock the clock speed on these cards? My friend was testing CS:GO below 1080p (800+ fps) and he noted the card was downclocking to 500MHz, with a 2ms increase in click-to-screen latency in that scenario. At 1080p the card respected the minimum clock value configured in Radeon settings and latency was normal.

Also, does anyone know the temp values at which the boost algorithm starts to drop clock speed?


----------



## lawson67

EastCoast said:


> *6900xt Owners*
> 
> What is your daily air cooled gpu clock rates for gaming?


I used to run MPT at 330W with 2500 min / 2600 max @ 1030mV when I was on air; the hotspot temp was around 98C until I put LM on it, which was the best thing I ever did to it, as it then dropped to 80C. Anyhow, I sold that card for the PowerColor Ultimate, which is on a water block with LM at 420W, hotspot around 70C.


----------



## lestatdk

Maracus said:


> MSI Gaming X stock 2539mhz
> 
> Undervolted to 1070mv temps are ok depending on what game and the draw usually @ 2200-2400rpm fan speed
> 
> Tried repasting it with thermal grizzly extreme, didn't really achieve much. Pitty no waterblock for these wanted to see how far it will OC


Don't bother. I modified a 6800 XT waterblock to fit mine, and it does not OC much further than on air. But now it stays cool at least (62C hotspot max when benching with raised MPT limits).


----------



## OC-NightHawk

EastCoast said:


> With all the bickering going back and forth I decided to grab one and see for myself. MSI 6900xt Z Trio
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> To be transparent, I am not a fan of Time Spy. Its DirectX 12 implementation is designed for NVIDIA hardware, so pushing the limits of your AMD GPU generates more power consumption and heat for no real benefit. This is due to its heavy use of a "type" of asynchronous compute (IMO not async compute at all) that relies heavily on context switching instead of pure parallelism, AKA asynchronous parallelism, which AMD is good at. But I digress.
> 
> 
> So here you go.


I ran Fire Strike Extreme on my machine to see how it would do and provide a comparison of air vs water cooled. This is a Gigabyte RX 6900 XT Xtreme Waterforce.


----------



## OC-NightHawk

cfranko said:


> Does 18gbps have an effect on hashrate?


I don't have another RX 6900 XT to compare with but I do have a Power Color RX 6800 XT Red Devil that makes 64MH while the 6900 XT I have does 62MH for Eth.


----------



## EastCoast

Thanks to everyone who reported their GPU clock rates. I was trying to figure out the norm in that area.

@OC-NightHawk
I did do a 2nd run:

[Official] AMD Radeon RX 6900 XT Owner's Club

Had to scroll back a bit to find any FS benches from before I got the water block: Yup seems about right. Here is my 2nd attempt Simzak You order it yet?

www.overclock.net

So, it looks like there is still headroom left if watercooled.


----------



## lestatdk

On air my gaming clocks were similar to now, so around 2500-2550.


----------



## EastCoast

lestatdk said:


> On air my gaming clocks were similar to now, so around 2500-2550.


That seems to be the consensus so far. Good to know.

Edit:
@OC-NightHawk

This time I enabled Fast Timing.


----------



## 99belle99

My gaming clocks on reference card with stock cooler are in and around 2400MHz and hotspot of 90 degrees.


----------



## lawson67

EastCoast said:


> Thanks to everyone who reported their gpu clock rates. I'm was trying to figure out the norm in that area.
> 
> @OC-NightHawk
> I did do a 2nd run
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] AMD Radeon RX 6900 XT Owner's Club
> 
> 
> Had to scroll back a bit to find any FS benches from before I got the water block: Yup seems about right. Here is my 2nd attempt Simzak You order it yet?
> 
> 
> 
> 
> www.overclock.net
> 
> 
> 
> 
> 
> So, it looks like there is still headroom left if watercooled.


With my Red Devil Ultimate XTXH on a water block I run 2770 min / 2870 max 24/7. I could not run at those clocks with a PL of 420W on air, but with my air-cooled card I was happy with MPT set at 330W and 2600 max. I just wanted an XTXH card and soon realised I needed a water block to push it as far as the silicon would let it go, which for my card was 2870MHz; without the water block, at those clocks the hotspot was over 100C.


----------



## The EX1

Did some benches and further testing with my Red Devil Limited card (so not XTXH). I think I have a good chip considering this is not an Ultimate. Installed an EK block and backplate a couple weeks ago. Runs 2600-2810mhz in Heaven without MPT tweaks at 3440x1440. Anything above 2820 real clock results in artifacts. Firestrike Ultra benches with 400w in MPT are below. The card would run between 2740 and 2795mhz during the bench. Anything above 2800mhz and the card would artifact again. Good enough for #39 on the Hall of Fame I guess with a mediocre physics score. 








I scored 15 943 in Fire Strike Ultra


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## OC-NightHawk

@EastCoast I went in and adjusted my MPT settings.










Going above 1200mV results in driver crashes. Also, if I push the power beyond 420W I get instability. I'll leave this running for a bit and see how it does with some Horizon Zero Dawn. For now, though, the Fire Strike Extreme result beat every metric from my previous run.

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X, Micro-Star International Co., Ltd. PRESTIGE X570 CREATION (MS-7C36) (3dmark.com)


----------



## lestatdk

The MSI 6900XT Gaming Z Trio is now on sale here as well. Price around 1550 USD. Considering prices have been well above 2k for quite some time, that's a good deal.

Don't know if I should try and swap it for mine, now that I already have a modified block for it  Tempting for sure


----------



## OC-NightHawk

lestatdk said:


> The MSI 6900XT Gaming Z Trio is now on sale here as well. Price around 1550 USD. Considering prices have been well above 2k for quite some time, that's a good deal.
> 
> Don't know if I should try and swap it for mine, now that I already have a modified block for it  Tempting for sure


So another shot at the silicon lottery?


----------



## lestatdk

OC-NightHawk said:


> So another shot at the silicon lottery?


I've considered having Lawson order it for me. That'll guarantee a W in the lottery. 
Me, I seem to draw blanks in both GPU and CPU lotteries this time around


----------



## OC-NightHawk

lestatdk said:


> I've considered having Lawson order it for me. That'll guarantee a W in the lottery.
> Me, I seem to draw blanks in both GPU and CPU lotteries this time around


Don't sell the old card until you know the new card is better.

This is a real-world use case of my Gigabyte RX 6900 XT Xtreme Waterforce. I don't usually have both screens going when gaming, but I'm playing with the profile that I used to get the results posted in post 3716. I'm not sure how stable this is because it crashed in the second graphics test of Time Spy. So far, though, it has played two one-hour sessions without crashing, so that is something for now.


----------



## lestatdk

Wow, I can only dream of frequencies like that with this card. Anything much above 2600 and it bombs


----------



## EastCoast

OC-NightHawk said:


> Don't sell the old card until you know the new card is better.
> 
> This is a real world use case of my Gigabyte RX 6900 XT Xtreme Waterforce. I don't usually have both screens going when gaming but I'm playing with the profile that I used to get the results posted in post 3716.I'm not sure how stable this is because it crashed in the second graphics test of Timespy. So far though it has played two one hour sessions without crashing so that is something for now.
> View attachment 2527078


Those are some good results water-cooled. And it does appear that the card OCs better in game than in benchmarks. But the key is to keep it cool. I'm not sure if I will WC; I'm pretty satisfied with the results I'm getting so far, and I don't mess with 3DMark that much nowadays. In COD this is more than enough 😁


----------



## OC-NightHawk

EastCoast said:


> Those are some good results WC'd. And, it does appear that the card OC better in game then in benchmarks. But the key is to keep it cool. I'm not sure if I will WC. I'm pretty satisfied with the results I'm getting so far. And, I don't mess with 3DMark that much now a days. In COD this is more then enough 😁


This is with two 280mm radiators and one 140mm radiator. I’m upgrading the loop with a second pump and another 280mm radiator. I’m not expecting more frequency gains but I am hoping to reduce the temperature of the CPU and GPU because it’s pretty easy for them to shoot past 60C.


----------



## EastCoast

It would seem that these XTXH GPUs are released to one AIB at a time. It seems to have started with the PowerColor Red Devil, then the Asus Strix and the Sapphire Toxic, then the XFX Merc 319 Limited Black, the Aorus Xtreme WaterForce and now... the MSI Trio Z.

The only ones I haven't seen yet are the ASRock, and some air-cooled variants like the Toxic.


----------



## OC-NightHawk

EastCoast said:


> It would seem that these XTXH gpus are released to 1 AIB at a time. It seems to have started with Power Color RD, then Asus Strix and the Sapphire Toxic. Then XFX Merc319 Limited Black, Aorus Xtreme Water Force and now...MSI Trio Z.
> 
> The only ones I haven't seen yet are the Asrock. And some air cooled varients like the Toxic.


Which it seems keeps the prices high. I really hope that this shortage begins to ease soon.


----------



## EastCoast

OC-NightHawk said:


> Which it seems keeps the prices high. I really hope that this shortage begins to ease soon.


Good observation. Yeah, I can't say whether it's deliberate or not.


----------



## 99belle99

I was reading that we are due more shortages, as there is now a shortage of capacitors, which will affect even motherboards, plus everything else, since capacitors are in everything.


----------



## EastCoast

99belle99 said:


> I was reading that we are due more shortages as there is now shortages for capacitors which will even effect motherboards plus everything else as capacitors are on everything.


I read that also. Therefore, since prices for the 6900xt reached record lows this week, I would suggest to anyone on the fence to get off it and get those cards now. I'm not just talking about XTXH; whichever it is, get it now before stock dries up.

It's very suspicious that all these electrical components keep having "shortages", which keeps prices for video cards and other circuit-board components high. Whether it's real or artificially created I don't know, but it does raise an eyebrow.

(Wish we had the raised-eyebrow emoji.)


----------



## ZealotKi11er

EastCoast said:


> I read that also. Therefore, if the prices for the 6900xt reached record lows as of this week. I would suggest to anyone on the fence to get off it and get those cards now. I'm not just talking about XTXH. However, if it is get it now before stock drys up.
> 
> It's very suspecious that all these electical components keep having "shortages". Which keeps prices for video cards and other circuit board compoenents high. Is it real or artificially created I don't know but it does draw up an eye brow. .
> 
> (Wish we had the raise eyebrow emoji.)


Most RDNA2 component orders were most likely placed before this shortage. I think the real shortage will come with the next-gen cards.


----------



## EastCoast

ZealotKi11er said:


> Most of RDNA2 orders for components most likely were made before this shortage. I think the real shortage will come with next gen card.


If that's true, then they are planning ahead, making it even more shady.


----------



## 99belle99

EastCoast said:


> I read that also. Therefore, if the prices for the 6900xt reached record lows as of this week. I would suggest to anyone on the fence to get off it and get those cards now. I'm not just talking about XTXH. However, if it is get it now before stock drys up.
> 
> It's very suspecious that all these electical components keep having "shortages". Which keeps prices for video cards and other circuit board compoenents high. Is it real or artificially created I don't know but it does draw up an eye brow. .
> 
> (Wish we had the raise eyebrow emoji.)


It's real, from lockdowns due to COVID-19. I don't know about other countries, but here in Ireland we were locked down for months, and during that time I was watching YouTube videos from America, and in those videos it looked as if nothing had changed there. I see by the flag you're from America, so you may not have experienced a lockdown. For months I could only go 5km from my house for exercise; for everything else it was like being stuck in an open prison, not meeting up with anybody for social reasons. You could go into shops, but there were big queues spread out (social distancing) outside, and only a certain number were allowed in shops at the same time. Masks had to be worn, and then you had to gel your hands. I'm not sure how strict they were in Asian countries, where most components are made, but I suppose they were locked down too. Probably not as much as we were here in Ireland.


----------



## J7SC

...out here it's still hit and miss re. RDNA2 in stores, but I actually just got back from the local outlet of a national Canadian chain and they had two XFX Merc Speedsters and a MSI Gaming X Trio 6900XTs on the shelf (no pre-orders anymore, first come-first serve) at decent prices. There were also two 6800 XTs and four 6700 XTs available.

A few weeks ago, they actually 'sat on' two Asus Strix LC TOP and ended up discounting them by about US$ 300 from MSRP - if I wouldn't already have updated my home office systems with a 3090 and 6900XT, I would have picked up one of those for sure...


----------



## EastCoast

99belle99 said:


> It's real from lockdowns due to CV19. I don't know about other countries but here in Ireland we were locked down for months and during that time I was watching YouTube videos from America and the videos it looked as if nothing changed in America. I see by the flag you're from America so you may not have experienced a lock down. For months I could only go 5km's from my house for exercise and not meet up with anybody you for social reasons. You could go into shops but there were big ques spread out and only a certain number allowed in shops at the same time. Masks had to be worn and then gel your hands. I'm not sure how strict they were in Asian countries as most components are made mostly in Asia and I suppose they were locked down too. Probably not as much as we were here in Ireland.


Oh, we had a lockdown over here. Nothing different, except a lot of working from home and schooling from home. But businesses would have told investors months ago that there might be areas of improvement in future quarters, due to forecasting.

For the news of component shortages to come this late in the cycle, after businesses have already restarted for a while now, makes it 'sus' to me. They would have known that well before now.

I remember reading an article about some big wig in China (or was it Taiwan?) supplying substrates and other components saying that there is no shortage. He said he's already back up to 100% capacity; however, it would take a while to get those components to the world. There was an issue shipping existing orders due to a backlog, i.e. getting ships, trains, planes, etc. to deliver, as all of that came to a halt. The other issue is how long it takes for new orders to reach the world moving forward, which he said would take about a year or so. There were other points he made that I don't recall. So, if he says there is no shortage, I simply can't take this news at face value.


----------



## ZealotKi11er

EastCoast said:


> Oh we had a lock down over here. Nothing different except a lot of work from home and schooling from home. But business would have told investors months ago that there might be areas of improvement in future quarters do to forecasting.
> 
> For the news of component shortages to come this late in this cycle, after business have already restarted for a while now, makes it 'sus' to me. They would have known that well before now.
> 
> I remember reading an article about some big wig in China (or was it Taiwan) supplying subtrates and other components say that there is no shortage. He said he's already back up to 100% capacity. However, it would take a while to get those components to the world. There is an issue was shipping existing order do to a backlog. IE: getting ships, trains, planes, etc to deliver as all of that came to a halt. The other is how long it take for his order to get to the world moving forward. Which he said would take about a year or so. There were other points he made but I don't recall. So, if he says there is no shortage. I simply can't take this new-news at face value.


Partly to blame is that some companies lowered orders in 2020, while now they are asking for 2021/22 orders to double. Basically we lost a year of full production; there is nothing sus. AMD and Nvidia are selling more GPUs than ever, and this is simply mining demand.


----------



## OC-NightHawk

ZealotKi11er said:


> Partly to blame is some companies lowered orders in 2020 while now they are asking 2021/22 to double. Basically we lost 1 year of full production. there is nothing sus. AMD and Nvidia are selling more GPU than ever and this is simply mining demand.


Don't blame this on mining only. There were a lot of machines built around distance learning for kids and remote work for adults. At a certain point most miners run out of power capacity in their house or cannot handle the thermal loads; I suspect most miners are capped at around a maximum of 30 cards, or around 4000 to 7000W total. I blame the scalpers for making things harder and more expensive than they needed to be.


----------



## EastCoast

ZealotKi11er said:


> Partly to blame is some companies lowered orders in 2020 while now they are asking 2021/22 to double. Basically we lost 1 year of full production. there is nothing sus. AMD and Nvidia are selling more GPU than ever and this is simply mining demand.


Utter rubbish explanation for the sudden "shortage" of other components. If you have to say "partly", you are just speculating, no different than I am. Furthermore, we are talking about components for parts, not the parts themselves, which has more to do with how companies shorted orders. I will see it as sus for the reasons I've explained, even though it's my own opinion.

--------------------------------------------------------------------
Anyway, back at the ranch: this is why I don't trust HU any further than I can throw them. They can flip-flop harder than a fish out of water.


Again, the question is "*Is it a value buy*?" _*Not*_ "*Should I buy it*?"
It's a simple answer: *Yes*. The 6900xt gets you the performance you are looking for at a lower, more affordable price than its competitor, so *it most certainly is a value* *in this market*. Holding this opinion doesn't mean you have to buy it. However, HU unanimously said no, citing reasons that only exemplify a *strong bias for the competitor*. Their examples were so over-the-top lopsided and partisanly biased they could have also included that "green" was a better color for the card.

And people say HU are even-handed. This is an example proving they are not, and it should let some "get woke" to how HU can flip-flop. Remember, they did a video saying how ridiculously priced the 6900xt/3090 was. Now that a 6900xt can be had in this market for less than a 3080 Ti, they still say no. It's hypocrisy at a new level of forked-tongue heights, requiring a 10-year gymnast to give you play-by-play narration of the mental gymnastics needed to say on one hand that it costs too much and therefore isn't a value buy, and later on admit it's a value buy but still prefer the competitor.

In any case, it appears that 6900xt prices have dropped in a few markets. This is what I originally wanted to discuss until I heard their ludicrous response. Which goes back to what I was saying before: if what they offer is satisfactory and affordable for you, buy it now before this 'new shortage' raises prices again (IMO). Hopefully that doesn't happen.


----------



## ZealotKi11er

OC-NightHawk said:


> Don't blame this on mining only. There where a lot of machines built around distance learning for kids and remote work for adults. At a certain point most miners run out of power capacity for their house or cannot handle the thermal loads. I suspect most miners are capped at around a maximum of 30 cards or around 4000 to 7000W total. I blame the scalpers for making things harder and more expensive then they needed to be.


Mining will keep the prices high; there is never enough, and it's too late to get into mining. 30 cards is a lot of cards for one miner; that could be 30 gamers. We saw with the RTX 30 launch that we had to wait, but it was not this bad; 1-2 months is nothing really.


----------



## OC-NightHawk

EastCoast said:


> Utter rubbish explanation for the sudden "shortage" of other components. If you have to say "partly" you are just speculating no different then I am. Furthermore, we are talking about components for parts not the parts themselves. Which has more to do with how companies shorted orders. I will see it as sus for the reason I've explained. Even though it's my own opinion.
> 
> --------------------------------------------------------------------
> Anyway back at the ranch this is why I don't trust Steve no further then I can throw him.
> 
> 
> Again, the question is "Is it a value buy?" Not, "Should I buy it?"
> It's a simple answer. Yes, if the 6900xt gets you the performance you are looking for and it's a price you can afford then most certainly.
> Wow, they said no. And people say they are even handed. No, they are not. They are telling people to pay for the competitor because their own biases.
> 
> In any case it appears that the 6900xt prices have drop in a few markets. This is what I originally wanted to discuss until I heard their ludicrous response. Which goes back to what I was saying before. If it's affordable to you buy now before this 'new shortage' raises prices again. Hopefully it doesn't.


Is that Hardware Unboxed or Gamers Nexus? This is part of the reason I don't bother much with YouTube channels beyond specs and product discovery. Gamers Nexus will do tests in such a way as to skew the data towards the outcome they want. I find it much more valuable to see results from regular users.



ZealotKi11er said:


> Mining will keep the prices high. There is never enough and too late to get into mining. 30 cards is a lot of cards to get for one mininer. That could 30 gamers. We saw with RTX3 launch that we had to wait but it was not this bad 1-2 months is nothing really.


First, you are exhibiting a sense of entitlement, as if those cards are yours and miners do not have a right to purchase them. Further, you seem to be implying that miners are hoarding them and in some way keeping you from buying them. You are overlooking scalpers, and the fact that a lot of miners have been buying used parts from people who upgraded to newer cards. Nothing is preventing you from buying cards too in order to better afford your other components. You might consider the lessons of the rich vs poor mindsets. What REALLY Separates The Rich from The Poor - Rich vs Poor Mindset - YouTube


----------



## Simzak

Ordered my 6900 XT Z Trio @EastCoast thanks a lot for the help


----------



## OC-NightHawk

Simzak said:


> View attachment 2527128
> 
> Ordered my 6900 XT Z Trio @EastCoast thanks a lot for the help


Out of curiosity, are you planning on installing it vertically or traditionally? It does have a very nice heat sink design.


----------



## Simzak

OC-NightHawk said:


> Out of curiosity are you planning on installing it vertically or traditionally. It does have a very nice heat sink design.


Just traditionally, might look at getting a vertical bracket later.


----------



## EastCoast

OC-NightHawk said:


> Is that Hardware Unboxed or Gamers Nexus. This is part of the reason I don't bother much with youtube channels beyond specs and product discovery. Gamer's Nexus will do tests in such a way as to skew the data towards the outcome they want. I find it much more valuable to see the results from regular users.


It was from today's HU video. There are a few here on OCN who take their videos as 'gospel'. Even when I point out examples like this showing how one-sided they are (and have always been), some make excuses for them. This time, I decided to take an excerpt from their video showing why one shouldn't be so gullible.

What's interesting is that I originally was going to use that video to reinforce a purchase decision now, if (and that's an if) prices don't go any lower. As they clearly state, prices are indeed lower than they have been in days past.




Simzak said:


> View attachment 2527128
> 
> Ordered my 6900 XT Z Trio @EastCoast thanks a lot for the help



I thought it was priced around $1500?? In any case you shouldn't be disappointed. 



Simzak said:


> Just traditionally, might look at getting a vertical bracket later.


QFT, the bracket that comes with it was obstructed by my audio card so I couldn't use it. But you are right to suggest installing the brace bracket. I've improvised by using a stand to hold up the PCIe-connector side of the card.


----------



## OC-NightHawk

EastCoast said:


> I thought it was priced around $1500?? In any case you shouldn't be disappointed.


That is very steep. It makes my Gigabyte RX 6900 XT Xtreme Waterforce look like a steal at $1800+tax and I remember how sick I felt parting with that much money for the card.


----------



## EastCoast

OC-NightHawk said:


> That is very steep. It makes my Gigabyte RX 6900 XT Xtreme Waterforce look like a steal at $1800+tax and I remember how sick I felt parting with that much money for the card.


I was very close to buying the XW but I had no information on that card. By the time I realized it was a good buy, the price had gone up to $2k. Now it's over $2.2k, and it seemed to increase right around the time of the news of the capacitor shortage. 🤔


----------



## OC-NightHawk

EastCoast said:


> I was very close on buying the XW but I had no information on that card. By the time I realize it was a good buy the price went up to $2k. Now it's over $2.2k now. And it seem to incease right around the time of the news of the capacitor shortage. 🤔


When it went from $1800 to $2000, Newegg did have a $200 mail-in rebate. The problem is that the rebate is gone now and the price went up another $50. Compared to the MSI air-cooled card it is a better deal if you intend to water-cool your card, because it already has the block, but it's still just insane. I remember when the MSRP of the RX 6900 XT was $999 and asking myself how badly I wanted it. Eventually this will sort itself out, but it is downright painful right now.


----------



## cfranko

My temps with my custom loop were great when I first built the loop. Now a month has passed since I built it and my 6900 XT temps are really high for a custom loop: 63 edge, 82-83 hotspot at 370 watts. Can anyone guess why temps are so high?

My Firestrike Extreme run:

I scored 27 550 in Fire Strike Extreme
AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11
www.3dmark.com


----------



## lawson67

..


----------



## lawson67

cfranko said:


> My temps with my custom loop were great when I first built the loop. Now a month passed since I built it and my 6900 xt temps are really high for a custom loop. 63 edge 82-83 hotspot at 370 watts. Can anyone guess why temps are so high?
> 
> My Firestrike Extreme run:
> 
> 
> I scored 27 550 in Fire Strike Extreme
> AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11
> www.3dmark.com


Did you use Thermal Grizzly Kryonaut? If you did, that might be why, as you're suffering from the pump-out effect over 80C. It seems fine for a few weeks, then temps go up. Kryonaut is stable up to 80C; past that it breaks down.


----------



## cfranko

lawson67 said:


> Did you use grizzly kryonaut?, if you did that might be why as your suffering from the pump out effect over 80c, so seems good for a few weeks then temps go up


I used Cooler Master Mastergel Maker


----------



## lestatdk

OC-NightHawk said:


> That is very steep. It makes my Gigabyte RX 6900 XT Xtreme Waterforce look like a steal at $1800+tax and I remember how sick I felt parting with that much money for the card.


I'm guessing those are prices in NZD not USD


----------



## lestatdk

cfranko said:


> My temps with my custom loop were great when I first built the loop. Now a month passed since I built it and my 6900 xt temps are really high for a custom loop. 63 edge 82-83 hotspot at 370 watts. Can anyone guess why temps are so high?
> 
> My Firestrike Extreme run:
> 
> 
> I scored 27 550 in Fire Strike Extreme
> AMD Ryzen 9 5900X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11
> www.3dmark.com


What kind of coolant are you using ?

Could be flow related or could be the seating.


----------



## cfranko

lestatdk said:


> What kind of coolant are you using ?
> 
> Could be flow related or could be the seating.


I use Corsair XL5 as coolant.

Right now I set all my fans to 100% speed and changed the power limit to 330 watts, and the maximum edge temp was 58 and hotspot was 72. I think my issue is with the coolant temperature, but I'm not sure.


----------



## lestatdk

cfranko said:


> I use corsair xl5 as coolant
> 
> Right now I set all my fans to 100% speed and changed the power limit to 330 watts and the maximum edge temp was 58 and hotspot was 72. I think my issue is with the coolant temperature I am not sure though


Some coolants are not meant for long term use. I don't know anything about Corsair coolant variants to be honest.

You need to check if the flow is OK. Can you see if the coolant circulates properly? In my setup it's actually a bit hard to see it moving unless I look really closely.


----------



## lestatdk

I would recommend always running the pump at 100%. So start by checking the pump speed, and then check if the coolant is actually circulating.


----------



## cfranko

lestatdk said:


> I would recommend always run the pump at 100%. So start by checking pump speed. And then check if the coolant is actually circulating


When I look inside my GPU block I can't see the water moving; it looks like the water is just sitting there, static. Sometimes I can see the water moving through it, but right now I can't. The pump is at 100% speed.


----------



## Simzak

EastCoast said:


> It was from today's video from HU. There are a few here on OC that take their video as 'gospel'. Even when I point out examples like this showing how one sided they are (and have always been) some make excuses for them. This time, I decided to take an excerpt from their video showing how one shouldn't be so gullible.
> 
> What's interesting was that I originally was going to use that video to reinforce a purchase decision now if (and that's an if) prices don't go any lower. As they clearly state prices are indeed lower then it has been in days pass.
> 
> 
> 
> 
> I thought it was priced around $1500?? In any case you shouldn't be disappointed.
> 
> 
> QFT, the bracket that comes with it was obstructed by my audio card so I couldn't use it. But you are right to suggest the installation of the brace bracket. I've improvised by using a stand to hold up the pcie connector side of the card.


It's $1600 USD / $2300 NZD. Most 6900 XTs here are around $2800 NZD and 3080s are around $2500.


----------



## lestatdk

cfranko said:


> When I look inside my gpu block I can’t see the water moving it look like the water is just sitting there static, however sometimes I can see the water moving through it but right now I can’t. The pump is at 100 speed


You might have to drain it and check if it's clogged. Consider getting a flow meter; I regret not fitting one the last time I had my loop drained.


----------



## OC-NightHawk

lestatdk said:


> I'm guessing those are prices in NZD not USD


Sadly no, USD.


----------



## lestatdk

OC-NightHawk said:


> Sadly no USD.


I was referring to Simzak's prices


----------



## KGV

Hi guys.
I'm joining the club.
I just bought an ASRock 6900 XT. Awesome card.
I was choosing between an RTX 3080 Ti and a 6900 XT, and I had a GTX 1080 that I bought in 2017 when they were released.
So I chose the 6900 XT.
I paid 1905 USD for the card, and 200 USD for a new 1000W Gold PSU (replacing my old 750W Gold PSU).


----------



## EastCoast

KGV said:


> Hi guys.
> I'm joining the club.
> I just buy Asrock 6900XT. Awesome card.
> I was choosing between rtx3080ti and 6900xt. And i was having gtx1080 that i buy in 2017 when they was released.
> So i choose 6900xt.
> I did pay 1905 USD for the card, and 200usd for the new 1000W Gold PSU. (replacing my old 750 gold PSU)


Welcome
Hey could you post some pics of the card and box? I haven't seen this card sold in my area.


----------



## kratosatlante

KGV said:


> Hi guys.
> I'm joining the club.
> I just buy Asrock 6900XT. Awesome card.
> I was choosing between rtx3080ti and 6900xt. And i was having gtx1080 that i buy in 2017 when they was released.
> So i choose 6900xt.
> I did pay 1905 USD for the card, and 200usd for the new 1000W Gold PSU. (replacing my old 750 gold PSU)


In Argentina, my 6900 XT ASRock Phantom, bought in March, cost 2,034 USD; my 6900 XT ASRock Formula, bought in September, cost 1,555 USD. Here the price varies a lot with the price of Ethereum, the stock, the obstacles to imports, and so on.


----------



## J7SC

All is well that ends well !  -



Spoiler



To briefly recap: I picked up a 6900XT about four months or so ago...it was the only one they had anyhow, never mind at a good MSRP, but I had already shortlisted the Gigabyte 6900 XT Gaming OC before, as it has the same PCB as the much more expensive 'H' Waterforce model (3x 8 Pin etc). When I opened the box, it must have been the least-expensive way to 'package' a card ever, with zero documentation, cables, stickers or anything else...but no matter, it turned out to be a great clocker on air for a non-'H' model. Per earlier posts, it would clock up to 2800 MHz in some benches...as long as I did not push MPT PL past 360 W; beyond that it simply got too hot, even with fans at the 3.7k rpm inferno setting.

I briefly tried a universal GPU water-block setup I had laying around from my HWBot days (complete w/rubber band 120mm VRM fan retention ). Then ordered a Bykski block for my exact model, double-checking everything. Long story short, the Bykski block I was sent didn't quite fit as it was made for the regular 2x 8 pin, fewer phase Gigabyte 6900XT. Again, it was advertised online for the exact Gaming OC (3x 8pin) model I have, and others ran into the exact same thing (there's a THW thread on it). In fairness, BykskiUS was quick to respond and allowed for an exchange, but did not have the 'correct' model for the 3x 8 pin available...and still doesn't (as of yesterday: 'available soon'). The only other problem was 'lost in translation'...the included manual of the otherwise high-quality block and presentation was for a completely different CPU instead of GPU block, and only in Chinese...while the online 'manual' is just plain wrong re. paste vs. pads.

Per pics in the spoiler below, I finally decided to hard-mod the block, which involved cutting out some of the acrylic for extra caps on the longer (!) PCB, and also cutting and grinding away at the nickel-plated block to make room for the extra phases this card has over the base Gigabyte 6900XT model while still being able to cool those extra phases. I also removed a stand-off that otherwise came perilously close to touching PCB components. If you ever plan to do the same thing, _*'measure thrice and cut once'*_! I went through multiple rounds of test fitting with gobs of temp paste on all pressure points and that sort of thing...

I am thrilled to report that it worked out perfectly! GPU and Hotspot temps dropped by over 45 C with MPT PL...and I picked up about 20 MHz extra, give or take, but more importantly, it holds effective clocks for much, much longer given modern-day GPU boost algorithms - even with just a temporary cooling setup of a single rad for CPU and GPU and fans at 800 rpm. Every personal best benchmark result I had from before was immediately crushed.

I do think that some of it also has to do with the back-plate and modded cooling. The stock back-plate was not only ultra-thin and flimsy, but got searing hot - much, much hotter than my Asus 3090 Strix in another work-play setup, even though that one has the hotter-running 24 GB of GDDR6X double-sided VRAM. When I took the stock back-plate off, there seemed to be all kinds of pads missing, including for exposed double-sided VRM components. So I added Thermalright 12.8 W/mK pads there as well, not to mention a big helping of thermal putty (highly recommend it, btw) which I also used on the front-side VRAM, per pics. Then I added an absolutely massive heatsink from Amazon meant for big amps, along with a big slather of MX5 underneath...





Spoiler: mod Pics


----------



## Thick8

Yo All,
Just installed the Gigabyte 6900 xtxh water blocked card. Still running my geothermal cooling system. What's a good OC starting point for it? Have to reacquaint myself with AMD GPU tweaking.
John


----------



## Scorpion667

I found a way to use Afterburner while retaining the "minimum clock speed" benefit from the AMD Tuning tab. This works on 21H1, not tested on 11.
-find your stable OC with optimal Min/Max core clock values set in AMD Tuning tab; hit apply
-open afterburner and click the save button (Save to profile 1 or whichever)
-install AMD driver over itself except select Minimal Install and reboot
-load the profile you saved earlier in afterburner, hit save and set to apply on startup etc

No difference in boost behavior, 0.1% lows or benchmarks which is perfect as I love Afterburner. On a fresh 21H1 install the clock speeds vary wildly if OC is set in afterburner so this appears to be the workaround



Thick8 said:


> Yo All,
> Just installed the Gigabyte 6900 xtxh water blocked card. Still running my geothermal cooling system. What's a good OC starting point for it? Have to reacquaint myself with AMD GPU tweaking.
> John


I would love to see some pics of the cooling system! Set the core to 2700 and run 3DMark Time Spy. Bump it up 50 MHz until it crashes, then back off 25 MHz and run the Time Spy Stress Test. For memory, just try 2150 and test; adjust up or down 50 MHz as needed. Also keep a close eye on scores, as there is score regression when you OC too high, just like Nvidia...
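That bump-and-back-off procedure is easy to picture as a loop. A minimal sketch, assuming a hypothetical `passes_test` callback that stands in for manually running Time Spy at a given clock and watching for a crash:

```python
def find_stable_clock(passes_test, start_mhz=2700, step=50, backoff=25):
    """Raise the core clock in `step` MHz increments until a run fails,
    then settle `backoff` MHz below the first failing clock.

    `passes_test(mhz)` is a hypothetical stand-in for running 3DMark
    Time Spy at that clock and reporting whether it finished cleanly.
    """
    clock = start_mhz  # the starting clock is assumed stable
    while passes_test(clock + step):
        clock += step  # bump 50 MHz while runs keep passing
    # The first failure happened at clock + step, so back off 25 MHz
    # from it. Validate the result with the longer Stress Test after.
    return clock + step - backoff
```

For example, if a card silently tops out at 2820 MHz, the loop passes 2750 and 2800, fails at 2850, and settles on 2825, which is exactly why the Stress Test pass at the end still matters.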


----------



## kairi_zeroblade

Scorpion667 said:


> I found a way to use Afterburner while retaining the "minimum clock speed" benefit from the AMD Tuning tab. This works on 21H1, not tested on 11.
> -find your stable OC with optimal Min/Max core clock values set in AMD Tuning tab; hit apply
> -open afterburner and click the save button (Save to profile 1 or whichever)
> -install AMD driver over itself except select Minimal Install and reboot
> -load the profile you saved earlier in afterburner, hit save and set to apply on startup etc
> 
> No difference in boost behavior, 0.1% lows or benchmarks which is perfect as I love Afterburner. On a fresh 21H1 install the clock speeds vary wildly if OC is set in afterburner so this appears to be the workaround


how about the fast timings?? (memory tweaking)


----------



## Scorpion667

kairi_zeroblade said:


> how about the fast timings?? (memory tweaking)


0.1% lows in games and benchmark scores appear to be the same. I'm guessing the AMD Tuning Tab OC is writing something to the registry which doesn't get wiped when the driver is reinstalled over itself as the "minimal install". But I'm guessing it's not good. How could I test whether it's enabled?


----------



## kairi_zeroblade

Scorpion667 said:


> 0.1% lows in games and benchmark scores appear to be the same. I'm guessing the AMD Tuning Tab OC is writing something to registry which doesn't get wiped when the driver is reinstalled over itself as the "minimal install". But guessing it's not good. How could I test to see if it's enabled?


We have no clue how to test it either, as that option is unique to Wattman.


----------



## lestatdk

kairi_zeroblade said:


> we, exactly have no clue either as how to test it as well..as that option is unique to wattman alone..


Only by benching. It does give a small improvement in performance.


----------



## kairi_zeroblade

lestatdk said:


> Only by benching . It does give a small improvement in performance.


Oh, sorry I don't bench all day..hahaha..I mostly play and do some extra office work on my PC..I care less for E-Peen as it won't feed me nor support my ludicrous PC hobby..hahaha


----------



## hangquan0

Hi guys! I was torn between a 3080 Ti and a 6900 XT. I finally chose a 6900 XT with the water-block design from Gigabyte, so I get strong gaming performance and some slight overclocking headroom. However, I suspect this block is not really as good as EK or other top-tier blocks, and I hear coil whine under full load.


----------



## KGV

EastCoast said:


> Welcome
> Hey could you post some pics of the card and box? I haven't seen this card sold in my area.


----------



## amigafan2003

hangquan0 said:


> Hi guys! I was confused between choosing 3080ti and 6900xt. I finally chose a 6900xt for myself with water block design from Gigabyte. So I can feel the strongly power gaming and sighly overclocking from it. However, I suspect that this block is not really as good as EK or others top tier block cooling. And I hear the coil-while when full-loading.
> 
> View attachment 2527340
> 
> 
> View attachment 2527345


I had to repaste my GB 6900 XT WF to get acceptable temps.

And yes, mine has coil whine as well.


----------



## hangquan0

amigafan2003 said:


> I had to repaste my GB 6900 XT WF to get acceptable temps.
> 
> And yes, mine has coil whine as well.


what kind of thermal pad did you change?


----------



## Thick8

What's with the coil whine? Is this an AMD issue? It sounds like mini bagpipes playing.


----------



## hangquan0

Thick8 said:


> What's with the coil whine? is this an AMD issue? Sounds like mini bagpipes playing.


It's like the sound of a mosquito flying in your ear. It's normal for a graphics card, no problem.


----------



## 99belle99

Thick8 said:


> What's with the coil whine? is this an AMD issue? Sounds like mini bagpipes playing.


You get it on Nvidia cards as well, and will on Intel cards when they release them. I actually have coil whine too, but only hear it for a few seconds until my fans come on: I start a game, hear a bit of coil whine while the fans are still at zero, then the fans switch on and I can no longer hear it. Just thinking out loud, but I suppose if I were to watercool the card I would probably hear the coil whine all the time.


----------



## tolis626

Scorpion667 said:


> I found a way to use Afterburner while retaining the "minimum clock speed" benefit from the AMD Tuning tab. This works on 21H1, not tested on 11.
> -find your stable OC with optimal Min/Max core clock values set in AMD Tuning tab; hit apply
> -open afterburner and click the save button (Save to profile 1 or whichever)
> -install AMD driver over itself except select Minimal Install and reboot
> -load the profile you saved earlier in afterburner, hit save and set to apply on startup etc
> 
> No difference in boost behavior, 0.1% lows or benchmarks which is perfect as I love Afterburner. On a fresh 21H1 install the clock speeds vary wildly if OC is set in afterburner so this appears to be the workaround
> 
> 
> 
> I would love to see some pics of the cooling system! Set core to 2700 and run 3dmark Time Spy. Bump it up 50 Mhz until it crashes then back out 25Mhz and do the Time Spy Stress Test. For memory just try 2150 and test; adjust up or down 50Mhz as needed. Also keep a close eye on scores as there is score regression when you OC too high just like Nvidia...


Ehm... Or you can go to Afterburner and click on "Custom Curve" or whatever it's called under the clockspeed slider and then slide up the minimum frequency to where you want it and achieve the same result. Just sayin'.


----------



## EastCoast

99belle99 said:


> It's real from lockdowns due to CV19. I don't know about other countries but here in Ireland we were locked down for months and during that time I was watching YouTube videos from America and the videos it looked as if nothing changed in America. I see by the flag you're from America so you may not have experienced a lock down. For months I could only go 5km's from my house for exercise and everything else it was like being stuck in a open prison and not meet up with anybody for social reasons. You could go into shops but there were big que's spread out(social distancing) outside and only a certain number allowed in shops at the same time. Masks had to be worn and then gel your hands. I'm not sure how strict they were in Asian countries as most components are made mostly in Asia and I suppose they were locked down too. Probably not as much as we were here in Ireland.





EastCoast said:


> Oh we had a lock down over here. Nothing different except a lot of work from home and schooling from home. But business would have told investors months ago that there might be areas of improvement in future quarters do to forecasting.
> 
> For the news of component shortages to come this late in this cycle, after business have already restarted for a while now, makes it 'sus' to me. They would have known that well before now.
> 
> I remember reading an article about some big wig in China (or was it Taiwan) supplying subtrates and other components say that there is no shortage. He said he's already back up to 100% capacity. However, it would take a while to get those components to the world. There is an issue was shipping existing order do to a backlog. IE: getting ships, trains, planes, etc to deliver as all of that came to a halt. The other is how long it take for his order to get to the world moving forward. Which he said would take about a year or so. There were other points he made but I don't recall. So, if he says there is no shortage. I simply can't take this new-news at face value.





And here is the proof. News outlets all over the country are taking notice of the lack of capacity to process shipments of cargo that were ordered and paid for. They don't have enough workers, yet they try to treat it as business as usual instead of doing something about it: an all-hands-on-deck response, emergency hiring, etc. This is happening everywhere. This is why I don't see it as a "chip shortage" as implied. That guy already said he's back up to capacity, and that it was a shipping logistics issue, and he was right!

This will get alleviated, even if it takes government involvement. Once it is, prices will go back down to normal IMO.


----------



## ZealotKi11er

EastCoast said:


> And here is the proof. News outlets all over the country are taking notice of the lack of capacity to process shipment of cargo that was ordered and paid for. They don't have enough workers. Yet, they try to treat it as business as usually instead of trying to do something about. An all hands on deck. Emergency hiring, etc. This is happening everywhere. This is why I don't see it as a "chip shortage" as implied. That guy already said he's back up to capacity. And said that it was shipping logistic issue and he was right!
> 
> This will get alleviated even if it take government involvement. Once alleviated prices will go down back to normal IMO.


Chip shortage is just a term because it affects us directly. A delay at any point of the production chain is a shortage.


----------



## Thick8

99belle99 said:


> You get in on Nvidia cards as well and will do on Intel when they will release cards. I actually have coil whine too but only hear it for a few seconds until my fans come on. So start game hear a bit of coil whine when on zer0 fans and then my fans switch on and I can no longer hear it. I'm just thinking out loud now but I suppose if I was to watercool the card I would probably hear coil whine all the time.


I have no fans, so it's a little more pronounced. Maybe I didn't notice it on the 1080 Ti because I always have my VR HMD on when gaming. Haven't done any benchmarking in about 4 years.


----------



## Neoki

Ended up returning my Gigabyte Waterforce. The new XFX Zero WB (also XTXH) should arrive Tuesday. Will be interesting to see if their claims of 3Ghz hold any truth.


----------



## OC-NightHawk

My Gigabyte RX 6900 XT Xtreme Waterforce gets close to 2800MHz. It would be worth knowing if the XFX can get another 220MHz.


----------



## Neoki

OC-NightHawk said:


> My Gigabyte RX 6900 XT Xtreme Waterforce gets close to 2800MHz. It would be worth knowing if the XFX can get another 220MHz.


Yeah I'm jealous of yours. Nice card! 

I didn't like how I wouldn't be able to take apart the block for cleaning, though. I like that the Zero is an EK block with the standard easy-access screws. I'll share my findings soon!


----------



## Thick8

Neoki said:


> Yeah I'm jealous of yours. Nice card!
> 
> I didn't like how I wouldn't be able to take apart the block for cleaning though. I like that the Zero is a EK block with the standard easy access screws. I'll share my finds soon!


That's the card I wanted but couldn't get an answer from XFX about a release date. Hopefully you can get it up and running by Thursday as that's the last day I can RMA the Gigabyte at the Egg.


----------



## EastCoast

Neoki said:


> Ended up returning my Gigabyte Waterforce. The new XFX Zero WB (also XTXH) should arrive Tuesday. Will be interesting to see if their claims of 3Ghz hold any truth.


Why did you return it?


----------



## Neoki

EastCoast said:


> Why did you return it.


Very bad hotspot temps, even after 2 repastes. Max load would be around 50 Edge, 90 Hotspot. I could not get it above 2660 stable and 2720 peak in Timespy. Others appear to have gotten better bins of this card; apparently mine was a dud. I also realized that future maintenance on the very badly designed proprietary waterblock would be next to impossible, as the aluminium-and-plastic cover hides the actual waterblock screws. You can remove it from the PCB, but not take it apart without ruining the aesthetics of the block.

After all that, I ended up having a clash between the RGB Fusion software and the iCUE that I run my pump/res, fans, and CPU block on. I actually got a corrupted Windows install after RGB Fusion would not stop erroring out. I'd already had 2 bad previous experiences with Gigabyte; this just sealed the deal for me, and I realized that since I was still in the return window, I needed to stop trying. Especially when, the day after all that Windows mess happened, the XFX card became available on Newegg for pre-order. It was like a sign from the PC Gods.


----------



## kratosatlante

6900 XT ASRock Formula: stock paste vs Kryonaut Extreme vs MasterGel Maker
stock

kryo

kryo more paste

mastermaker gel and paste in thermal pads all

mining

Kryonaut Extreme:
First application (very tight): bad hotspot temps, peaks of 110C. I tried again with the same amount, only snugging the die and tightening the other screws all the way; there was almost no paste left on the die.

Second application: more paste, medium tightness on the die screws, backplate screws at 80-90%. I put paste on the VRAM thermal pads. While mining, the Formula was at 68-74C versus 58-62C on the ASRock Phantom, a roughly 10C difference; VRAM temps dropped 8C and the core 2-3C.

MasterGel Maker: paste on all thermal pads, all screws tightened 80-90%, die fully tightened. About 100 points less in SS versus the stock paste, core +3C, hotspot +4C.

Mining VRAM temps: 54C Formula, 56C Phantom, both with VRAM undervolted to 1.30V.


----------



## J7SC

...that modded water-block, custom back-plate and paste/putty combo I outlined yesterday is already showing some good promise for a non-H 6900XT. I'm not on full MPT PL (currently 360W + 15% slider) nor full clocks for Port Royal ray tracing test yet as I'm still on a temp loop w/ a single 360 for both 3950X and 6900XT - still Port Royal ray tracing bench result below is getting deep into NVidia RTX 3080 territory already...


----------



## EastCoast

*kratosatlante*

The original paste + mount gave the best performance.
I know KE runs a little thin. But the mount seems to be more to the top side of the die. But I also see you applied more KE. 
Hmm...


----------



## OC-NightHawk

I updated my machine (Building a Ryzen 5950X with RX 6900 XT system | Overclock.net) and added an XD3 pump to increase the flow, and I just inched out a little bit more of a score in Firestrike Extreme. Result (3dmark.com)

I made no adjustments to the settings I used previously, so this was purely the hardware functioning better because of the improved thermal management. The CPU peaked at 72.63C and the GPU peaked at 54C in this test, whereas during my previous best run the CPU peaked at 76.63C and the GPU peaked at 59C. So that is an improvement of 4C for the CPU and 5C for the GPU.


----------



## OC-NightHawk

I tried a Port Royal run and got this result, AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,Micro-Star International Co., Ltd. PRESTIGE X570 CREATION (MS-7C36) (3dmark.com) .

Edit:
added CPU-Z and GPU-Z


----------



## OC-NightHawk

I was just looking on Newegg comparing the XFX and Gigabyte card and noticed that the Gigabyte variant in the US store is $1910 plus tax and shipping. It’s below $2000 right now and it’s been a good card for me. The XFX is a hundred less and in stock too.


----------



## lDevilDriverl

kratosatlante said:


> 6900xt asrock formula stock paste vs kryonaut extreme vs mastemaker gel
> stock
> 
> First application (very tight) bad hotspot temps peaks of 110, I tried with the same amount and just adjusted the die, tighten the other screws all the way, there is almost = no paste on the die.
> 
> 2 application plus paste and screws, medium die adjustment, blacplate adjustment at 80-90%, I put paste on the thermal pads of the vram had 10c of dif with the asrock phantom, between 68c and 74c the formula mining vs 58c at 62c, the ram temp lowered 8c, core 2 or 3 c.
> 
> Mastermaker gel, paste on all thermal pads, tighten 80-90% all screws, x tighten the die
> 100pts less in ss vs pasta stock, 3c + core, 4c + hospot.
> 
> Mining vram 54c formula , 56c phantom, both uv vram a 1.30v


Hi,
6900 XT stable in mining at 0.65 Vcore and 1250 MHz? Was the min Vcore changed using MPT?


----------



## EastCoast

Did anyone play the Halo Infinite beta with their 6900 XT? Any comments on the performance?


----------



## Thick8

What is considered a good clock for the XTXH-series cards? I'm stable at 2880 GPU and 2200 memory. It's using 380 watts as per Radeon Software during Timespy, so I'm not sure if there is any more it can give.


----------



## OC-NightHawk

Thick8 said:


> What is considered a good clock fot the XTXH series card? I'm stable at 2880 GPU and 2200 Memory. It's using 380 watts as per Radeon Software during Timespy so I'm not sure if there is any more it can give.


That seems very fast. What were your MPT settings when you ran your card at that speed?


----------



## Thick8

OC-NightHawk said:


> That seems very fast. What are your MPT settings when you ran your card at that speed.


I've just switched back to AMD so I'm not sure what you're asking. It's set at + 15% watts. The voltage slider is at 1200mV. Temperature is 74c. It's the new Gigabyte WB card.


----------



## OC-NightHawk

Thick8 said:


> I've just switched back to AMD so I'm not sure what you're asking. It's set at + 15% watts. The voltage slider is at 1200mV. Temperature is 74c. It's the new Gigabyte WB card.


You got timespy to pass at 2880MHz at +15% power of a stock Gigabyte RX 6900 XT Xtreme Waterforce without using More Power Tool to increase the power limits? Can I see a link to your results on 3DMark?


----------



## ZealotKi11er

OC-NightHawk said:


> You got timespy to pass at 2880MHz at +15% power of a stock Gigabyte RX 6900 XT Xtreme Waterforce without using More Power Tool to increase the power limits? Can I see a link to your results on 3DMark?


Most likely at 380w he is not hitting those clks.


----------



## Neoki

Thick8 said:


> What is considered a good clock fot the XTXH series card? I'm stable at 2880 GPU and 2200 Memory. It's using 380 watts as per Radeon Software during Timespy so I'm not sure if there is any more it can give.


Much faster than what I was able to achieve on the same card. I could just touch 2750 at 390 W in MPT, but it was a gamble whether Time Spy would crash on graphics test 2. 2660 was stable at 350 W in MPT, and 2715 seemed to hold up OK, but I could never once go above 2750 without a lockup, BSOD, or Time Spy crash. Memory was fine at 2100, and 2110-2130 seemed OK, but above that it was also a gamble, especially once I pushed the core above 2710.

I was also fighting really bad temp deltas that seemed unfixable: 50-52°C edge, 90-92°C hotspot at peak load.


----------



## 99belle99

Neoki said:


> Much faster than what I was able to achieve on the same card. I could just touch 2750 at 390 W in MPT, but it was a gamble whether Time Spy would crash on graphics test 2.


What is it about graphics test 2 that causes the crashes? I can always pass the first test, but once on GT2 it's a gamble whether it will pass or not when pushing the limits.


----------



## OC-NightHawk

ZealotKi11er said:


> Most likely he's not actually hitting those clocks at 380 W.


I'm thinking that too, but it would be interesting to see the average clock speed as another data point on where these cards actually operate.


----------



## CS9K

99belle99 said:


> What is it about graphics test 2 that causes the crashes? I can always pass the first test, but once on GT2 it's a gamble whether it will pass or not when pushing the limits.


Load. My reference RX 6900 XT under water touches 416 W once and 410 W a few times in Time Spy GT2 at 2700 MHz, whereas it barely hits 360-380 W in most games and in GT1.


----------



## amigafan2003

hangquan0 said:


> what kind of thermal pad did you change?


I didn't change any of the pads, just repasted.


----------



## cfranko

CS9K said:


> Load. My reference RX 6900 XT under water touches 416 W once and 410 W a few times in Time Spy GT2 at 2700 MHz, whereas it barely hits 360-380 W in most games and in GT1.


You're really lucky if you ask me; a reference card doing 2700 MHz stable in Time Spy is rare. My ASRock card can't do that.


----------



## OC-NightHawk

J7SC said:


> ...that modded water-block, custom back-plate and paste/putty combo I outlined yesterday is already showing some good promise for a non-H 6900XT. I'm not on full MPT PL (currently 360W + 15% slider) nor full clocks for Port Royal ray tracing test yet as I'm still on a temp loop w/ a single 360 for both 3950X and 6900XT - still Port Royal ray tracing bench result below is getting deep into NVidia RTX 3080 territory already...
> 
> View attachment 2527458


I just noticed that your More Power Tool Power and Voltage section has more controls than mine. How do I get those extra controls?


----------



## cfranko

OC-NightHawk said:


> I just noticed that your More Power Tool Power and Voltage section has more controls than mine. How do I get those extra controls?


You need to update More Power Tool to the latest beta version.


----------



## J7SC

OC-NightHawk said:


> I just noticed that your More Power Tool Power and Voltage section has more controls than mine. How do I get those extra controls?


...latest MPT - beta 10 > here


----------



## kratosatlante

lDevilDriverl said:


> Hi,
> 69000xt stable in mining on 0.65vcore and 1250Mhz? was min vcore chenged using MPT?


Yes, I'm using MPT to undervolt both 6900 XTs (an ASRock Phantom and a Formula) to 650 mV on the core, and the VRAM from 1.35/0.850 V to 1.30/0.80 V. Both are 100% stable mining at 1250 MHz core. The Formula will go to 1500 MHz at 650 mV, but playing YouTube or a very old game drops the hashrate from ~60 to 56 MH/s; it's 99% stable at 1500 but occasionally falls over while watching YouTube or gaming. The Phantom picks up about 1 MH/s now running without a screen connected; I haven't tested whether its core is stable up at 1500 as well.


----------



## CS9K

cfranko said:


> You're really lucky if you ask me; a reference card doing 2700 MHz stable in Time Spy is rare. My ASRock card can't do that.


It's about in line with what I've seen around the 'net for water-cooled reference RX 6900 XTs. Most people never make it to 2700 MHz on air just due to thermals. That 416 W peak is on water; add 40 W to it on air just due to the extra heat. One doesn't want to run a reference GPU above 400 W very often, especially on air.


----------



## Thick8

OC-NightHawk said:


> You got timespy to pass at 2880MHz at +15% power of a stock Gigabyte RX 6900 XT Xtreme Waterforce without using More Power Tool to increase the power limits? Can I see a link to your results on 3DMark?


I apologize. That was supposed to be 2800, not 2880. This is the first time I've used the new Radeon software and it has me a little perplexed. The results don't display what I enter into the tuning window; I assume what's displayed is the actual GPU frequency achieved, which is significantly less than what I entered. I downloaded MorePowerTool, but it lacks any documentation on its use and I can't seem to find any. Do I have to load a BIOS into it, make changes, then save it? Do I then have to flash the card's firmware with this new BIOS?
Here's a link to my highest result so far. AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VII HERO (3dmark.com)


----------



## tolis626

Thick8 said:


> I apologize. That was supposed to be 2800, not 2880. This is the first time I've used the new Radeon software and it has me a little perplexed. The results don't display what I enter into the tuning window; I assume what's displayed is the actual GPU frequency achieved, which is significantly less than what I entered. I downloaded MorePowerTool, but it lacks any documentation on its use and I can't seem to find any. Do I have to load a BIOS into it, make changes, then save it? Do I then have to flash the card's firmware with this new BIOS?
> Here's a link to my highest result so far. AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VII HERO (3dmark.com)


You need to go into GPU-z and export your card's BIOS (the little arrow next to the BIOS readout). Then you go into MPT, click load and then find the BIOS file you exported. It reads the values it needs from the BIOS, you change whatever you want (although realistically you'd only need to change power limit with an XTXH, unless you want to undervolt) and then click "Write SPPT". No flashing BIOS needed. It's actually pretty easy once you get around to it.


----------



## Thick8

tolis626 said:


> You need to go into GPU-z and export your card's BIOS (the little arrow next to the BIOS readout). Then you go into MPT, click load and then find the BIOS file you exported. It reads the values it needs from the BIOS, you change whatever you want (although realistically you'd only need to change power limit with an XTXH, unless you want to undervolt) and then click "Write SPPT". No flashing BIOS needed. It's actually pretty easy once you get around to it.


Thanks for that.
So I only raised the power and TDC limits. AMD stress test watts rose to a peak of 460w. Gonna give Timespy another go.


----------



## OC-NightHawk

I pushed my Fire Strike Extreme score up a little more:
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,Micro-Star International Co., Ltd. PRESTIGE X570 CREATION (MS-7C36) (3dmark.com)


----------



## OC-NightHawk

Thick8 said:


> I apologize. That was supposed to be 2800, not 2880. This is the first time I've used the new Radeon software and it has me a little perplexed. The results don't display what I enter into the tuning window; I assume what's displayed is the actual GPU frequency achieved, which is significantly less than what I entered. I downloaded MorePowerTool, but it lacks any documentation on its use and I can't seem to find any. Do I have to load a BIOS into it, make changes, then save it? Do I then have to flash the card's firmware with this new BIOS?
> Here's a link to my highest result so far. AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VII HERO (3dmark.com)


Yeah, in that regard AMD is a bit different. Overclocking a Radeon is a lot like Precision Boost Overdrive on a Ryzen CPU: you set the power limits through More Power Tool and set the GPU and memory speeds as high as you can, then the card does its best to reach the specified speed as long as it doesn't hit the thermal limit and throttle. The thing about an XTXH card like the Gigabyte RX 6900 XT Xtreme Waterforce is that it has a higher adjustable ceiling for GPU and memory speed, but its thermal ceiling is a lot lower. If you can manage the heat, your card stands a good chance of getting close to 2800 MHz.

My card struggles to reach 2782MHz. Timespy doesn't like it on graphics test 2 so I know it isn't 100% stable. I suspect that most of the Gigabyte 6900 XT Xtreme cards will reach 2700MHz and maybe 2750MHz. Beyond that more and more cooling is required and the power consumption of the card starts to become a factor. Maybe undervolting slightly might help my card out.

In terms of gaming, though, if you can reach 2700 MHz and get the memory up to 2150 MHz, you'll easily be golden for any game at 1440p 144 Hz or 4K 60 Hz.


----------



## EastCoast

21.10.1 out


----------



## OC-NightHawk

EastCoast said:


> 21.10.1 out


3DMark tells me the driver is not approved in the application, but comparing results online shows them as valid.


----------



## cfranko

Should I get Nitro+ SE or Merc 319? Same price.


----------



## cfranko

And is the liquid-cooled reference 6900 XT any good? I'm asking because a 120 mm radiator seems too small.


----------



## gtz

cfranko said:


> Should I get Nitro+ SE or Merc 319? Same price.


I am a big fan of the Merc cooler; the one on my 6800 XT Merc is built better than the cooler on my 6900 XT Nitro+. I bought the Nitro because it's a slightly shorter card and fits in smaller cases, but the Merc has a superb cooler.


----------



## amigafan2003

OC-NightHawk said:


> 3DMark tells me the driver is not approved in the application, but comparing results online shows them as valid.


Yeah, it always takes a couple of days for new drivers to get approved.


----------



## cfranko

gtz said:


> I am a big fan of the Merc cooler; the one on my 6800 XT Merc is built better than the cooler on my 6900 XT Nitro+. I bought the Nitro because it's a slightly shorter card and fits in smaller cases, but the Merc has a superb cooler.


The Merc is an XTX but the Nitro+ SE is an XTXH, though. Is it worth getting an XTX for the better cooler?


----------



## deadfelllow

gtz said:


> I am a big fan of the Merc cooler; the one on my 6800 XT Merc is built better than the cooler on my 6900 XT Nitro+. I bought the Nitro because it's a slightly shorter card and fits in smaller cases, but the Merc has a superb cooler.


Hmmm. So basically you're saying XFX Merc > Nitro+.
But I'm also wondering: is there any cooling difference between the Sapphire Nitro+ and the Nitro+ SE?


----------



## OC-NightHawk

deadfelllow said:


> Hmmm. So basically you're saying XFX Merc > Nitro+.
> But I'm also wondering: is there any cooling difference between the Sapphire Nitro+ and the Nitro+ SE?


I believe the SE version is an XTXH card. It will have higher limits. I would strongly advise putting a water block on it though because the thermal limit is lower.


----------



## deadfelllow

OC-NightHawk said:


> I believe the SE version is an XTXH card. It will have higher limits. I would strongly advise putting a water block on it though because the thermal limit is lower.


Precisely, I was thinking the same. But consider that I'm living in Kebapland (Turkey). A Sapphire Nitro+ SE costs 17k Turkish liras, and I just looked at the LC/WB cards, which are 24k+ liras  so it's really expensive. I don't think I'll be able to buy a custom loop or even a liquid-cooled card; I'll stick with air cooling. Thank you for your reply!

PS : Minimum wage in Turkey = 2850 turkish liras.

sad country


----------



## tolis626

cfranko said:


> The Merc is a XTX but the Nitro SE is a XTXH though. Is it worth getting a XTX for a better cooler?





OC-NightHawk said:


> I believe the SE version is an XTXH card. It will have higher limits. I would strongly advise putting a water block on it though because the thermal limit is lower.


Seems you guys fell for the same bad info as I did at first, before I bought the Nitro+ SE. It's not an XTXH, sadly, just an XTX. That said, it uses the same PCB as the Toxic, which should be better than the standard Nitro+. The cooler is mediocre; my temps are barely manageable at 330 W for gaming.


deadfelllow said:


> Precisely, I was thinking the same. But consider that I'm living in Kebapland (Turkey). A Sapphire Nitro+ SE costs 17k Turkish liras, and I just looked at the LC/WB cards, which are 24k+ liras  so it's really expensive. I don't think I'll be able to buy a custom loop or even a liquid-cooled card; I'll stick with air cooling. Thank you for your reply!
> 
> PS : Minimum wage in Turkey = 2850 turkish liras.
> 
> sad country


Hey neighbor. Greetings from Greece. Seems you guys are suffering from similar problems to us. Minimum wage here is like 550-600€ and a 6900XT goes for over 2k€ here. It's absurd.


----------



## deadfelllow

tolis626 said:


> Seems you guys fell for the same bad info as I had at first, before I bought the Nitro+ SE. It's not an XTXH, sadly, just XTX. That said, it uses the same PCB as the Toxic, which should be better than the standard Nitro+. The cooler is mediocre, my temps are barely manageable at 330W for gaming.
> 
> Hey neighbor. Greetings from Greece. Seems you guys are suffering from similar problems to us. Minimum wage here is like 550-600€ and a 6900XT goes for over 2k€ here. It's absurd.


Bro please. You can buy one in 4 months  think about me, I can buy one in 6 months lmao


----------



## lestatdk

cfranko said:


> Should I get Nitro+ SE or Merc 319? Same price.


The Merc has the better cooler of the two. So unless you plan to put a waterblock on, get the Merc.


----------



## tolis626

deadfelllow said:


> Bro please. You can buy one in 4 months  think about me, I can buy one in 6 months lmao


Well, in my defence, I said similar, not the same. 

Fortunately, this minimum wage stuff doesn't apply to me, but man is it bad over here for a lot of people. For most things we have Europe prices, but man, we don't even come close to Europe salaries. And electronics, as expensive as they've got over the years, are still a rather inexpensive hobby compared to others. As an example, I'm a doctor, and still a nice car with a larger engine (and I mean >2L of displacement) is prohibitively expensive, I can't afford to have one, with taxes and all. It's messed up. Of course, these past few months especially, you guys have it worse, but just sayin'... At least according to politicians you guys have big bad Greece to hate and disorient the masses, like we have big bad Turkey. Ridiculous...

Sorry for the off topic guys!


----------



## 99belle99

Are you sure the minimum wage is that high in Greece (€550-600)? I live in Ireland and we use the euro currency too, but afaik our minimum wage isn't that high.


----------



## 99belle99

Right, they are paid nothing over there in Greece. I didn't realise that was a month's wage. In Ireland workers are paid weekly, but per month the minimum wage works out to €1,574.


----------



## CS9K

Has anyone been brave enough to give 21.10.1 a try yet on Windows 10? I can't risk it since 21.9.2 works and I have flight sim to play Wednesday evening, with no spare time to fight with drivers.

Actually, you know what... no, I'm not going to fiddle with drivers anymore until I make the jump to Windows 11.


----------



## tolis626

99belle99 said:


> Are you sure minimum wage is that high is Greece(€550-600). I live in Ireland and we use the Euro currency too but afaik our minimum wage isn't that high.





99belle99 said:


> Right they are paid nothing over there in Greece. I didn't realise that was a months wage in Ireland they are paid weekly but per month they are paid €1574 minimum wage.


Right-o. It used to be even lower, sub 550€. And, mind you, the worst part is that we're not talking about the typical jobs you'd expect minimum wage from. You have young scientists, teachers, engineers and programmers making that kind of money. And that's if they're lucky sometimes, many employers will offer only part time jobs to these people, or convince them to work as interns for free or for scraps. Then you have full time workers, like teachers, making like 700€ for the first few years. And one of the worst offenders is my sector, healthcare. Sure, I was making a rather healthy 1600€ a month (healthy by Greek standards) when I started, but I also had to work endless hours and 24 hour shifts. We were paid 80€ per 24 hour ER shift. That's just over 3€/hr. For a doctor whom you expect to save lives. Then there were periods where I had to work over 80 hours a week, with the worst of the worst getting to over 100 hours in a single week. I basically lived at work. I was burnt out in my first year. Until one day I just quit in the most spectacular fashion I could, middle fingers blazing.

I get that there's other countries that have struggled too these past few years. But I always get the feeling that the Greek crisis has been misreported on. It wasn't just a recession. It wasn't only corruption or debt. Many people lost jobs, livelihoods, went into depression. My father (a civil engineer working on the water supply) used to have a respectable salary as head of his department back in 2007, as did my mother (a philologist). They bought a second car, renovated the house and decided to buy a piece of land near the sea where they wanted to build a cottage for when they'd retire. That required a loan. Two years later, both their salaries were cut in half, they had me in med school, my sister in high school and their loan payments didn't decrease one cent. And the situation got even worse as time went on. It only got somewhat better in the last 3 or so years. You can imagine what situations like that do to people. And my parents weren't among those affected in the worst ways.


CS9K said:


> Has anyone been brave enough to give 21.10.1 a try yet on Windows 10? I can't risk it since 21.9.2 works and I have flight sim to play Wednesday evening, with no spare time to fight with drivers.
> 
> Actually, you know what... no, I'm not going to fiddle with drivers anymore until I make the jump to Windows 11.


Unless there's a change of plans for me, I'll be your guinea pig. But don't expect my card to perform like yours. Just expect me to find out if anything's wrong.


----------



## CS9K

tolis626 said:


> Unless there's a change of plans for me, I'll be your guinea pig. But don't expect my card to perform like yours. Just expect me to find out if anything's wrong.


I would appreciate it! That said, if you have problems like I did on 21.9.1, you won't need to overclock for the drivers to faceplant.


----------



## tolis626

CS9K said:


> I would appreciate it! That said, if you have problems like I did on 21.9.1, you won't need to overclock for the drivers to faceplant.


Well, I didn't have driver problems since 21.7.2 came out. My problems have always been temps and the silicon lottery loser of a card I got. And it strikes again. I can't complete TimeSpy at 2600MHz, it reaches 105C hotspot and I have a warning set up in HWiNFO64 which pops up when it reaches that temperature and it shuts the benchmark down. I refuse to go below that, my ego can't handle it.

Weirdly, it failed to complete Superposition at 2750MHz, which it used to do fine. That said, 2750MHz is quite a bit above where this thing is stable at, so I wouldn't count this failure as a problem with the drivers. Sadly, I can't really be of much help until I get down to at least repasting the damn thing. I don't know if it's just my card or if the Nitro+ cooler is just plain inadequate, but my temps are horrible compared to others. It's not only the 30C or so delta that I get, that's not unheard of. It's that it reaches 75C edge too. And that's with the front panel removed and the side panel open. Granted, I did push it to 375W, but it mostly fails within GT1 due to temps.


----------



## xR00Tx

By any chance, has anyone tested the 21.10.1 driver?!


----------



## 99belle99

tolis626 said:


> Well, I didn't have driver problems since 21.7.2 came out. My problems have always been temps and the silicon lottery loser of a card I got. And it strikes again. I can't complete TimeSpy at 2600MHz, it reaches 105C hotspot and I have a warning set up in HWiNFO64 which pops up when it reaches that temperature and it shuts the benchmark down. I refuse to go below that, my ego can't handle it.


I have a reference card with the stock cooler and mine cannot do 2600 MHz in Time Spy either. I would have it set at that, but it's a fluke whether it passes graphics test 2 at that frequency.

I scored 20,056 in Time Spy, which is currently my best score.


----------



## 99belle99

xR00Tx said:


> By any chance, has anyone tested the 21.10.1 driver?!


No my PC is running pretty good at the minute so do not want that to change so I will pass. I also have a new MB bios and do not want to flash that either.


----------



## jonRock1992

xR00Tx said:


> By any chance, has anyone tested the 21.10.1 driver?!


I'm going to test it a little tonight. After work I'm going to clean-install Windows 11 with the new driver and hopefully get that 25k GPU score in Timespy 🤣

I was able to get back to 2066MHz FCLK with my 5800X after re-seating my RAM and CPU AIO and multiple CMOS resets, so hopefully that helps as well.

Update: Windows 11 with the latest driver is a big nope. I've lost 20 MHz on my GPU max clock for Timespy, and I'm getting driver crashes when GT2 fails. Lost about 300 points in Timespy graphics score on the run I was able to complete. I'm done testing this POS; going back to Win 10 and 21.9.2.


----------



## Justye95

Hi guys, I have a liquid-cooled 6900 XT. I saved the BIOS via GPU-Z and changed the watts via MPT, but in-game MSI Afterburner always reads 300 watts. Why? I have the latest AMD driver and Windows 11, and the latest beta version of MPT for Windows 11.


----------



## EastCoast

As far as I know it's not automatic in games like that.

Try 3DMark Fire Strike instead.


----------



## OC-NightHawk

I would suggest not using Afterburner. Try the overlay that the Radeon driver software provides; it reports these statistics. See whether that reports something closer to what you expect.


----------



## deadfelllow

Hey Guys!

I'm confused about the 6900 XTs. So there are XT cards and XTXH cards.

I just want to know:
*6900 XT Toxic Gaming OC Extreme Edition 11308-08-20G*
Is this card an XTXH model?

If it's an XTXH model, can it reach 2700+ core clock easily?


----------



## lestatdk

deadfelllow said:


> Hey Guys!
> 
> I'm confused about the 6900 XTs. So there are XT cards and XTXH cards.
> 
> I just want to know:
> *6900 XT Toxic Gaming OC Extreme Edition 11308-08-20G*
> Is this card an XTXH model?
> 
> If it's an XTXH model, can it reach 2700+ core clock easily?


According to TPU it's XTXH










Sapphire TOXIC RX 6900 XT Extreme Edition Specs (www.techpowerup.com): AMD Navi 21, 2525 MHz, 5120 cores, 320 TMUs, 128 ROPs, 16384 MB GDDR6, 2000 MHz, 256-bit


----------



## Scorpion667

deadfelllow said:


> Hey Guys!
> 
> I'm confused about the 6900 XTs. So there are XT cards and XTXH cards.
> 
> I just want to know:
> *6900 XT Toxic Gaming OC Extreme Edition 11308-08-20G*
> Is this card an XTXH model?
> 
> If it's an XTXH model, can it reach 2700+ core clock easily?


The 6900 XT Toxic Extreme Edition is guaranteed to hit 2730 MHz core via the one-click OC in the TriXX software.


----------



## The EX1

The Asrock OC Formula is $1629 on Newegg US this morning with promo code.


----------



## The EX1

deadfelllow said:


> Hey Guys!
> 
> I'm confused about the 6900 XTs. So there are XT cards and XTXH cards.
> 
> I just want to know:
> *6900 XT Toxic Gaming OC Extreme Edition 11308-08-20G*
> Is this card an XTXH model?
> 
> If it's an XTXH model, can it reach 2700+ core clock easily?


The Limited Edition Toxic is a normal XTX. The Extreme Edition is an XTXH, and the Toxic Boost feature in the TriXX software will typically push them over 2700 core.


----------



## cfranko

I have a 6900 XT with a Bykski waterblock on it. I built my custom loop about a month ago, and during the first week temps were great. Now, about a month in, temps are much, much worse. At first I had an 80°C hotspot at 400 watts; now it's 95°C at 400 watts. I didn't touch anything in the loop; it got this much worse completely by itself. Can anyone guess why the contact between the block and the GPU die degraded on its own? It's obviously not the water temps, because the ambient temperature is basically the same.


----------



## CS9K

cfranko said:


> I have a 6900 XT with a Bykski waterblock on it. I built my custom loop about a month ago, and during the first week temps were great. Now, about a month in, temps are much, much worse. At first I had an 80°C hotspot at 400 watts; now it's 95°C at 400 watts. I didn't touch anything in the loop; it got this much worse completely by itself. Can anyone guess why the contact between the block and the GPU die degraded on its own? It's obviously not the water temps, because the ambient temperature is basically the same.


_If_ your water loop is operating normally, and your water block is contacting the GPU core properly, then what you are observing is classic thermal compound "pump-out". One way to test this is to _carefully_ press your block against your GPU while the GPU is under load, to see if temperatures improve at all. If temperatures improve, then it is likely a contact issue, if temperatures do not improve, then it could be pump-out.

If you pull the water block off of the GPU, try your best to pull it straight "up" away from the GPU, so that you can take pictures of the thermal paste pattern on the GPU core and water block.


----------



## cfranko

CS9K said:


> _If_ your water loop is operating normally, and your water block is contacting the GPU core properly, then what you are observing is classic thermal compound "pump-out". One way to test this is to _carefully_ press your block against your GPU while the GPU is under load, to see if temperatures improve at all. If temperatures improve, then it is likely a contact issue, if temperatures do not improve, then it could be pump-out.
> 
> If you pull the water block off of the GPU, try your best to pull it straight "up" away from the GPU, so that you can take pictures of the thermal paste pattern on the GPU core and water block.


I tried pressing on the GPU block with my hand while under load; nothing changed. The thing is, I had an air cooler on this card before and the exact same thing happened: I would repaste, temps would be great for a week or two, then get worse again. I went with water cooling to prevent this, but it's happening again, and tbh I want to throw the PC out the window. With the air cooler I used MX-4; with the waterblock I used MasterGel Maker.

This was how it looked when I took apart the air cooler when it had MX-4









I don't have a similar photo with the waterblock because I never opened it up after building the loop, but if I decide to open it I'll take a photo of the thermal paste to see if it pumped out. At this point I want to try liquid metal. If liquid metal fixes it, good; if liquid metal breaks my GPU, then it's over. That means I'll be done with this GPU and move on, because I have no patience left to deal with this. It's really frustrating having to repaste my GPU every week to keep good temperatures.


----------



## jonRock1992

cfranko said:


> I tried pressing on the GPU block with my hand while under load; nothing changed. The thing is, I had an air cooler on this card before and the exact same thing happened: I would repaste, temps would be great for a week or two, then get worse again. I went with water cooling to prevent this, but it's happening again, and tbh I want to throw the PC out the window. With the air cooler I used MX-4; with the waterblock I used MasterGel Maker.
> 
> This was how it looked when I took apart the air cooler when it had MX-4
> View attachment 2527871
> 
> 
> I don’t have a similiar photo with the waterblock because I never opened it up after building the loop but If I decide to open it up ill take a photo of the thermal paste to see if it pumped out. At this point I want to try Liquid metal. If liquid metal fixes it good, if liquid metal breaks my GPU then its over. I am done with this GPU and move on. This is really frustrating


I'm using Thermalright Silver King with my bykski block. It works really well. Edge and hotspot delta is 15C max. Hotspot temp is also in the high 50's or low 60's most of the time. Just make sure you coat both the GPU die and block with a very thin layer. Also, I use liquid electrical tape around the GPU die to prevent shorts from possible run-off. Then hand-tighten your block as tight as you can get without stripping the screws or cracking the block.


----------



## cfranko

jonRock1992 said:


> I'm using Thermalright Silver King with my bykski block. It works really well. Edge and hotspot delta is 15C max. Hotspot temp is also in the high 50's or low 60's most of the time. Just make sure you coat both the GPU die and block with a very thin layer. Also, I use liquid electrical tape around the GPU die to prevent shorts from possible run-off. Then hand-tighten your block as tight as you can get without stripping the screws or cracking the block.


The thing is, most of the time the delta between the edge temperature and the hotspot temperature is only 13°C right now. But since the edge temperature is high, the hotspot is also high even though the delta isn't that much. I don't know what I should do.


----------



## CS9K

cfranko said:


> I tried pressing on the GPU block with my hand while under load; nothing changed. The thing is, I had an air cooler on this card before and the exact same thing happened: I would repaste, temps would be great for a week or two, then get worse again. I went with water cooling to prevent this, but it's happening again, and tbh I want to throw the PC out the window. With the air cooler I used MX-4; with the waterblock I used MasterGel Maker.
> 
> This was how it looked when I took apart the air cooler when it had MX-4
> View attachment 2527871
> 
> 
> I don’t have a similiar photo with the waterblock because I never opened it up after building the loop but If I decide to open it up ill take a photo of the thermal paste to see if it pumped out. At this point I want to try Liquid metal. If liquid metal fixes it good, if liquid metal breaks my GPU then its over. I am done with this GPU and move on. This is really frustrating


In the case of your MX-4, it had begun to pump out, though it doesn't look _as_ bad as some of the dry, crusty pictures one can find around the internet. The picture below shows the places where heatsink compound had already thinned out.









That said, I personally do not know anything about Mastergel Maker, but I looked it up. Let's compare density of a few compounds, in g/cm3:

More pump-out-resistant compounds:
3.73 - Gelid GC-Extreme
3.76 - Kryonaut Extreme
3.00 - EK Ectotherm

More pump-out-prone compounds:
2.60 - Mastergel Maker
2.49 - Noctua NT-H1
3.70 - Kryonaut
2.40 - Arctic MX-4
3.00? - Arctic Silver 5

Density isn't always directly related to viscosity; that depends on the emulsion the compound is formulated with, which in turn determines how quickly it may pump out. Take Kryonaut, for example: it's quite dense, but it pumps out at the speed of light on top of a bare die (yet ironically is one of the best pastes for heat-spreader applications). Likewise, Ectotherm is less dense than the other two in its group, but its higher viscosity makes it more resistant to pump-out than most less-dense compounds.

TL;DR: I implore you to give Gelid GC-Extreme a try if you do pull your block apart again. Spread it on top of the die with the included spreader, and make as even of a layer as possible across the entire die. Try not to have any dips or lower areas across the die. The layer should be thick enough that you can't see the GPU die through it after it has been spread.

By the looks of it, mounting pressure with the air cooler was fine, the paste just started to pump out is all. That _may_ be what has happened with your water block and Mastergel, too.


----------



## cfranko

CS9K said:


> In the case of your MX-4, it had begun to pump out, though it doesn't look _as_ bad as some of the dry, crusty pictures one can find around the internet. The picture below shows the places where heatsink compound had already thinned out.
> 
> View attachment 2527878
> 
> 
> That said, I personally do not know anything about Mastergel Maker, but I looked it up. Let's compare density of a few compounds, in g/cm3:
> 
> More-dense compounds:
> 3.76 - Kryonaut Extreme
> 3.73 - Gelid GC-Extreme
> 3.70 - Kryonaut
> 3.00? - Arctic Silver 5
> 3.00 - EK Ectotherm
> 
> Less-dense compounds:
> 2.60 - Mastergel Maker
> 2.49 - Noctua NT-H1
> 2.40 - Arctic MX-4
> 
> Density isn't always directly related to viscosity; that depends on the emulsion the compound is formulated with, which in turn determines how quickly it may pump out. Take Kryonaut for example: it's quite dense, but it pumps out at the speed of light on top of a bare die (yet ironically is one of the best pastes for heat-spreader applications). Likewise, Ectotherm is only moderately dense, but is more resistant to pump-out, thanks to higher viscosity, than most less-dense compounds.
> 
> TL;DR: I implore you to give Gelid GC-Extreme a try if you do pull your block apart again. Spread it on top of the die with the included spreader, and make as even of a layer as possible across the entire die. Try not to have any dips or lower areas across the die. The layer should be thick enough that you can't see the GPU die through it after it has been spread.
> 
> By the looks of it, mounting pressure with the air cooler was fine, the paste just didn't stay in place. That _may_ be what has happened with your water block and Mastergel, too.


If I just stick to the stock power limit (280W) I get 58C edge and 73C hotspot. I think the best solution at this point would be to run the card at the stock power limit and turn off the temperature monitoring software. That way the card won’t overheat, I won’t be staring at the horrible temperatures, and I can just try to enjoy my games. Thanks for the advice; if I ever feel like bothering with the GPU again I’ll try Gelid GC-Extreme, but at this point I’ll stick to the stock power limit and call it a day.


----------



## CS9K

cfranko said:


> If I just stick to the stock power limit (280W) I get 58c edge and 73c hotspot. I think the best solution at this point would be using the card at stock with the stock power limit and turn off the temperature monitoring software. This way the card wouldn’t overheat and I wouldn’t be seeing the horrible temperatures and just try enjoy my games. Thanks for the advice, if I ever feel like bothering with the GPU again ill try Gelid GC Extreme but at this point ill stick to the stock power limit and call it a day.


Sounds like a plan. Sorry you and others have had so many issues keeping your RX 6900 XTs cool. I do tech support for a living, and am a _turbonerd_ when it comes to messing around with hardware at home... I wish I could be there in person to help y'all get your gear running as well as it can. Alas, such is life on the internet :<


----------



## cfranko

CS9K said:


> Sounds like a plan. Sorry you and others have had so many issues with keeping their RX 6900 XT's cool. I do tech support for a living, and am a _turbonerd_ when it comes to messing around with hardware at home... I wish I could be there in person to help yall get your gear running as well as it can. Alas, such is life on the internet :<


I kind of regret not getting a top-of-the-line 3090, for example a ROG Strix 3090. I mean, they're double the price of a 6900 XT, but I'd rather pay double instead of dealing with all this stuff. I don't understand why most people with RX 6000 cards suffer from temperature issues while all 30-series cards are nice and cool when watercooled or air cooled. Everyone around me with either a 6900 XT or 6800 XT has 90C+ hotspot temps with air cooling, which is ridiculous if you ask me. I kind of whined like a baby, but I am sad that I always fail at what I want to achieve after all this effort.


----------



## ZealotKi11er

cfranko said:


> I kind of regret not getting a top of the line 3090, for example a ROG Strix 3090. I mean they're double the price of a 6900 XT but I'd rather pay double instead of dealing with all this stuff. I don’t understand why most of the people with RX 6000 cards suffer from temperature issues while all 30 series cards are nice and cool when watercooled or air cooled. Everyone around me with either a 6900 XT or 6800 XT have 90c+ hotspot temps with air cooling which is ridiculous if you ask me. I kind of whined like a baby but I am sad that I always fail what I want to achieve after all this effort


Lol, you get a 3090 and happy with 100c+ memory temperatures.


----------



## cfranko

ZealotKi11er said:


> Lol, you get a 3090 and happy with 100c+ memory temperatures.


They don’t reach 100C when playing games, and even if they do you can at least solve it with Thermal Grizzly thermal pads. Right now my problem seems unsolvable unless I try liquid metal, which is way too scary. Sure, I could try different thermal pastes, but I doubt it will make a difference.


----------



## CS9K

cfranko said:


> I kind of regret not getting a top of the line 3090, for example a ROG Strix 3090. I mean they're double the price of a 6900 XT but I'd rather pay double instead of dealing with all this stuff. I don’t understand why most of the people with RX 6000 cards suffer from temperature issues while all 30 series cards are nice and cool when watercooled or air cooled. Everyone around me with either a 6900 XT or 6800 XT have 90c+ hotspot temps with air cooling which is ridiculous if you ask me. I kind of whined like a baby but I am sad that I always fail what I want to achieve after all this effort


Look at Ampere coolers from this perspective: almost all Ampere air coolers focused solely on keeping the spicy 628mm² behemoth of an 8nm core cool, while most of those coolers left the memory out there to cook itself. It worked, for the most part; core temperatures on Ampere GPUs are lower than we see with RX 6800 XT and RX 6900 XT GPUs of most brands.

Also, look at the two brands' GPU's from a die-size perspective: 

Navi 21 die is 519mm²
Ampere die is 628mm²

So with both cores pulling the same 300W or 350W, the heat will be more concentrated on the AMD GPUs, which will thus be more sensitive to thermal compound choice and harder on said compound (increased pump-out rate).
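
The die-size point can be checked with quick power-density arithmetic (a back-of-the-envelope sketch; the 300W figure is just the example from above, not a measured core power):

```python
# Back-of-the-envelope power density for the two dies discussed above,
# assuming both dissipate the same 300 W through the core.
DIE_AREA_MM2 = {
    "Navi 21 (RX 6900 XT)": 519,
    "GA102 (Ampere)": 628,
}

POWER_W = 300

for name, area_mm2 in DIE_AREA_MM2.items():
    # Watts per square millimetre of die area
    print(f"{name}: {POWER_W / area_mm2:.2f} W/mm^2")

# Navi 21 comes out roughly 20% higher in W/mm^2, which is why the
# smaller die is harder on thermal paste at the same board power.
```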


cfranko said:


> They don’t reach 100c when playing games, even if they do you can at least solve it with Thermal Grizzly thermal pads. Right now my problem seems unsolvable unless I try liquid metal which is way too scary. Sure I could try different thermal pastes but I doubt it will make a difference


_🔥 laughs in 3090 FE and FTW3🔥 _


----------



## cfranko

CS9K said:


> Look at Ampere coolers from this perspective: Almost all Ampere air coolers focused solely on keeping the spicy 628mm² behemoth of an 8nm core cool, while most of the coolers left the memory out there to cook itself. It worked, for the most part, core temperatures on Ampere GPU's are lower than we see with RX 6800 XT and RX 6900 XT GPU's of most brands.
> 
> Also, look at the two brands' GPU's from a die-size perspective:
> 
> Navi 21 die is 519mm²
> Ampere die is 628mm²
> 
> So with both cores pulling the same 300W or 350W, the heat will be more concentrated on AMD GPU's, and thus will be more sensitive to thermal compound type and will be harder on said thermal compound (increased pump-out rate).
> 
> _🔥 laughs in 3090 FE and FTW3🔥 _


The thing is, memory and core are two different things. You can overclock the core, give it more voltage, raise its power limit, and all sorts of things. But with memory, the only thing you can do is increase the memory clock, which has very little to no effect on FPS. I think core temperature is way more important than memory temperature.


----------



## CS9K

cfranko said:


> The thing is memory and core are 2 different things. You can overclock the core, give it more voltage, give it more power limit and all sorts of things. But when you look at memory the only thing you can do is increase the memory clock which has very little to no effect on FPS. I think core temperature is way more important than memory.


This was true before GDDR6x came along.

Likewise, pump-out has always existed, but it's only recently that these super-spicy 350W+ GPU's are out in the wild with the masses, some GPU's of those with poor paste or cooling solutions included out of the box. It sucks, but it is what it is. Nvidia and their AIB's are just as guilty, _especially_ if we start to see GDDR6x fail due to heat, on stock power limits, in the coming months/years.


----------



## OC-NightHawk

CS9K said:


> This was true before GDDR6x came along.
> 
> Likewise, pump-out has always existed, but it's only recently that these super-spicy 350W+ GPU's are out in the wild with the masses, some GPU's of those with poor paste or cooling solutions included out of the box. It sucks, but it is what it is. Nvidia and their AIB's are just as guilty, _especially_ if we start to see GDDR6x fail due to heat, on stock power limits, in the coming months/years.


Case in point: Amazon's game is killing 3090 cards.


----------



## cfranko

@jonRock1992 Do you keep the electrical tape around the die after applying the LM or do you remove it?


----------



## jonRock1992

cfranko said:


> @jonRock1992 Do you keep the electrical tape around the die after applying the LM or do you remove it?


Apply the liquid electrical tape before applying the liquid metal, and leave it there so it catches the run-off if there is any. Just make sure you put a thin enough layer so it doesn't interfere with the GPU mount, but it's somewhat squishy so it shouldn't be much of an issue.


----------



## J7SC

@cfranko - I quickly scanned your OP and subsequent posts on deteriorating temps, but I'm not sure I got all the nuances and suggested fixes. Have you tried to (carefully) re-tighten the four main screws on the back holding the block onto the die? I ask because my custom 6900XT also has the Bykski block, and while temps are superb for now, I did make note of those little plastic washers, which after a few heat cycles may change mounting resistance (so far, so good).

Beyond that, I just updated both my 6900XT and 3090 Strix with Gelid OC paste on the die, Thermalright pads on the power stages and my new fav, thermal putty (little white balls in pic below) on VRAM and select other spots. Temps dropped by up to 45 C, depending on measure.

...I also see some 3090 VRAM temp 'hecklers' in the crowd  of this cabaret - feast your eyes on the GDDR6X temps after a 450W Port Royal run on my 3090 Strix OC.


----------



## geriatricpollywog

J7SC said:


> @cfranko - I quickly scanned your op and subsequent entries on deteriorating temps but am not sure if I got all the nuances and suggestions for fixes. Have you tried to (carefully) re-tighten the four main screws on the back holding the block on to the die ? I ask because my custom 6900XT also has the Bykski block and while temps are superb for now, I did make note of those little plastic washers which after a few heat cycles may change mounting resistance (so far, so good).
> 
> Beyond that, I just updated both my 6900XT and 3090 Strix with Gelid OC paste on the die, Thermalright pads on the power stages and my new fav, thermal putty (little white balls in pic below) on VRAM and select other spots. Temps dropped by up to 45 C, depending on measure.
> 
> ...I also see some 3090 VRAM temp 'hecklers' in the crowd  of this cabaret - feast your eyes on the GDDR6X temps after a 450W Port Royal run on my 3090 Strix OC.
> 
> View attachment 2527884
> 
> 
> View attachment 2527885
> 
> 
> View attachment 2527886
> 
> 
> View attachment 2527889
> 
> 
> View attachment 2527887


I don’t even have an active backplate on my 3090 K|NGP|N, but my Hydro Copper block keeps the memory on the back of the card nice and cool. The heat conducts out through the PCB. I could see how memory temps can be high on air-cooled 3090s, but it’s a non-issue with ANY waterblock.


----------



## Justye95

Hi guys, I have a liquid-cooled RX 6900 XT and I would like to push it to the maximum. Could you suggest some settings to apply in MorePowerTool? Obviously nothing risky please; I would like to take it to the limit, but safely.


----------



## J7SC

0451 said:


> I don’t even have an active backplate on my 3090 K|NGP|N but my Hydro Copper block keeps the memory on the back of the card nice and cool. The heat conducts out through the PCB. I could see how memory temps can be high on air-cooled 3090s, but it’s a non-issue with ANY waterblock.


...I was looking into active/water-cooled backplates for the 3090, but between the thermal putty and the addition of those giant heatsinks on both the 6900XT and 3090, there's no need at all for an active backplate  













Justye95 said:


> Hi guys, I have a liquid rx 6900 xt, I would like to push it to the maximum, could you suggest me some settings to apply to More power tool? obviously nothing risky please, I would like to take it to the limit but safely


...depends to some extent on your PCB and also the rest of your cooling setup re: load temps, but an MPT power limit of 360W before the power slider was a conservative value for me - no guarantees of course, and others will have a different opinion. You can try your favorite intensive game and/or benchmark and watch what 'Hotspot temp' does as you increase the PL via MPT.


----------



## EastCoast

CS9K said:


> In the case of your MX-4, it had begun to pump out, though it doesn't look _as_ bad as some of the dry, crusty pictures one can find around the internet. The picture below shows the places where heatsink compound had already thinned out.
> 
> View attachment 2527878
> 
> 
> That said, I personally do not know anything about Mastergel Maker, but I looked it up. Let's compare density of a few compounds, in g/cm3:
> 
> More-dense compounds:
> 3.76 - Kryonaut Extreme
> 3.73 - Gelid GC-Extreme
> 3.70 - Kryonaut
> 3.00? - Arctic Silver 5
> 3.00 - EK Ectotherm
> 
> Less-dense compounds:
> 2.60 - Mastergel Maker
> 2.49 - Noctua NT-H1
> 2.40 - Arctic MX-4
> 
> Density isn't always directly related to viscosity; that depends on the emulsion the compound is formulated with, which in turn determines how quickly it may pump out. Take Kryonaut for example: it's quite dense, but it pumps out at the speed of light on top of a bare die (yet ironically is one of the best pastes for heat-spreader applications). Likewise, Ectotherm is only moderately dense, but is more resistant to pump-out, thanks to higher viscosity, than most less-dense compounds.
> 
> TL;DR: I implore you to give Gelid GC-Extreme a try if you do pull your block apart again. Spread it on top of the die with the included spreader, and make as even of a layer as possible across the entire die. Try not to have any dips or lower areas across the die. The layer should be thick enough that you can't see the GPU die through it after it has been spread.
> 
> By the looks of it, mounting pressure with the air cooler was fine, the paste just started to pump out is all. That _may_ be what has happened with your water block and Mastergel, too.


I also found that a good pressure mount is equally important. It's a bit of an art, tightening the screws crisscross so that all four are snug at the same time. Then, when it comes to tightening them down, it's critical to count each quarter turn. Doing it this way gave me the best results. I have been able to get the same current and junction temps at idle, and only a difference of about 10 to 15 degrees during peak loads.


----------



## cfranko

J7SC said:


> @cfranko - I quickly scanned your op and subsequent entries on deteriorating temps but am not sure if I got all the nuances and suggestions for fixes. Have you tried to (carefully) re-tighten the four main screws on the back holding the block on to the die ? I ask because my custom 6900XT also has the Bykski block and while temps are superb for now, I did make note of those little plastic washers which after a few heat cycles may change mounting resistance (so far, so good).
> 
> Beyond that, I just updated both my 6900XT and 3090 Strix with Gelid OC paste on the die, Thermalright pads on the power stages and my new fav, thermal putty (little white balls in pic below) on VRAM and select other spots. Temps dropped by up to 45 C, depending on measure.
> 
> ...I also see some 3090 VRAM temp 'hecklers' in the crowd  of this cabaret - feast your eyes on the GDDR6X temps after a 450W Port Royal run on my 3090 Strix OC.
> 
> View attachment 2527884
> 
> 
> View attachment 2527885
> 
> 
> View attachment 2527886
> 
> 
> View attachment 2527889
> 
> 
> View attachment 2527887


I didn’t try tightening the spring screws. I will try that.


----------



## Kawaz

Anyone tried benching on a fresh Win11 install yet?
I have a case where my previous settings, which I used on the Win11 preview, don't even come close to passing on a fresh Win11 install.

I could do Time Spy at 2820MHz, and Superposition I could pass at 2920MHz (numbers from Wattman) with the Win11 preview upgraded from 10.

On a fresh install I'm as much as 70-100MHz down on stable clock. Time Spy barely does 2750, and Superposition crashes at 2850MHz.

I tried both 21.10.1 and 21.9.2, the latter of which I got all my good clocks with.

Thinking about doing a fresh Win10 install. But the crashes always result in a restart (at least in 3DMark), which makes it seem a bit like I might have a power issue of some kind too.

Any suggestions or things to try are very welcome. And of course, share your own experiences with benching the 6900XT on Win11.


----------



## Azazil1190

Hi guys!
Which drivers do you think are best for stable clocks (Win10, Toxic Extreme 6900XT)?
Yesterday I did some testing with 21.10.1 and it made my stable daily OC unstable, with a lot of restarts in a specific game (Days Gone).
I'm thinking of giving 21.9.2 a try today, or returning to the stable 21.8.2.


----------



## cfranko

Azazil1190 said:


> Hi guy's!
> Which drivers you think are the best for stable clocks (win10 toxic extreme 6900xt)
> Yesterday i made some test with 21.10.1 and they made my stable daily oc unstable with a lot of restarts to a specific game(days gone)
> Im thinking today give a try to 21.9.2. or to return to the stable 21.8.2


21.7.2 is pretty good


----------



## Azazil1190

cfranko said:


> 21.7.2 is pretty good


I will give a try .
Thanks !


----------



## LtMatt

Azazil1190 said:


> Hi guy's!
> Which drivers you think are the best for stable clocks (win10 toxic extreme 6900xt)
> Yesterday i made some test with 21.10.1 and they made my stable daily oc unstable with a lot of restarts to a specific game(days gone)
> Im thinking today give a try to 21.9.2. or to return to the stable 21.8.2


Using 21.10.1 here with the same gpu and playing Days Gone with no issues with my usual overclock. 

I would reduce your overclock or add a bit more voltage to gain stability in the newer driver.


----------



## deadfelllow

Azazil1190 said:


> Hi guy's!
> Which drivers you think are the best for stable clocks (win10 toxic extreme 6900xt)
> Yesterday i made some test with 21.10.1 and they made my stable daily oc unstable with a lot of restarts to a specific game(days gone)
> Im thinking today give a try to 21.9.2. or to return to the stable 21.8.2


Hey,

Is your card model number 11308-08-20G?

If it is, are you able to OC the core clock above 2800?


----------



## cfranko

@jonRock1992 If I use nail polish and liquid metal and vertical mount my GPU how likely is it for the liquid metal to drip downwards and short the gpu because it’s vertical mounted?


----------



## jonRock1992

Kawaz said:


> Anyone tried benching on a fresh win11 install yet?
> Have a case where my previous setting which I used on win11 preview don't even come close to passing on win11 fresh install.
> 
> I could do timespy at 2820mhz, and superposition I could pass at 2920mhz (numbers from wattman) with win11 preview upgraded from 10.
> 
> On a fresh install I'm as mutch as 70-100mhz down on stable clock. Timespy barely does 2750, and superpos crashes at 2850mhz.
> 
> I tried both 21.10.1 and 21.9.2. The latter of which I got all my good clocks with.
> 
> Thinking about doing a fresh 10 install. But the crashes always result in a restart (at least 3dmark) which makes it seem a bit like I maybe have a power issue of some kind too.
> 
> Any suggestions or things to try are very welcome. And ofc share your own experiences with benching 6900xt+win11


My performance is down with win 11 as well. Lost 300 GPU score points in Timespy and I lost 20MHz on my core clock. Also, my cpu performance is down with Win11. I did a clean install. Win 11 is just worse overall.


cfranko said:


> @jonRock1992 If I use nail polish and liquid metal and vertical mount my GPU how likely is it for the liquid metal to drip downwards and short the gpu because it’s vertical mounted?


You can use nail polish, if it's the kind that can withstand high heat. It just takes way longer to dry and it's not easy to remove. Liquid electrical tape is the better option imo. But if you're going vertical mount, apply the bare minimum amount that you need to cover the whole area of the die and the part of the water block that the die touches. You definitely have to tighten the screws up evenly as well. It shouldn't drip out if it's done right. I have liquid metal on top of my CPU IHS and it has never dripped out, even after transferring my PC to my new house lol.


----------



## cfranko

jonRock1992 said:


> My performance is down with win 11 as well. Lost 300 GPU score points in Timespy and I lost 20MHz on my core clock. Also, my cpu performance is down with Win11. I did a clean install. Win 11 is just worse overall.
> 
> You can use nail polish, if it's the kind that can withstand high heat. It just takes way longer to dry and it's not easy to remove. Liquid electrical tape is the better option imo. But if you're going vertical mount, apply the bare minimum amount that you need to cover the whole area of the die and the part of the water block that the die touches. You definitely have to tighten the screws up evenly as well. It shouldn't drip out if it's done right. I have liquid metal on top of my CPU IHS and it has never dripped out, even after transferring my PC to my new house lol.


Would I have to reapply the LM every 6 months or so? My block is nickel plated copper


----------



## jonRock1992

cfranko said:


> Would I have to reapply the LM every 6 months or so? My block is nickel plated copper


I'm sure you could get away without changing it for a year; you just gotta apply it correctly. Whenever you do maintenance on your loop, you could swap out the liquid metal. I think it's recommended to service a custom loop every year.


----------



## OC-NightHawk

jonRock1992 said:


> My performance is down with win 11 as well. Lost 300 GPU score points in Timespy and I lost 20MHz on my core clock. Also, my cpu performance is down with Win11. I did a clean install. Win 11 is just worse overall.


I wonder how much of that is the driver and how much of that is Windows 11. Will the older drivers work on Windows 11?


----------



## Azazil1190

deadfelllow said:


> Hey,
> 
> Is your card model number 11308-08-20G?
> 
> If it is, are you able to OC the core clock above 2800?


I need to check the number; the box is in the basement now. I will later.
My daily stable OC for gaming is min clock 2760 and max clock 2880,
+5% PL, 1075mV, and 2100MHz memory with fast timings. But I need to check again with the new drivers.
The above clocks were stable with 21.8.1.


----------



## Azazil1190

LtMatt said:


> Using 21.10.1 here with the same gpu and playing Days Gone with no issues with my usual overclock.
> 
> I would reduce your overclock or add a bit more voltage to gain stability in the newer driver.


I need to make some tests again, Matt, because something isn't right.


----------



## ZealotKi11er

Azazil1190 said:


> Need to check the number. the box is in the basement now.Later I will.
> My daily stable oc for gaming is *min clock 2760 and the max 2880
> +5 pl +1075v and +2100 fast timmings memory But i need to check again with the new deivers.
> The above clocks was stable with 21.8.1 .


That does not look right. If you are only increasing the PL by 5%, you are way, way power limited unless you play at like 1080p.


----------



## Azazil1190

ZealotKi11er said:


> That does not look right. If you are only increasing pl by 5%, you are way way power limited unless you play at like 1080p.


I'm on 4K, maybe I'm wrong.


----------



## Azazil1190

Those are my clocks in The Ascent at 1440p.
I will check again later, guys.


----------



## ZealotKi11er

That looks good. We just want to see what the clocks are in Time Spy.


----------



## jonRock1992

OC-NightHawk said:


> I wonder how much of that is the driver and how much of that is Windows 11. Will the older drivers work on Windows 11?


It's probably the driver's fault. AMD already acknowledged the CPU performance faults. AMD wasn't ready for Win11.


----------



## Azazil1190

I'm testing now at 4K, max PL









For Time Spy I need to lower the clocks to pass the test, and of course I need a colder room; the run was during summer.








I scored 23 531 in Time Spy


Intel Core i9-10900K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## 99belle99

jonRock1992 said:


> It's probably the driver's fault. AMD already acknowledged the CPU performance faults. AMD wasn't ready for Win11.


I don't know why you are blaming AMD; it's Microsoft's fault. I can guarantee they tested the new Intel chips thoroughly for a long time, and those aren't even released yet.


----------



## jonRock1992

99belle99 said:


> I don't know why you are blaming AMD it's Microsofts fault. I can guarantee they tested the new Intel chips thoroughly for a long time and they aren't even released yet.


Because AMD released drivers that have performance flaws. How would that be Microsoft's fault?


----------



## 99belle99

jonRock1992 said:


> Because AMD released drivers that have performance flaws. How would that be Microsoft's fault?


I'm talking about the CPU's.


----------



## jonRock1992

99belle99 said:


> I'm talking about the CPU's.


I'm assuming their chipset drivers are messed up. AMD is implementing the fix, so it's probably an issue on their end.


----------



## Azazil1190

In CP77 I have better clocks at 4K with FidelityFX CAS.
I will play a bit to check for stability, but I think I'm OK. Only Days Gone gives me a restart at the beginning of the game.


----------



## Azazil1190

deadfelllow said:


> Hey,
> 
> Is your card model number *11308-08-20G?
> 
> If it is are you able to oc core clock above 2800 ?*


Yeap it is just check it.
Does it matter the card model?


----------



## CS9K

Azazil1190 said:


> _For Time Spy I need to lower the clocks to pass the test_, and of course I need a colder room; the run was during summer.


Time Spy (not Extreme) will load down your GPU more than any game will. If you want absolute stability, test Time Spy until you find your max clock speed, set your global clock speed to that setting, and never look back.

From my own experimentation, and from others here in this thread who have confirmed it over time: you are _not_ actually stable at clock speeds above what Time Spy can run at, you just haven't crashed _*yet*_.
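
The "find your max Time Spy clock" procedure is essentially a bisection over clock offsets; a minimal sketch, where `run_time_spy` is a hypothetical stand-in for launching the real benchmark and judging pass/fail by hand:

```python
# Illustrative bisection for finding the highest stable clock.
# run_time_spy(mhz) is a hypothetical callback: set the clock in Wattman,
# run a full Time Spy loop, return True on a clean pass, False on a crash.

def find_max_stable_clock(low_mhz, high_mhz, run_time_spy, step=5):
    """Binary-search the highest clock (MHz) at which Time Spy passes."""
    while high_mhz - low_mhz > step:
        mid = (low_mhz + high_mhz) // 2
        if run_time_spy(mid):   # passed a full run at this clock
            low_mhz = mid       # stable so far; search higher
        else:                   # crashed or artifacted
            high_mhz = mid      # unstable; search lower
    return low_mhz

# Example with a fake stability boundary at 2820 MHz:
fake_bench = lambda mhz: mhz <= 2820
print(find_max_stable_clock(2700, 2950, fake_bench))
```

In practice each "call" is a manual benchmark run, so the bisection just saves runs compared to walking up in fixed 10MHz steps.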


----------



## lestatdk

I usually run at 100MHz or so below my max TS clock. Maybe I should up it a bit.


----------



## CS9K

lestatdk said:


> I usually run at 100 MHz or so below my max TS clock. maybe I should up it a bit


If you have the thermal headroom, go for it. Just keep an eye on _all_ of your temperatures, if you are on air.


----------



## CS9K

Azazil1190 said:


> View attachment 2527972
> At cp77 i have better clocks 4k fxcas
> I will play a bit to check for stability but i think im ok .only the days gone give me restart at the begging of the game.


Is that an LG CX/C1 I see? I LOVE mine and do not regret buying it as my gaming/media panel 💗


----------



## lawson67

cfranko said:


> @jonRock1992 If I use nail polish and liquid metal and vertical mount my GPU how likely is it for the liquid metal to drip downwards and short the gpu because it’s vertical mounted?


I've used nail polish every time I've used LM in the past and never had any problems. I use 2 coats, with 10 minutes to dry between coats. Also, if you ever want to remove the nail polish, it comes off very easily with nail polish remover: just tip a bit around the die, let it soak in for a minute, and wipe it off. LM also likes to stick to itself, so it doesn't run off too easily if you don't overload it.


----------



## Azazil1190

CS9K said:


> Time Spy (not-extreme) will load down your GPU more than any game will. If you want absolute stability, test Time Spy until you find your max clock speed, set your global clock speed to that setting, and never look back.
> 
> From my own experimentation, and from others here in this thread that have confirmed it over time: You are _not_ actually stable at clock speeds above what Time Spy can run at, you just haven't crashed _*yet*_.


I will try this way too on the weekend. Thanks for the info, bro.
The only testing I do is playing games for about 2 hours some days of the week. I don't have much time for long gaming sessions.


----------



## lestatdk

CS9K said:


> If you have the thermal headroom, go for it. Just keep an eye on _all_ of your temperatures, if you are on air.


It's on water, so plenty of headroom. Max junction temp is not even close to 50C when gaming. I'll give it a go.


----------



## Azazil1190

CS9K said:


> Is that an LG CX/C1 I see? I LOVE mine and do not regret buying it as my gaming/media panel 💗


No, but it's OLED too 
It's a Philips 803 OLED.
LG panel, of course.
Definitely I will go for an LG OLED soon 
Once you go OLED you never go back


----------



## lawson67

CS9K said:


> Is that an LG CX/C1 I see? I LOVE mine and do not regret buying it as my gaming/media panel 💗


I have a 65" OLED CX running off my RX 6900 XT over HDMI for gaming and absolutely love it


----------



## CS9K

lawson67 said:


> I have a 65" OLED CX running off my RX 6900 XT over HDMI for gaming and absolutely love it


Rock on! 48" LG CX, here. It is SO good


Azazil1190 said:


> Definitely i will go for lg oled soon
> Once you go oled you never go back


Ain't that the truth! 💗


----------



## Azazil1190

OK, something is wrong with Days Gone.
Even at stock clocks and default settings it gives me a hard restart.
It's the only game I have an issue with.


----------



## CS9K

Azazil1190 said:


> OK, something is wrong with Days Gone.
> Even at stock clocks and default settings it gives me a hard restart.
> It's the only game I have an issue with.


If nothing else is giving you trouble right now, then submit a bug ticket to AMD and hope that they fix it soon.


----------



## cfranko

I picked up a coolant temperature sensor today, my coolant sits at 37c at full load while the gpu temp sits at 60C, do I need to remount the gpu waterblock?
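For reference, a coolant 37C / GPU 60C reading is a ~23C coolant-to-GPU delta under load, which watercooling folklore treats as borderline. A minimal sketch of that rule-of-thumb check (the thresholds are community folklore, not official figures, and `mount_check` is just a made-up helper name):

```python
# Rule-of-thumb check on water-block mount quality from the
# coolant-to-GPU-temperature delta under load.
# Thresholds are community folklore, not official figures.
def mount_check(coolant_c: float, gpu_c: float) -> str:
    delta = gpu_c - coolant_c
    if delta > 25:
        return f"delta {delta:.0f}C: poor contact, consider a remount/repaste"
    if delta > 15:
        return f"delta {delta:.0f}C: on the high side, check mount pressure and paste"
    return f"delta {delta:.0f}C: looks fine"

# The numbers from the post: coolant 37C, GPU 60C.
print(mount_check(37, 60))  # delta 23C: on the high side
```

By this yardstick a remount might shave a few degrees, but it's not an obviously bad mount either; radiator capacity and fan speed matter just as much for the coolant side.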


----------



## Azazil1190

CS9K said:


> If nothing else is giving you trouble right now, then submit a bug ticket to AMD and hope that they fix it soon.


Yes, it's the only game so far. Even Kena, which I think is a super heavy game, is super stable 
Anyway


----------



## skline00

cfranko, what radiator setup do you have?


----------



## cfranko

skline00 said:


> cfranko, what radiator setup do you have?


2x360 radiators, both are Bykski brand and have 30mm thickness


----------



## Justye95

Guys, could any of you help me set up my reference 6900 XT on liquid? I would like to use MPT. I gladly accept private messages.


----------



## Scorpion49

Hello,

Been a long time since I frequented this site. I just got my hands on a 6900 XT reference card for MSRP, so I quickly swapped out my 3070 and went to town. Everything I'm playing now runs great EXCEPT Armored Warfare. The game is based on CryEngine, and the card acts like I'm watching a YouTube video: clocks bounce between 228MHz and 508MHz and my FPS is around 10. I've tried setting a custom clock speed in the Radeon software, but it only holds the minimum clocks I set for about 5 seconds and then it goes back to idle speed.

My main rig is a 5800X on an Asus TUF X570, 32GB Gskill 3200mhz, 2TB m.2 ssd and 850W EVGA power supply.

Anyone have a tip? I did a full DDU on it first and also swapped the card to my spare rig which is an 11700K and 16GB of 4000mhz ram and the results were identical. It seems like the drivers don't know I'm playing a game while with the 3070 I was getting 165fps capped.


----------



## EastCoast

OK guys enough tweaking...not enough gaming!!!
What kind of FPS are you getting with BF2042 Beta with 21.10.1 drivers at 1440p?


----------



## OC-NightHawk

EastCoast said:


> OK guys enough tweaking...not enough gaming!!!
> What kind of FPS are you getting with BF2042 Beta with 21.10.1 drivers at 1440p?


Not my type of game but I can get you framerates for games like Red Dead Redemption 2, Cyberpunk, Horizon Zero Dawn, Final Fantasy XV and Witcher 3 if you want.


----------



## Thanh Nguyen

EastCoast said:


> OK guys enough tweaking...not enough gaming!!!
> What kind of FPS are you getting with BF2042 Beta with 21.10.1 drivers at 1440p?


I see 133fps or so. Hard to tell without a built-in benchmark.


----------



## coelacanth

CS9K said:


> Rock on! 48" LG CX, here. It is SO good
> 
> Aint that the truth! 💗


I have the 48" LG CX as well, it's amazing for gaming. It's a little finicky with not coming back on when it's off sometimes. Disabling fast boot cured this 99% of the time. Some other random little things, but overall fantastic.


----------



## 99belle99

coelacanth said:


> I have the 48" LG CX as well, it's amazing for gaming. It's a little finicky with not coming back on when it's off sometimes. Disabling fast boot cured this 99% of the time. Some other random little things, but overall fantastic.


Are you using it as a monitor? I have a 49 inch LG 4k 120Hz Freesync Tv non OLED for gaming but I switch to a 1080p monitor for general use. The Tv is far too big to use as a monitor full time.


----------



## EastCoast

Thanh Nguyen said:


> I see 133fps or so. Hard to tell without a built in benchmark.


Sounds about right. I cap at 144 and the game runs ultra smooth.


----------



## EastCoast

FAR CRY 6


----------



## J7SC

CS9K said:


> Time Spy (not-extreme) will load down your GPU more than any game will. If you want absolute stability, test Time Spy until you find your max clock speed, set your global clock speed to that setting, and never look back.
> 
> From my own experimentation, and from others here in this thread that have confirmed it over time: You are _not_ actually stable at clock speeds above what Time Spy can run at, you just haven't crashed _*yet*_.


...what I find weird though is that if Time Spy (non-Ex) crashes in GT2 after passing GT1, I can run GT2 _at the same settings_ by itself repeatedly in custom mode w/o a problem. This applies not only to the 6900XT but also 3090, 2080 Ti etc so I wonder if it is a software bug. Still, as a true test of clock stability, Timespy (non-EX, GT1+GT2) certainly makes a nice hurdle to pass for a new setup. Apart from that, I also throw max-setting CP2077, FS2020 and Superposition8K at new builds...if it passes all of that, it's 'fine'...



CS9K said:


> Is that an LG CX/C1 I see? I LOVE mine and do not regret buying it as my gaming/media panel 💗


...I hear you...my updated home-office setup runs multiple machines, but the primary two are a Philips 40 inch VA panel (6900XT work-play) and the 48 inch C1 Oled (3090 play-work). They are to some extent interchangeable for most tasks. With the C1, it just depends how far away you are and what exact settings you have it on for text etc, though I prefer the Philips for longish documents.
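The "find the max Time Spy-stable clock, then set it globally" approach discussed above boils down to stepping the max clock down until the stress run stops crashing. A minimal sketch of that loop, where `is_stable` is a stand-in for launching whatever benchmark you trust (a Time Spy run, Superposition, etc.) at a given max clock and reporting whether it survived; the numbers are just examples:

```python
# Walk a max clock down until the stress test passes.
# is_stable(mhz) should run your benchmark of choice with that
# max clock applied and return True if it completed without a crash.
def find_max_stable(start_mhz: int, step: int, is_stable) -> int:
    mhz = start_mhz
    while not is_stable(mhz):
        mhz -= step  # crashed: back the clock off and retry
    return mhz

# Example with a fake tester that "crashes" above 2580 MHz:
print(find_max_stable(2700, 25, lambda mhz: mhz <= 2580))  # 2575
```

In practice you'd apply the clock by hand (or via MPT/Radeon software) between runs; the point is just the step-down discipline, and as noted above, passing GT1+GT2 back to back is the bar, not a single subtest.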


----------



## Thanh Nguyen

EastCoast said:


>


Is that bf 2042 benchmark or what?


----------



## EastCoast

Thanh Nguyen said:


> Is that bf 2042 benchmark or what?


That's Far Cry 6


----------



## CS9K

J7SC said:


> ...what I find weird though is that if Time Spy (non-Ex) crashes in GT2 after passing GT1, I can run GT2 _at the same settings_ by itself repeatedly in custom mode w/o a problem. This applies not only to the 6900XT but also 3090, 2080 Ti etc so I wonder if it is a software bug.


_That_ is an interesting observation... one I had not made, myself. Hmm


----------



## EastCoast

I tried again with the HD Texture Pack and this is what I got in FC6


----------



## Scorpion667

Does the boost algo lower clocks at any temperature between 50 and 60C? I always bench at 100% fan speed, so I forgot to check.


----------



## coelacanth

99belle99 said:


> Are you using it as a monitor? I have a 49 inch LG 4k 120Hz Freesync Tv non OLED for gaming but I switch to a 1080p monitor for general use. The Tv is far too big to use as a monitor full time.


Yes using as a monitor full time but my desk is pretty deep. I also have the OLED light and brightness turned way way down. The amount of light it can emit is crazy. I have a separate OLED light and brightness gaming profile that turns the OLED light and brightness up.


----------



## kairi_zeroblade

EastCoast said:


> I tried again with the HD Texture Pack and this is what I got in FC6


So the HD texture pack itself is already beefed up in terms of detail. Do you find the difference between High and Ultra almost identical (or barely noticeable)?


----------



## cfranko

Battlefield 2042 crashes as soon as I apply any kind of overclock.


----------



## Justye95

cfranko said:


> Battlefield 2042 crashes as soon as I apply any kind of overclock.


Same here


----------



## tolis626

cfranko said:


> Battlefield 2042 crashes as soon as I apply any kind of overclock.


Battlefield 2042 crashes. There, fixed it for you.

The beta is a god damn mess. It's laggy, it performs poorly, it has tons of bugs and I am kind of disappointed by some aspects of the graphics. The foliage looks terrible, the AA is subpar. I hope these are things that get fixed for the final game, drawing conclusions from the beta isn't smart. I do have my concerns, though. Last BF game I played was BF4, and that took like 6 months after launch to become playable.


----------



## thomasck

Just played around 1 hour of BF 2042 with no problems. Graphics on ultra, 1440p, borderless. Card at 2400-2500, stock voltage. Most of the time around 110-125fps, and 144 in closed areas. For someone who has been playing COD WZ for a while because I didn't have any other option (apart from Enlisted), BF 2042 is a blessing.


----------



## Simzak

@EastCoast Got my Gaming Z Trio 6900 XT. Would this be considered a good result for stock power limit? My junction temp in time spy hits 95c at 300w with stock power limit so I haven't raised it. Clock speeds during timespy are around 2480 but actual games like MW at 180% render resolution from 1080p it sits higher at around 2740 even at max gpu usage. Can't even get any game at 1080p to get close to maxing this card out lol.


----------



## lestatdk

Not bad. Those numbers match my Gaming X Trio with a raised power limit. When my card was on air it could hit the 118C shutdown temp in Time Spy, so you've still got some margin to go.


----------



## LtMatt

EastCoast said:


> I tried again with the HD Texture Pack and this is what I got in FC6


Here's my Far Cry 6 benchmarks, recorded and uploaded to the Tube. 

A tuned 5950X with SAM can help alleviate some of the game engine bottleneck at 1440P. 
1440P Ultra Settings + HD Textures + RT





2160P Ultra Settings + HD Textures + RT


----------



## kairi_zeroblade

LtMatt said:


> Here's my Far Cry 6 benchmarks, recorded and uploaded to the Tube.
> 
> A tuned 5950X with SAM can help alleviate some of the game engine bottleneck at 1440P.
> 1440P Ultra Settings + HD Textures + RT
> 
> 
> 
> 
> 
> 2160P Ultra Settings + HD Textures + RT


Nice. What did you use to capture that?

Also, are the micro-stutters at the beginning of the benchmark normal? CPU usage and frequency scaling seem through the roof here and there.


----------



## gtz

Simzak said:


> @EastCoast Got my Gaming Z Trio 6900 XT. Would this be considered a good result for stock power limit? My junction temp in time spy hits 95c at 300w with stock power limit so I haven't raised it. Clock speeds during timespy are around 2480 but actual games like MW at 180% render resolution from 1080p it sits higher at around 2740 even at max gpu usage. Can't even get any game at 1080p to get close to maxing this card out lol.
> 
> View attachment 2528073
> 
> View attachment 2528074
> 
> View attachment 2528072


That's actually really good for 300 watts. I get 22500-22700 at 310-320w. I am severely temp limited, just waiting on my block to crank it up to 400w.


----------



## cfranko

Simzak said:


> @EastCoast Got my Gaming Z Trio 6900 XT. Would this be considered a good result for stock power limit? My junction temp in time spy hits 95c at 300w with stock power limit so I haven't raised it. Clock speeds during timespy are around 2480 but actual games like MW at 180% render resolution from 1080p it sits higher at around 2740 even at max gpu usage. Can't even get any game at 1080p to get close to maxing this card out lol.
> 
> View attachment 2528073
> 
> View attachment 2528074
> 
> View attachment 2528072


I get that TimeSpy score at 400 watts


----------



## kairi_zeroblade

cfranko said:


> I get that TimeSpy score at 400 watts


Not all cards are created equal (especially the silicon). Either you buy a new one (and risk it again) or stick with it. TBH, if you're running the e-peen race, then yeah, you should probably buy a newer one that can perform better. If you're using it for normal household gaming and streaming, it shouldn't be bad, given that you watercooled it. Personally, I'd be thankful I was given a chance to grab one at all, especially at times like these (nothing is sure or fixed, market prices are going crazy, people are aggravating others to bleed them of hard-earned money). It's a matter of "acceptance" and being happy with the card. I can see you are somehow "regretting" it, or at the "buyer's remorse" stage of a purchase.


----------



## 99belle99

cfranko said:


> I get that TimeSpy score at 400 watts


I get that score too with MPT and pushing the limits. But I do have a reference card with the stock cooler.


----------



## EastCoast

Scorpion667 said:


> Does the boost algo lower clocks at any temperature value between 50 and 60c? I always bench at 100% fan speed so i forgot to check


At 60C current temp it does still boost. Not sure what will happen if you lower the fan speed, because someone out there hates fan noise. 




kairi_zeroblade said:


> so the HD texture pack itself is already beefed up in terms of details, do you find the difference between High and Ultra almost identical (or barely indifferent) in any way?


That does make sense. High everything should be just as good. HD Textures are subtle though. 



Simzak said:


> @EastCoast Got my Gaming Z Trio 6900 XT. Would this be considered a good result for stock power limit? My junction temp in time spy hits 95c at 300w with stock power limit so I haven't raised it. Clock speeds during timespy are around 2480 but actual games like MW at 180% render resolution from 1080p it sits higher at around 2740 even at max gpu usage. Can't even get any game at 1080p to get close to maxing this card out lol.
> 
> View attachment 2528073
> 
> View attachment 2528074
> 
> View attachment 2528072


Excellent! I told you there was no problem with that card. 45/67C in MW2019 are pretty good temps, to be honest. 
Word of advice: don't go messing with the card. Those temps are great. I wouldn't touch the thermal paste/pads until it's actually time to do so (i.e. temps are higher than normal), which won't be for a long time.


----------



## tolis626

thomasck said:


> Just played around 1 hour of bf 2042 and no probs. Graphics on ultra, 1440p, borderless. Card at 2400-2500 stock voltage. Most of the time around 110-125fps and 144 in closed areas. For someone that has been playing cod wz for a while because did not have any option (apart from Enlisted) bf 2042 is a blessing.


Really? Mine was a mess: micro-stutters followed by mega-stutters, poor performance (or so it seemed due to the stuttering; FPS was in the ~120 range at 1440p maxed-out settings), poor audio, a lot of bugs (clipping through surfaces, glitchy animations, etc.) and poor network performance (although that last one is probably my atrociously crappy connection at work). However, the game looks good, and I can't wait to see it in a polished state. I love Battlefield, but I just can't play historical shooters any more. I can't find any interest in playing with the Gewehr and MP40 and Thompson any more, so BF1 and BFV were dead to me before they even came out. And I too have been stuck with Warzone. And there, where Infinity Ward took a lot of great steps forward with MW2019, along came Treyarch (or whatever studio it was), taking a lot of steps back with Cold War. It was sad to see a regression in gameplay quality, graphics (IMO), animations, etc. Not to mention the atrocious weapon balance after the CW integration and the out-of-control cheater situation. And now they want to also chuck Vanguard in there. Nope, I'm out. I'll be playing BF2042 even if it sucks at launch.


----------



## kairi_zeroblade

EastCoast said:


> That does make sense. High everything should be just as good. HD Textures are subtle though.


I see. I just saw a review of FC6 using a potato i5 4460 + GTX 960 4GB, and I found medium detail good enough for 1080p; the system was performing well, with playable framerates of 45+ (his purpose was to see whether the minimum requirements were really up to the game's minimum standard). That is why I find it weird for a heavily tuned/overclocked system to perform badly.


----------



## EastCoast

tolis626 said:


> Really? Mine was a mess. Micro-stutters followed by mega-stutters. poor performance (or so it seemed due to the stuttering, FPS was in the ~120fps range at 1440p maxed out settings), poor audio, a lot of bugs (clipping through surfaces, glitchy animations etc) and poor network performance (although, that last one is probably my atrociously crappy connection at work). However, the game looks good, I can't wait to see it in a polished state. I love Battlefield, but I just can't play historic shooters any more. I can't find any interest in playing with the Gewehr and MP40 and Thompson any more. So BF1 and BFV were dead to me before they even came out. And I too have been stuck with Warzone. And there, where Infinity Ward took a lot of great steps forward with MW2019, there came Treyarch (or whatever studio it was), taking a lot of steps back with Cold War. It was sad to see a regression in gameplay quality, graphics (IMO), animations etc. Not to mention the atrocious weapon balance after the CW integration and the out of control cheater situation. And now they want to also chuck Vanguard in there. Nope, I'm out. I'll be playing BF2042 even if it sucks at launch.


I have a sliver of hope for MW2022. I guess we are among the few who really loved MW2019.
MW2019 made me change up my hardware to play it, like BF2 did back in the day.

I mean the audio, the player animations, the visuals, etc. actually drew me into the immersion of the game. Very few first-person shooters did that for me. Yes, it wasn't a "traditional run-and-gun COD", but how the game was developed attracted me to playing it, and I had more fun than the Black Ops series ever gave me. Another thing is that I thoroughly enjoyed how they kept footsteps audible, and making Dead Silence a perk was the icing on the cake. It made cheating in the game that much more frustrating, keeping newbs in check and stopping them from getting nukes every match. Not until recently did I even understand how cheating works in COD.

Most importantly, it was easy on AMD hardware. I could lock FPS at my refresh rate without much hassle.

I watched a YouTube video showing the amount of control cheaters actually have over COD. They start off with a new account and unlock everything, even the skins and weapons you have to pay for. They can instantly get to 100 on the battle pass without paying for it, unlocking attachments and everything else needed to play, among other things. I felt those were the most important points. So if you think some of those YouTubers paid $90+ to unlock those battle pass skins and weapons, think again.

The best way to find cheaters is to look at their rank and see what kind of skins they have, in particular the NecroKing bundle. It's a favorite among them all, but when you see they have unlocked it to Gold and Diamond (which is very rare) on a new account with only a rank of 40, it should draw your attention.


----------



## tolis626

EastCoast said:


> I have a sliver of hope for MW2022. I guess we are of a few who really loved MW2019.
> MW2019 made me change up my hardware to play it like BF2 did back in the day.
> 
> I mean the audio, the player animation, the visuals, etc actually invested me into the immersion of the game. Very few 1st person shooter games did that for me. Yes, it wasn't a "traditional run/gun cod". But how the game was developed attracted me into playing it and having more fun then Black Op series ever did. Another thing is that I thoroughly enjoyed how they keep the footsteps audible. And, making dead silence a perk was the icing on the cake. It made cheating in the game that much more frustrating keeping newbs in check from getting nukes every match. Not until recently did I even understand how cheating works in COD.
> 
> Most importantly it was easy on AMD hardware. I could lock fps at my refresh rate without mush hassle.
> 
> I watched a youtube video showing the amount of control cheaters actually have over COD. They start off with a new account and unlock everything. Even the skins and weapons you have to pay for. They can instantly get to 100 on the battle pass without paying for it. Unlocking attachments and everything else needed to play. Among other things. I felt those were most important. So, if you think some of those youtubers paid $90+ to unlock those battlepass skins, weapons think again.
> 
> The best way to find cheaters is to look at their rank and see what kind of skins they have. In particular The NecroKing bundle. It's a favorite among all. But when you see they have unlocked it to Gold and Diamond (which is very rare). Should draw your attention from a player using a new account with only a rank of 40.


I can't understand how anyone would prefer BOCW over MW2019. I honestly can't. MW2019 is simply a better game, in every possible way. And it was actually a very good shooter all things considered, not just good for a CoD game; that's saying something, coming from someone who used to trash CoD. 

As for cheating... It's horrible. They can mess with pretty much every aspect of the game. And it's not like they're cheating but being careful about it. No no no, they will teleport around the map and mow you down from 500m away, full auto, with hipfire. It's nuts, and it makes the game unplayable sometimes. What's even worse is that any faith the community might have had is lost. Whenever I see someone being slightly good, I have to check if they're cheating. I have been called a cheater on death comms so many times that it's not even funny. And I find people who think the new anti-cheat is gonna magically fix everything very naive.


----------



## LtMatt

kairi_zeroblade said:


> Nice. What did you use to capture that?
> 
> Also, are the micro-stutters at the beginning of the benchmark normal? CPU usage and frequency scaling seem through the roof here and there.


Yes that is a bit of a stutter, although GPU usage does not drop, near the start of the bench. Think it's just a benchmark glitch as the game seems very smooth in the open world.


----------



## LtMatt

tolis626 said:


> I can't understand how anyone would prefer BOCW over MW2019. I honestly can't. It's simply a better game. In every possible way. And it was actually a very good shooter all things considered, not just good for a CoD game. That coming from someone who used to say CoD is something.
> 
> As for cheating... It's horrible. They can mess with pretty much every aspect of the game. And it's not like they're cheating but are being careful about it. No no no, they will teleport around the map and mow you down from 500m away full auto with hipfire. It's nuts, makes the game unplayable sometimes. What's even worse, is that any faith the community might have had is lost. Whenever I see someone being slightly good, I have to check if they're cheating. I have been called a cheater on deathcomms so many times that it's not even funny. And I find people thinking that the new anti-cheat is gonna magically fix everything very naive.


I liked both.

At first I loved MW and hated Cold War. However, in the end I ended up preferring Cold War for some reason.

Both are pretty similar though for the most part. 

They also love RDNA2, and it wipes the floor with Ampere in these games, which always warms the cockles of one's heart.


----------



## EastCoast

kairi_zeroblade said:


> I see..I just saw a review for FC6 using a potato I5 4460+gtx960 4gb and I find the medium detail good enough for 1080p and the system is performing well with playable framerates..45++ (well his purpose was to see if the minimum requirements were really upto the game's minimum standard..) that is why I find it weird for a heavily tuned/overclocked system to perform bad..


I'm sure they will update the game a few times to address any inconsistencies. I only wish they'd done that for Watch Dogs.
You gave me a good idea: I'll set it to High from now on with the HD Texture Pack. Thanks for that. I haven't gotten far in the game yet, though. I haven't played FC since FC2, which I never finished, BTW. Those checkpoints were brutally repetitive for me back in the day. LOL.




tolis626 said:


> I can't understand how anyone would prefer BOCW over MW2019. I honestly can't. It's simply a better game. In every possible way. And it was actually a very good shooter all things considered, not just good for a CoD game. That coming from someone who used to say CoD is something.
> 
> As for cheating... It's horrible. They can mess with pretty much every aspect of the game. And it's not like they're cheating but are being careful about it. No no no, they will teleport around the map and mow you down from 500m away full auto with hipfire. It's nuts, makes the game unplayable sometimes. What's even worse, is that any faith the community might have had is lost. Whenever I see someone being slightly good, I have to check if they're cheating. I have been called a cheater on deathcomms so many times that it's not even funny. And I find people thinking that the new anti-cheat is gonna magically fix everything very naive.


To be honest, I have a much higher K/D in BOCW than I do in MW2019. Although I love every map in MW2019, there are a few maps in CW that I excel in.
As for the cheating, it's a real shame. It wasn't that bad in BO4, and dare I say not as bad in MW2019, but in CW it's atrocious, that's for sure. In Warzone you ask, "OK guys... who's not cheating? Because we aren't going to be much help in this squad!!"  To be honest, I've seen the difference between having no cheaters in our squad vs having at least one. It just takes one cheater who can carry you; it takes just one to help equalize the match and make it enjoyable.

Oh yeah here is the Ice Drake everyone loves to sport, even when it's not possible to get it at this point in the game.




Almost impossible to get with a new account but you will see it from some players.


----------



## OC-NightHawk

kairi_zeroblade said:


> not all cards are really created equal (specially the silicon)..either you buy a new one (risk it again) or stick with it..TBH, if you're running the E-Peen race yeah, you probably should buy a newer one that can perform better, else if you're using it for normal household gaming and streaming, it shouldn't be bad given that you watercooled it..personally, I'd be thankful since I was given a chance to grab 1, specially at times like these (nothing is sure/fixed, market prices going crazy, people aggravating others to bleed them off hard earned money), its a matter of "acceptance" and being happy with the card..I can see you are somehow "regretting" or you're at the "buyer's remorse" stage of a purchase..


I agree. I use benchmarking to tune my machine for games, making sure I get the optimal stable gameplay experience. I'm hyper-competitive, so sometimes I get a little carried away, but at the end of the day we just need enough to play our games. 20000 in Time Spy is sufficient for playing pretty much any game at a good detail level at 1440p. I personally wouldn't bother buying another card for that one machine unless you were going to put the other one to work in other ways.


----------



## CS9K

For those having issues with the BF2042 open beta, you're not the only ones. Jayz2Cents on youtube had stuttering issues on both AMD and Nvidia, and from others I've seen that play with both brands' cards, the open beta is a _hot_ mess. As beta programs tend to be.


----------



## Azazil1190

I tried BF2042 on my son's PC, which has a 5950X and 3080 Ti.
I can't play with all those stutters.
OK, it's a beta version, but the graphics are amazing to my eyes.
I prefer COD Warzone too


----------



## EastCoast

CS9K said:


> For those having issues with the BF2042 open beta, you're not the only ones. Jayz2Cents on youtube had stuttering issues on both AMD and Nvidia, and from others I've seen that play with both brands' cards, the open beta is a _hot_ mess. As beta programs tend to be.


please link me to that


----------



## tolis626

LtMatt said:


> I liked both.
> 
> At first I loved MW and hated Cold War. However, in the end I ended up preferring Cold War for some reason.
> 
> Both are pretty similar though for the most part.
> 
> They also love RDNA2 and it wipes the floor with Ampere in these games which always warms the cockles of ones heart.


I still hate Cold War. The movement, the animations, the weapons, it all seems backwards to me. Like it's an older game. It feels clunky, unlike MW's excellent animations and gunplay.

As for liking RDNA, MW does, and WZ did too. But after the "new" Verdansk, performance went down the drain. I used to get 120-150fps on my 5700 XT when Warzone first came out, but by the time I replaced the card it was getting like 80-100. Same card, same clocks, same CPU, same settings; performance just went down after the CW integration completed. And performance in CW was much worse for me than in MW. MW was indeed very friendly to AMD, though. I was getting mostly the same performance as (or sometimes even better than) a friend of mine who had a 2080 and a 9900K. I kept trolling him until he got a 3080. Little did he know that a few days later, I was expecting the 6900XT. I'm still trolling him. 


EastCoast said:


> To be honest I have a much higher k/d in BOCW then I do in MW2019. Although I love every map in MW2019 there are a few maps in CW that I excel in.
> As for the cheating it's a real shame. It wasn't that bad in BO4 and dare I say not as bad in MW2019. But in CW it's atrocious that's for sure. In warzone you ask, "Ok guys...who's not cheating because we aren't going to be much help in this squad!!"  To be honest I've seen the difference between having no cheaters on our squad vs having at least 1 cheater. It just takes 1 cheater who can carry you. It takes just one to help equalize the match to make it enjoyable.
> 
> Oh yeah here is the Ice Drake everyone loves to sport, even when it's not possible to get it at this point in the game.
> 
> 
> 
> 
> Almost impossible to get with a new account but you will see it from some players.


Yeah, I know the skin. The state of the game is just sad at this point. It started out great, but with cheaters running rampant and with the balance being out of whack since the CW integration (probably to incentivize people to buy CW to unlock its guns), it's dead to me. BF 2042 can't come soon enough. I just hope it does so in a good state.


Azazil1190 said:


> I tried BF2042 on my son's PC, which has a 5950X and 3080 Ti.
> I can't play with all those stutters.
> OK, it's a beta version, but the graphics are amazing to my eyes.
> I prefer COD Warzone too


Terrible, isn't it? I was expecting some glitches/bugs, but not THAT.

Also, is it just me or does the foliage and AA look bad in 2042?


EastCoast said:


> please link me to that


----------



## ZealotKi11er

Finally broke 25K in TS. AMD Radeon RX 6900 XT video card benchmark result - Intel Core i9-9900K Processor,ASUSTeK COMPUTER INC. ROG MAXIMUS XI HERO (WI-FI) (3dmark.com) 
My problem now is that XTXH does not seem to help at all with Fire Strike. I can run Time Spy at 2830MHz, but for FS Extreme I need to go under 2800MHz. It was the opposite with the non-XTXH XTX, which did 2600MHz in TS but 2750MHz in FS Extreme.


----------



## deadfelllow

Hey guys,

Today i just got my GPU which is *Sapphire 6900 XT Toxic Gaming OC Extreme Edition 11308-08-20G* .( XTXH)

With the help of @cfranko (mpt and radeon tuning)

I achieved 25125 Graphics Score in Timespy. (29th)

2750-2850, 2168 fast timings









I scored 20 110 in Time Spy: AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10 (www.3dmark.com)


----------



## gtz

deadfelllow said:


> Hey guys,
> 
> Today i just got my GPU which is *Sapphire 6900 XT Toxic Gaming OC Extreme Edition 11308-08-20G* .( XTXH)
> 
> With the help of @cfranko (mpt and radeon tuning)
> 
> I achieved 25125 Graphics Score in Timespy. (29th)
> 
> 2750-2850, 2168 fast timings
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 20 110 in Time Spy: AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> 
> 
> 
> View attachment 2528119


Damn, congrats!!!! 25k


----------



## tolis626

Hahahaha just look at this guy. There are people here struggling to break a 25k TS graphics score, missing it by the narrowest of margins, pulling their hair out, modding stuff, and then there comes @deadfelllow out of the blue, casually dropping a 25k out of nowhere. Congrats dude!


----------



## LtMatt

Lol, spare a thought for Jon. Some people have gone bald trying to reach 25K!


----------



## cfranko

Is the Thermalright Silver king or Conductonaut better?


----------



## deadfelllow

tolis626 said:


> Hahahaha just look at this guy. There are people here struggling to break a 25k TS graphics score, missing it by the narrowest of margins, pulling their hair out, modding stuff, and then there comes @deadfelllow out of the blue, casually dropping a 25k out of nowhere. Congrats dude!


Also I tried BF5 (capped FPS) for some benching & core clock OC'ing. It was stable at a 2950 core clock for like 10 minutes; here is the screenshot.



Also I tried a 3000-3100 core clock and it crashed after 10 secs. In GPU-Z my core clock boosted to 3113 lmao.


----------



## lawson67

cfranko said:


> Is the Thermalright Silver king or Conductonaut better?


They're both good I expect, but I have always used Conductonaut with no problems at all. Just make sure to protect the semiconductors around the GPU die, whether that be with nail polish or tape.


----------



## cfranko

lawson67 said:


> They're both good I expect, but I have always used Conductonaut with no problems at all. Just make sure to protect the semiconductors around the GPU die, whether that be with nail polish or tape.


I read that some nail polish can burn when applied on the conductors, is that true?


----------



## kairi_zeroblade

cfranko said:


> I read that some nail polish can burn when applied on the conductors, is that true?


Yes, you need a specific formula of nail polish.. (I forgot the chemical substance you should avoid when checking nail polishes out)

edit: it's toluene and any benzene derivative..


----------



## ZealotKi11er

Is there any tuning guide for 5950x for TimeSpy? Getting very weak scores at stock.


----------



## kairi_zeroblade

ZealotKi11er said:


> Is there any tuning guide for 5950x for TimeSpy? Getting very weak scores at stock.


Disable SMT.. that should get you an easy 18K CPU score.


----------



## jonRock1992

LtMatt said:


> Lol, spare a thought for Jon. Some people have gone bald trying to reach 25K!


Yeah I gave up on that lol. The only thing that I'm sure will get it there is an EVC2SX mod, but I'm not gonna mess with that.


----------



## ZealotKi11er

We need 26K Club.
I am sure it can happen once winter hits and I can run the card at -20C.


----------



## cfranko

ZealotKi11er said:


> We need 26K Club.
> I am sure it can happen once winter hits and I can run the card at -20C.


How would -20c help though? Afaik the limit with these cards right now is the voltage and silicon lottery, it isn't temperature.


----------



## ZealotKi11er

cfranko said:


> How would -20c help though? Afaik the limit with these cards right now is the voltage and silicon lottery, it isn't temperature.


It should definitely help. Using AC air I was able to lower the hotspot temp by 20C and could achieve more stable clocks, 2820MHz with this card. If I can lower it 20-40C more I am sure it will help. Yes, silicon lottery is important.


----------



## Justye95

I want to thank @cfranko for helping me with my liquid rx 6900 xt reference to push it to the max. I achieved a good result thanks to him


----------



## Azazil1190

What I get in Far Cry 6 with my daily gaming OC profile (I don't use MPT for daily use).
I think these are great results from the 6900.
1440p, HD textures


----------



## Azazil1190

And 4k hd textures


----------



## ZealotKi11er

25K is too common now. Join the 26K club: I scored 21 891 in Time Spy


----------



## gtz

ZealotKi11er said:


> 25K is too common now. Join the 26K club: I scored 21 891 in Time Spy


Now you are just showing off.


----------



## ZealotKi11er

gtz said:


> Now you are just showing off.


fun and games.


----------



## OC-NightHawk

deadfelllow said:


> Also I tried BF5 (capped FPS) for some benching & core clock OC'ing. It was stable at a 2950 core clock for like 10 minutes; here is the screenshot.
> 
> 
> 
> Also I tried a 3000-3100 core clock and it crashed after 10 secs. In GPU-Z my core clock boosted to 3113 lmao.
> View attachment 2528167
> 
> View attachment 2528168


That is amazing. What are your MPT settings? I would like to try and duplicate it, if you don't mind. 😊


----------



## jfrob75

I am amazed at the clocks I am able to run for FC6. Below is my FC6 benchmark result with the max GPU clock set to 2925MHz and the min GPU clock set to 2800MHz. Power was set to 650 watts via MPT. This game does not draw that much power, which is surprising at these clocks. Monitoring with HWiNFO64, I saw a max GPU PPT of 325 watts.


----------



## ZealotKi11er

Anyone can help with TimeSpy CPU score with 5950x?
I tried all core OC @4.7GHz
PBO OC
SMT Disabled
RAM is at 3733MHz CL16/ FCLK 1866MHz
Still only getting 16.x


----------



## Azazil1190

ZealotKi11er said:


> Anyone can help with TimeSpy CPU score with 5950x?
> I tried all core OC @4.7GHz
> PBO OC
> SMT Disabled
> RAM is at 3733MHz CL16/ FCLK 1866MHz
> Still only getting 16.x


Which mobo do you have?
And did you uv the cpu?
Pbo setting's?


----------



## jfrob75

ZealotKi11er said:


> Anyone can help with TimeSpy CPU score with 5950x?
> I tried all core OC @4.7GHz
> PBO OC
> SMT Disabled
> RAM is at 3733MHz CL16/ FCLK 1866MHz
> Still only getting 16.x


My CPU setup for Time Spy is SMT off, per-CCX @ 48.25x on CCD1 and CCD2 @ 1.365V, PBO disabled, and CPU volts on Auto. Most of the time this will produce results above 18K for the CPU test.


----------



## ZealotKi11er

Azazil1190 said:


> Which mobo do you have?
> And did you uv the cpu?
> Pbo setting's?


I have not tried curve optimizer / UV
PBO is just enabled, no other settings changed
ASUS X570 Hero


----------



## Azazil1190

ZealotKi11er said:


> I have not tried curve optimizer / UV
> PBO is just enabled, no other settings changed
> ASUS X570 Hero


OK, I can send you my BIOS settings; I have the same mobo on my second PC.
But give curve optimizer a try.


----------



## J7SC

ZealotKi11er said:


> Anyone can help with TimeSpy CPU score with 5950x?
> I tried all core OC @4.7GHz
> PBO OC
> SMT Disabled
> RAM is at 3733MHz CL16/ FCLK 1866MHz
> Still only getting 16.x


...'grats on the 26K 'club' 

As to 5950X, I wonder if it could be RAM settings / timings ? I can hit over 17k CPU in Timespy _with SMT on_ and no per-CCD or curve optimizer magic, using an Asus Dark Hero w/ dynamicOC at 4750 and 4x8 GSkill (single rank B-die) at 1900/3800 14-14-14...Zen timings sheet below left (ignore the one on the right, haven't had time to optimize that yet)


----------



## Skinnered

jfrob75 said:


> I am amazed at the clocks I am able to run for FC6. Below is my FC6 benchmark result with Max GPU clk. set to 2925MHz Min GPU clk. set to 2800MHz. Power set 650 watts via MPT. This game does not draw that much power, which is surprising at these clocks. Monitoring with HWinfo64, I saw max GPU ppt of 325 watts.
> View attachment 2528291


This without RT and with fsr I suppose?


----------



## OC-NightHawk

ZealotKi11er said:


> 25K is too common now. Join the 26K club: I scored 21 891 in Time Spy


Were you using fast memory timings with that speed or regular memory timings?


----------



## ZealotKi11er

OC-NightHawk said:


> Were you using fast memory timings with that speed or regular memory timings?


Fast memory timings. Memory did not make much difference; I was just increasing it for every last point. Core is what gives all the points: at the end, 10MHz ≈ 90 points.
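Quick sanity check on that figure; the ~2800MHz core / ~25000 graphics score baseline here is assumed purely for illustration, not from a specific run:

```python
# Sanity check of the "10MHz ~ 90 points" core scaling claim.
# Baseline numbers are assumed for illustration: ~2800MHz core, ~25000 graphics score.
base_clock_mhz = 2800
base_score = 25000

clock_gain_pct = 10 / base_clock_mhz * 100            # a +10MHz bump
expected_points = base_score * clock_gain_pct / 100   # if score scales 1:1 with core clock

print(f"+10MHz is a {clock_gain_pct:.2f}% clock increase")
print(f"Perfect scaling would add about {expected_points:.0f} points")
```

So at that level, +10MHz is about a 0.36% clock bump, worth roughly 89 points if the score tracks core clock 1:1, i.e. near-linear scaling.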


----------



## OC-NightHawk

ZealotKi11er said:


> Fast memory timings. Memory did not make much difference; I was just increasing it for every last point. Core is what gives all the points: at the end, 10MHz ≈ 90 points.


Can you share a screenshot of your MPT settings and Radeon software settings, please? That is a highly impressive result and I wonder if mine is capable of getting closer to it. 😊


----------



## 99belle99

Number one in the world atm for a single 6900XT, good going ZealotKi11er. But I beat your 9900K in CPU score with my 3700X.


----------



## ZealotKi11er

OC-NightHawk said:


> Can you share a screenshot of your MPT settings and Radeon software settings please. That is a highly impressive result and I wonder if mine is capable of getting closer to it. 😊


----------



## ZealotKi11er

99belle99 said:


> Number one in the world atm for a single 6900XT, good going ZealotKi11er. But I beat your 9900K in CPU score with my 3700X.


Yeah, I noticed my 9900K CPU score is not that good. I have seen it over 12K but for this run it was quite low. Still trying to understand TimeSpy CPU benchmark.


----------



## thomasck

tolis626 said:


> Really? Mine was a mess. Micro-stutters followed by mega-stutters. poor performance (or so it seemed due to the stuttering, FPS was in the ~120fps range at 1440p maxed out settings), poor audio, a lot of bugs (clipping through surfaces, glitchy animations etc) and poor network performance (although, that last one is probably my atrociously crappy connection at work). However, the game looks good, I can't wait to see it in a polished state. I love Battlefield, but I just can't play historic shooters any more. I can't find any interest in playing with the Gewehr and MP40 and Thompson any more. So BF1 and BFV were dead to me before they even came out. And I too have been stuck with Warzone. And there, where Infinity Ward took a lot of great steps forward with MW2019, there came Treyarch (or whatever studio it was), taking a lot of steps back with Cold War. It was sad to see a regression in gameplay quality, graphics (IMO), animations etc. Not to mention the atrocious weapon balance after the CW integration and the out of control cheater situation. And now they want to also chuck Vanguard in there. Nope, I'm out. I'll be playing BF2042 even if it sucks at launch.


Seriously, I played around 2 hours the day I posted here and the only issues were connecting to a match, and starting the match without the gun texture. Performance was about what you are getting, 120fps at 1440p, and I've read people with 3090s reporting the same fps as well. Agreed, the game looks good and I am looking forward to it. I can't be bothered to play WZ anymore (it was great); I only play if my mates are on, so it's some kind of **** show funny game. I am playing Enlisted instead, but as you mentioned, WW2 games and old weapons are just getting boring as well. BF1 and BF5 were fine until the point a lot of cheaters joined, so I also stopped; I hope BF2042 gets better anti-cheat protection. I don't even bother talking about Cold War, or Vanguard. As if it was not enough to nearly kill WZ with all the current bugs, adjusting loadouts every season, and using different guns just to not be at a disadvantage, then came Cold War, and now back to WW2 guns? Oh boy, I expected more.


----------



## 99belle99

thomasck said:


> Seriously, I've played around 2 hours the day I posted here and the only issue was to connect to a match, and starting the match without the gun texture. Performance was about what you are getting, 120fps and I've read people with 3090s reporting same fps as well, 1440p. Agreed, game looks and I am looking forward to it, I can't be bothered to play wz anymore (it was great), only if my mates are playing so it's some king of **** show funny game. I am playing Enlisted instead but as you mentioned ww2 games and old weapons are just getting boring as well. BF1 n BF5 were fine until the point a lot of cheaters joined the game, so I also stopped, I hope bf 2042 is getting better anti cheaters protection. I don't ever bother talking about cold war, or even vanguard, was not enough to kill wz with all current bugs, every season adjust loadouts, using different guns to do not be in disadvantage etc, then cold war, and now back to ww2 guns? Oh boy, I expected more.


I remember when I used to play Battlefield 4 a fair few years ago, and in one game a hacker had it in for me. I was so dumb and kept playing; I should have just quit the match and found a better server. I had good fun with that game and the majority are genuine players, but the hackers/cheaters really ruin it for everyone.


----------



## 99belle99

ZealotKi11er said:


> Yeah, I noticed my 9900K CPU score is not that good. I have seen it over 12K but for this run it was quite low. Still trying to understand TimeSpy CPU benchmark.


My CPU score fluctuates by a small bit too. You either get a good GPU score while your CPU score is down, or it's the other way round; it's very hard to get both a good GPU score and a good CPU score at the same time.

12k is pretty good. I have never got anywhere near that with my 3700X. 11.7-something is my best ever CPU score.


----------



## jfrob75

Skinnered said:


> This without RT and with fsr I suppose?


Without RT or FSR


----------



## lawson67

cfranko said:


> I read that some nail polish can burn when applied on the conductors, is that true?


Use this nail polish, it's what I've always used and it does not burn or lift.

Sally Hansen Advanced Hard as Nails Strengthener, 13.3ml : Amazon.co.uk: Beauty


----------



## weleh

ZealotKi11er said:


> Fast Memory Timings. Memory did not make much difference. I was just increasing for every last point. Core is what give all the point at the end 10Mhz ~ 90 points.


Memory doesn't matter? Memory is what carries the LC Ref cards, dude... super-binned 18Gbps G6 memory. Not even highly binned 2080 Tis had 18Gbps-capable G6 memory modules.

Look at the top 5 cards: 3 of them are LC reference cards, 1 is an XTXH memory-modded card, and then there's mine, a lowly XTX card.

All the Ref LC cards have way lower clock speed and average clock speed than my card yet produce 300 points higher.

So, memory does matter. Memory carries hard because of how fast these cards can clock; memory is what keeps the core fed.


----------



## lestatdk

I wish that we could exceed the 2150 barrier on the XTX cards. I have tested several times and my memory can do 2250 or so, but it cripples the core due to the driver restriction.


----------



## weleh

lestatdk said:


> I wish that we could exceed the 2150 barrier on the XTX cards. I have tested several times and my memory can do 2250 or so, but it cripples the core due to the driver restriction.


There's no reason why you'd want that. They are 16Gbps-rated memory chips; most cards do around 16.5/16.8Gbps, which is the upper ceiling of these 16Gbps chips.

Ref LC cards use 18Gbps modules which are super-binned; that's why they handle higher clocks and tighter timings.


----------



## ilmazzo

So every factory liquid-cooled 6900XT has these 18Gbps GDDR6 modules? So we are speaking of the Sapphire Toxic, PowerColor Liquid Devil, etc etc? I was not aware of this ....


----------



## weleh

ilmazzo said:


> So every factory liquid-cooled 6900XT has these 18Gbps GDDR6 modules? So we are speaking of the Sapphire Toxic, PowerColor Liquid Devil, etc etc? I was not aware of this ....


No.

Only AMD Reference Liquid cooled cards (the ones that are sold with prebuilts).


----------



## cfranko

ilmazzo said:


> So every factory liquid-cooled 6900XT has these 18Gbps GDDR6 modules? So we are speaking of the Sapphire Toxic, PowerColor Liquid Devil, etc etc? I was not aware of this ....


No, only the liquid cooled AMD reference card.


----------



## ilmazzo

argh! ok thanks!


----------



## ZealotKi11er

weleh said:


> Memory doesn't mater? Memory is what carries LCRef cards dude...Super binned 18Gbps G6 memory not even highly binned 2080 TIs had 18Gbps capable G6 memory modules.
> 
> Look at the top 5 cards, 3 of them are LC reference cards and, 1 is a XTXH memory modded card and then there's mine a lowly XTX card.
> 
> All the Ref LC cards have way lower clockspeed and average clockspeed than my card yet produce 300 points higher.
> 
> So, memory does matter, memory carries hard because how fast these cards can clock, memory is important to keep being fed by the core.


You would be surprised how little difference 18Gbps makes. This is because of the 128MB Infinity Cache. At most you get 1-2% more, compared to a stock 16Gbps card. A 16Gbps card gets faster once you OC it to 2150MHz.

I can probably test clock for clock and tell you how much the difference is.
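For context, the raw bandwidth numbers behind that (a sketch, assuming the 6900 XT's 256-bit bus):

```python
# Raw GDDR6 bandwidth on a 256-bit bus for 16Gbps vs 18Gbps chips.
BUS_WIDTH_BITS = 256

def bandwidth_gb_s(data_rate_gbps):
    # bytes/s = per-pin data rate * bus width / 8 bits per byte
    return data_rate_gbps * BUS_WIDTH_BITS / 8

for rate in (16.0, 18.0):
    print(f"{rate:.0f}Gbps -> {bandwidth_gb_s(rate):.0f} GB/s")

# 12.5% more raw bandwidth, yet only ~1-2% more Time Spy score at 1440p --
# consistent with Infinity Cache absorbing much of the external-bandwidth demand.
print(f"raw uplift: {(18 / 16 - 1) * 100:.1f}%")
```

That's 512 vs 576 GB/s, a 12.5% raw uplift, which makes the ~1-2% score difference look small by comparison.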


----------



## cfranko

ZealotKi11er said:


> You would be surprised how little difference 18GBps makes.


Then how does the reference liquid cooled card surpass all other cards in Time Spy despite the crappy 120mm radiator?


----------



## LtMatt

ZealotKi11er said:


> You would be surprised how little difference 18Gbps makes. This is because of the 128MB Infinity Cache. At most you get 1-2% more, compared to a stock 16Gbps card. A 16Gbps card gets faster once you OC it to 2150MHz.
> 
> I can probably test clock for clock and tell you how much the difference is.


That’s a shame. I really like the look of that card, but I know it’s almost certainly pointless to switch to it from my Toxic Extreme.

Do you know if it uses the thermal pad?


----------



## Skinnered

jfrob75 said:


> Without RT or FSR


If you have the time, can you run it with RT @ 4K?
I've got a Sapphire EE LC on the way


----------



## ZealotKi11er

cfranko said:


> Then how does the Referance liquid cooled card surpass all other cards in Time Spy despite the crappy 120mm radiator?


I put it on a custom loop.


----------



## ZealotKi11er

LtMatt said:


> That’s a shame. I really like the look of that card, but I know it’s almost certainly pointless to switch to it from my Toxic Extreme.
> 
> Do you know if it uses the thermal pad?


Yes, it uses the thermal pad. The LC can hold about 350W.


----------



## J7SC

lestatdk said:


> I wish that we could exceed the 2150 barrier on the XTX cards. I have tested several times and my memory can do 2250 or so, but it cripples the core due to the driver restriction.


...yeah, that came up before and still isn't 'fixed'  . Every time there's a new MPT beta, I get my hopes up a bit that they finally found the culprit which must hide out somewhere as a switch in the win registry, perhaps triggered by vbios microcode. What irks me the most is that AMD's own auto-oc tuning software tells me that 2260 would be possible (110 MHz over the current limit) and I've run that with the gimped GPU clocks and compared it to 2150 MHz VRAM - improvements are measurable even beyond that speed. I don't know where the true 'top efficiency' is with my 6900XT's particular set of VRAM chips, and I really would like to find out, what with much improved cooling now etc











On a related note, manufacturers are also known to underclock VRAM, be it for their model palette 'differentiation', power consumption or heat. My 3090 Strix OC for example with its 24 GB of GDDR6X goes way higher than the 1219 MHz default - that's because the Micron chips are actually rated much higher at 1313 MHz (21 Gbps) per TechPowerUp:
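For anyone checking those numbers: the listed GDDR6X memory clock maps to the per-pin data rate at 16x (PAM4 signaling doubles the bits per transfer vs regular GDDR6), so the conversion is just:

```python
# GDDR6X per-pin data rate: memory clock (MHz) * 16 bits per clock cycle.
def gddr6x_gbps(clock_mhz):
    return clock_mhz * 16 / 1000  # MHz -> Gbps per pin

print(f"1219MHz default -> {gddr6x_gbps(1219):.1f}Gbps")  # ~19.5Gbps
print(f"1313MHz rated   -> {gddr6x_gbps(1313):.1f}Gbps")  # ~21.0Gbps
```

Which is exactly the 19.5Gbps stock vs 21Gbps rated spread on those Micron chips.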


----------



## OC-NightHawk

ZealotKi11er said:


> ..post..


You have an amazing card. I have tried every which way I could think of and I cannot even hit 25000 on my graphics score. This is my best graphics score AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,Micro-Star International Co., Ltd. PRESTIGE X570 CREATION (MS-7C36) (3dmark.com).

I can get my GPU core almost up to 2800, but strangely enough, the more I push the worse my score gets.

On the bright side, what I am getting is more than capable of playing the latest games at a fast framerate, but I wish I could have gotten my card up to 2800MHz.


----------



## 99belle99

ZealotKi11er said:


> I put it on a custom loop.


How was it able to push so high with only 2x 8pin sockets?


----------



## jfrob75

Skinnered said:


> If you have the time, can you run it with RT @ 4K?
> I've got a Sapphire EE LC on the way


Here is my FC6 benchmark results using the same GPU clock settings as before but with RT reflections and shadows enabled.


----------



## OC-NightHawk

While testing I have noticed that even if I specify 475W and then give +15% in the Radeon software settings, I never see the wattage go above 390W. I'm wondering if the fact that I am not seeing the power draw go higher is the reason I cannot reach 2800+MHz.

Is it just that my room is too warm? The ambient air temperature in my machine's case by the CPU and VRM is 32C. Over by the chipset the ambient air temperature is 28C.

The GPU is idling at 36 to 38C. It is somewhat dependent on what the CPU is doing at the time because they share a single loop.

On my highest overall score the GPU topped out at 59C. I'm guessing that this is not the junction temperature though; I think that was around 76, maybe 78C, in that test run.


----------



## ZealotKi11er

OC-NightHawk said:


> While testing I have noticed that even if I specify 475W and then give +15% in the Radeon software settings I never see the wattage go above 390W. I'm wondering if the fact that I am not seeing the power draw go higher is the reason I cannot reach 2800+MHz.
> 
> Is it just that my room is to warm? The ambient air temperature in my machines case by the CPU and VRM is 32C. Over by the chipset the ambient air temperature is 28C.
> 
> The GPU is idling at 36 to 38C. It is somewhat dependent on what the CPU is doing at the time because they share a single loop.
> 
> On my highest overall score the GPU topped out at 59C. I'm guessing that this is not the junction temperature though. I think it was around 76 maybe 78C in that test run.
> View attachment 2528394


I only gave it 475W because I noticed some dips in the clocks during the run. Start with 400W and increase from there until you have a flat line. Did you disable the features posted?
Also make sure you have SAM enabled.
Set the display resolution to 1440p with no scaling.
Set the display to lowest quality.
With memory, start with 2100 and try multiple runs with each 10MHz increase to see if you gain or lose performance.
Shut down the PC after you have done your best run, let it cool down and run again from a fresh boot.
This can help get some more points, as the memory retrains each time the system boots and you could get better results.

Also picture of my XTXH LC


----------



## weleh

ZealotKi11er said:


> You would be surprised how little difference 18Gbps makes. This is because of the 128MB Infinity Cache. At most you get 1-2% more, compared to a stock 16Gbps card. A 16Gbps card gets faster once you OC it to 2150MHz.
> 
> I can probably test clock for clock and tell you how much the difference is.


Surprised?


I'm not surprised, because I've been playing with a 6900XT long enough now to know how the card works, and I know you're wrong.

It's 100% the memory carrying the LC Reference cards. That's why they are the top cards on the leaderboard: at some point, no matter how fast your core goes, if the memory can't keep up you get a pipeline bottleneck, just like it happens with CPUs and RAM.

Infinity Cache is on every card, so it's not that either.

But it's easy to prove us wrong: downclock your card to 2150MHz (16.5Gbps) FT1 and leave the core clock at your usual speed. Then post the result here.


----------



## 99belle99

weleh said:


> Surprised?
> 
> 
> I'm not surprised because I've been playing with a 6900XT for too long now to know how the card works and I know you're wrong.
> 
> 100% memory carrying the LC Reference cards. That's why they are the top cards on the leaderboard because at some point, no matter how fast your core goes, if memory can't keep up you get a pipeline bottleneck, just how it happens with CPUs and RAM.
> 
> InfinityCache is on every card so it's not that at all too.
> 
> But it's easy to prove us wrong, downclock your card to 2150 Mhz (16,5 Gbps) FT1 and leave the core clock at your usual speed. Then post here the result.


You're right; if you look up his number 1 position score, the memory average is 2371MHz.


----------



## Skinnered

jfrob75 said:


> Here is my FC6 benchmark results using the same GPU clock settings as before but with RT reflections and shadows enabled.
> View attachment 2528397


Thanks, that's a great result!


----------



## Roldo

Do people have issues with instability in Time Spy GT2 even @stock?
I've had two 6900 XTs (ref and Nitro+) and both could randomly crash in TS GT2 even completely stock.
Other games/benches ran fine.
I tried two different PSUs (Revolution DF 750 / RM850x v2), same results.
I had no issues with an OC'ed 3080 prior to this.


----------



## weleh

Jazz9 said:


> Do people have issues with instability in TimeSpy GT2 even @stock?
> I've had 2 6900 XT (ref and Nitro+) and both could randomly crash in TS GT2 even completely stock.
> Other games/benches ran fine.
> Tried with two different PSUs (Revolution DF 750 / RM850x v2), same results.
> I had no issues with a oc'ed 3080 prior to this


If the card is completely stock that's a bit strange but it's not unheard of unfortunately because Time Spy is a bit trash in that regard.


----------



## ZealotKi11er

Jazz9 said:


> Do people have issues with instability in TimeSpy GT2 even @stock?
> I've had 2 6900 XT (ref and Nitro+) and both could randomly crash in TS GT2 even completely stock.
> Other games/benches ran fine.
> Tried with two different PSUs (Revolution DF 750 / RM850x v2), same results.
> I had no issues with a oc'ed 3080 prior to this


I have noticed that at stock the card will boost a lot higher in Time Spy. Try enabling manual OC and run it again. Don't change anything else.


----------



## Skinnered

Have you guys also disabled all power-saving features with MPT and via the registry or MSI AB?


----------



## ZealotKi11er

weleh said:


> Surprised?
> 
> 
> I'm not surprised because I've been playing with a 6900XT for too long now to know how the card works and I know you're wrong.
> 
> 100% memory carrying the LC Reference cards. That's why they are the top cards on the leaderboard because at some point, no matter how fast your core goes, if memory can't keep up you get a pipeline bottleneck, just how it happens with CPUs and RAM.
> 
> InfinityCache is on every card so it's not that at all too.
> 
> But it's easy to prove us wrong, downclock your card to 2150 Mhz (16,5 Gbps) FT1 and leave the core clock at your usual speed. Then post here the result.


I still have to try a different BIOS that is designed for 16Gbps.
This is with me setting the memory to 1075MHz via MPT.

2750MHz - 2310MHz (Stock) - I scored 21 865 in Time Spy
2750MHz - 2420MHz (FT1) - I scored 22 040 in Time Spy
2750MHz - 2150MHz (FT1) - I scored 21 813 in Time Spy

I am going to try to flash a 16Gbps XTXH vBIOS and see how it goes.

16Gbps vBIOS: 

2750MHz - 2150MHz (FT1) - I scored 21 862 in Time Spy
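Side by side as percentages (scores copied from the runs above, all at a 2750MHz core):

```python
# Time Spy scores from the runs above, all at a 2750MHz core clock.
runs = {
    "2310MHz stock": 21865,
    "2420MHz FT1":   22040,
    "2150MHz FT1":   21813,
}
baseline = runs["2150MHz FT1"]
for setting, score in runs.items():
    delta = (score / baseline - 1) * 100
    print(f"{setting}: {score} ({delta:+.2f}% vs 2150MHz FT1)")
```

The whole memory range spans barely 1% of graphics score, which lines up with the 1-2% estimate earlier.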


----------



## kratosatlante

ZealotKi11er said:


> I only gave itt 475W because I noticed some dips in the clk during the run. Start with 400W and increase from there until you have a flat line. Did you disable the features posted?
> Also make sure you have SAM enabled.
> Set the display resolution to 1440p with so scaling.
> Set display to lowest quality.
> With memory start with 2100 and try multiple runs with each 10MHz increase to see if you gain or lose performance.
> Shut down PC after you have done your best run, let it cool down and run again with a fresh boot.
> This can help get some more point as the memory retrains each time system boots and you could get better results.
> 
> Also picture of my XTXH LC
> View attachment 2528401


You have an LG OLED? Did you try 4K 12-bit 4:4:4 120Hz?


----------



## LtMatt

kratosatlante said:


> You have an LG OLED? Did you try 4K 12-bit 4:4:4 120Hz?


The 6900 XT HDMI port does not have the bandwidth to support that; the max it can do is 4K120 4:4:4 at 10-bit. 12-bit is possible if you drop the refresh to 60Hz, however the TV (at least the LG CX OLED) does not support higher than 10-bit, so it's pointless.


----------



## OC-NightHawk

ZealotKi11er said:


> Still have to try with a difference BIOS that is designed for 16Gbps.
> This is with me setting memory to 1075MHz with MPT.
> 
> 2750MHz - 2310MHz (Stock) - I scored 21 865 in Time Spy
> 2750MHz - 2420MHz (FT1) - I scored 22 040 in Time Spy
> 2750MHz - 2150MHz (FT1) - I scored 21 813 in Time Spy
> 
> I am going to try to flash a 16Gbps XTXH vBIOS and see how it goes.
> 
> 16Gbps vBIOS:
> 
> 2750MHz - 2150MHz (FT1) - I scored 21 862 in Time Spy


Maybe I'm wrong, so I apologize if I have arrived at the incorrect conclusion, but I think I may have found an example of why the memory only matters up to a point.

Once the bandwidth is high enough, increasing it further doesn't increase the pixel fillrate or the texture fillrate. Conversely, increasing the GPU speed without increasing the memory speed stunts the increase in the two fill rates. So if the bandwidth is not high enough for the fill rates to maximize, you see a gain just by increasing the memory speed. But once the bandwidth is enough for the GPU core to do its thing, the extra bandwidth doesn't do anything. It's like being on a bike in a low gear and spinning the pedals as fast as you can but getting nowhere because the gear is too low.
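To put rough numbers on that intuition, a back-of-envelope sketch with assumed 6900 XT-like figures (128 ROPs, 16Gbps GDDR6 on a 256-bit bus). Peak fill rate is never sustained in practice, and Infinity Cache and compression close much of the gap, but it shows how bandwidth demand scales with core clock:

```python
# Back-of-envelope: peak pixel fill rate vs external memory bandwidth.
# Assumed 6900 XT-like figures: 128 ROPs, 256-bit bus at 16Gbps GDDR6.
ROPS = 128
BANDWIDTH_GB_S = 16 * 256 / 8   # 512 GB/s raw
BYTES_PER_PIXEL = 4             # 32-bit color write only; ignores Z, reads, compression

for core_ghz in (2.0, 2.5, 2.8):
    fillrate_gpix = ROPS * core_ghz                # peak Gpixels/s
    demand_gb_s = fillrate_gpix * BYTES_PER_PIXEL  # bandwidth to sustain that peak
    print(f"{core_ghz}GHz: {fillrate_gpix:.0f} Gpix/s peak, "
          f"~{demand_gb_s:.0f} GB/s needed vs {BANDWIDTH_GB_S:.0f} GB/s available")
```

The gap between what the ROPs could write and what the DRAM can supply widens as the core clocks up, which is why the memory side starts to matter more on the highest-clocking cards.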


----------



## lawson67

LtMatt said:


> 6900 XT HDMI port does not have the bandwidth to support that, max it can do is 4K120 4:4:4 at 10bit. 12bit is possible if you drop refresh to 60HZ, however the TV (at least the LG CX OLED) does not support higher than 10 bit so its pointless.


So you're saying that the HDMI 2.1 ports on all RX 6900 XTs are limited to 40Gb/s? The full-standard bandwidth for HDMI 2.1 allows for 48Gb/s, which is enough bandwidth for full RGB 4:4:4 12-bit @ 4K 120Hz VRR. However, LG limited the HDMI 2.1 ports on the LG CX to 40Gb/s, unlike the LG C9, whose HDMI 2.1 ports are full 48Gb/s and can accept a full RGB 4:4:4 12-bit @ 4K VRR 120Hz signal. As you say, though, all the LG OLED panels are only 10-bit, so it makes no difference anyhow. Still, I have not seen any documentation stating that the HDMI 2.1 ports on the RX 6900 XT are limited to 40Gb/s as opposed to the full-standard HDMI 2.1 48Gb/s.
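The raw signal math backs that up. A sketch using the CTA-861 4K120 total timing (4400x2250 including blanking) and ignoring FRL coding overhead, so real-world headroom is a bit tighter than this:

```python
# Uncompressed 4K120 signal bandwidth vs the 40Gb/s and 48Gb/s HDMI 2.1 link rates.
# Uses the CTA-861 4K120 total timing (4400 x 2250 incl. blanking); FRL coding
# overhead is ignored.
TOTAL_PIXELS = 4400 * 2250
REFRESH_HZ = 120

def signal_gbps(bits_per_component):
    bpp = bits_per_component * 3  # RGB / 4:4:4
    return TOTAL_PIXELS * REFRESH_HZ * bpp / 1e9

for bits in (8, 10, 12):
    print(f"{bits}-bit 4:4:4 @ 4K120: {signal_gbps(bits):.1f} Gb/s")
```

12-bit lands at about 42.8Gb/s, over a 40Gb/s link but within 48Gb/s, while 10-bit at about 35.6Gb/s fits in 40Gb/s, which matches the 4K120 10-bit ceiling on the CX.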


----------



## ptt1982

Moved from the 21.9.2 --> 21.10.2 driver on my Red Devil (non-Ultimate) featuring a bad Alphacool waterblock mount and a Corsair XD series custom loop:

-Temps the same (edited this, they were around the same before)
-GPU ASIC power now displayed correctly with the other TDP etc. metrics in HWiNFO64
-Delta between edge/junction peak in TS2 is 25C (53C/78C), around the same as before
-Had to lower clocks by 10MHz from 2685MHz to 2675MHz to pass TS2; 2690MHz was an insta-crash in TS2, 2680MHz crashed towards the end of it
-Scored a 23620 graphics score, which is margin-of-error level from my ~23675 high score (with 10MHz higher clocks), but still 0.2% less
-The driver fixed the annoying 144p max resolution YouTube issue for me
-Overall stability seems better and the Adrenalin menus are snappier, but I haven't used the drivers that much so take that with a grain of salt

I'll let you know if games feel different.


----------



## weleh

OC-NightHawk said:


> Maybe I'm wrong so I aplogize if I have arrived to the incorrect conclusion but I think I may have found an example of why the memory only matters to a point.
> 
> Once the bandwidth is high enough increasing the bandwidth doesn't increase the Pixel Fillrate or the texture fillrate. Conversely increasing the GPU speed without increasing the memory speed stunts the increase in the two fill rates. So if the bandwidth is not fast enough for the fill rates to maximize you see a gain by just increasing the memory speed. But once the bandwidth is enough for the GPU core to do it's thing the extra bandwidth doesn't do anything. It's like being on a bike in a low gear and spinning the peddles as fast as you can but getting no where because the gear is too low.
> View attachment 2528452


This is exactly what I was talking about.

If your core is doing 2600 MHz then going to super-high memory won't yield meaningful gains; however, once you start pushing the core to higher clocks, memory bandwidth matters a lot, because stock memory can't keep up with 2800 MHz GPUs anymore.
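For what it's worth, the shape of that scaling argument can be sketched numerically. This is a toy model, not actual RDNA2 behaviour: the 128-ROP count matches the 6900 XT's published specs, but the 4-bytes-per-pixel write and the idea of comparing raw demand to raw supply are illustrative assumptions (Infinity Cache and colour compression absorb much of the demand in practice):

```python
# Toy model: how much the ROPs could write per second vs. how much the memory
# can deliver. ROP count, bytes per pixel and bus width are illustrative
# assumptions, not exact RDNA2 behaviour.

def pixel_demand_gbs(core_mhz, rops=128, bytes_per_pixel=4):
    # one pixel per ROP per clock, 4 bytes per colour write
    return core_mhz / 1000 * rops * bytes_per_pixel

def bandwidth_gbs(mem_gbps, bus_bits=256):
    # peak VRAM bandwidth in GB/s
    return mem_gbps * bus_bits / 8

for core, mem in [(2600, 16), (2800, 16), (2800, 18)]:
    ratio = pixel_demand_gbs(core) / bandwidth_gbs(mem)
    print(f"{core} MHz core @ {mem} Gbps: demand/supply ratio {ratio:.2f}")
```

The demand/supply ratio climbs as the core clock rises and drops back when the memory goes to 18 Gbps, which is the shape of the effect being described, even if the absolute numbers are a caricature.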


----------



## OC-NightHawk

weleh said:


> This is exactly what I was talking about.
> 
> If your core is doing 2600 MHz then going to super-high memory won't yield meaningful gains; however, once you start pushing the core to higher clocks, memory bandwidth matters a lot, because stock memory can't keep up with 2800 MHz GPUs anymore.


Having learned this, it makes me wonder about the RTX 3090. Its bandwidth is essentially double the 6900 XT's, but its fill rates are so much lower. Yet it is reported to do better at 4K than the 6900 XT. I wonder how that holds up against an overclocked XTXH?
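"Essentially double" is roughly right. Peak bandwidth is just data rate x bus width / 8, and a quick check using the published 19.5 Gbps/384-bit and 16 Gbps/256-bit figures (ignoring Infinity Cache) puts it closer to 1.8x:

```python
# Peak VRAM bandwidth in GB/s = per-pin data rate (Gbps) x bus width (bits) / 8.
def peak_bw_gbs(gbps, bus_bits):
    return gbps * bus_bits / 8

bw_3090   = peak_bw_gbs(19.5, 384)  # GDDR6X -> 936 GB/s
bw_6900xt = peak_bw_gbs(16.0, 256)  # GDDR6  -> 512 GB/s
print(f"RTX 3090:   {bw_3090:.0f} GB/s")
print(f"RX 6900 XT: {bw_6900xt:.0f} GB/s")
print(f"ratio:      {bw_3090 / bw_6900xt:.2f}x")  # 1.83x
```

So it's 1.83x rather than 2x on paper, and the 6900 XT's 128 MB Infinity Cache narrows the effective gap further at resolutions where it still hits.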


----------



## weleh

OC-NightHawk said:


> Having learned this it makes me wonder about the RTX 3090. It’s bandwidth is essentially double the 6900XT but the fill rates are so low. Yet it is reported to do better at 4K then the 6900 XT. I’m wondering how that holds up when compared to the overclocked XTXH?


At 4K, bandwidth is very important. The reason the 6900 XT has a better fill rate and whatnot is that its clock speed is much higher.


----------



## ZealotKi11er

weleh said:


> At 4K, bandwidth is very important. The reason the 6900 XT has a better fill rate and whatnot is that its clock speed is much higher.


Just to let you know, in Time Spy a 20 MHz core OC will yield more performance than going from 16 Gbps to 18 Gbps. You saw my scores and how close they are. Time Spy is also a 1440p test; I don't know about 4K and the effect of 18 Gbps.


----------



## jonRock1992

OC-NightHawk said:


> You have an amazing card. I have tried every which way I could think of and I cannot even hit 25000 on my graphics score. This is my best graphics score AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,Micro-Star International Co., Ltd. PRESTIGE X570 CREATION (MS-7C36) (3dmark.com).
> 
> I can get my GPU core almost up to 2800, but strangely enough, the more I push, the worse my score gets.
> 
> On the bright side, what I am getting is more than capable of playing the latest games at a fast framerate, but I wish I could have gotten my card up to 2800 MHz.


Your GPU seems to perform right around where mine does. I can't for the life of me get 25k GPU score lol.

Also, is it normal for fluid level in your loop to go down over time? It seems I'm losing about 10mL every few weeks. So I just keep topping it off. I don't see any leaks anywhere.


----------



## lestatdk

jonRock1992 said:


> Your GPU seems to perform right around where mine does. I can't for the life of me get 25k GPU score lol.
> 
> Also, is it normal for fluid level in your loop to go down over time? It seems I'm losing about 10mL every few weeks. So I just keep topping it off. I don't see any leaks anywhere.


I sure hope so  Mine does the same. No leaks either


----------



## jonRock1992

lestatdk said:


> I sure hope so  Mine does the same. No leaks either


It's so annoying lol. This is my first custom loop and it's been bugging the **** out of me. I think it's evaporating out of my tubes or something. I'm using soft tubing.


----------



## Neoki

jonRock1992 said:


> It's so annoying lol. This is my first custom loop and it's been bugging the **** out of me. I think it's evaporating out of my tubes or something. I'm using soft tubing.


Soft tubing loses fluid very slowly over time through permeation; it's not a 100% complete seal. AIOs suffer the same gradual performance loss for the same reason; they're just pressurized and treated to hold up better than custom loops do.


----------



## OC-NightHawk

weleh said:


> At 4K, bandwidth is very important. The reason the 6900 XT has a better fill rate and whatnot is that its clock speed is much higher.


Fascinating. So even with the fill rates being up, the bandwidth is also a factor in how well it can handle the image size. So maybe the bandwidth on its own becomes important at around 4K?

Assuming a 6900 XT or a successor that worked the same way but faster, what bandwidth would keep the AMD card from losing ground to the 3090 at 4K?

I'd be curious how my 6900 XT would compare.


----------



## ilmazzo

Steam will condense back to liquid when it gets cold; I guess you have air in the loop that is slowly working its way out. My custom loop stopped dropping after one month; in fact, now I'd have to top the loop up a bit to get the water level back above the inlet pipe in the reservoir.


----------



## weleh

ZealotKi11er said:


> Just to let you know in TimeSpy 20MHz Core OC will yield more performance than going from 16Gbps to 18Gbps. You saw my score and how close they are. TimeSpy is also 1440p test. I don't know about 4K and the effect of 18Gbps.


You're not reading are you?

Do a bench at your usual clock speed, 2750 or whatever it is you're benching top spot at but with the memory downclocked to 16Gbps.

Then do the same test with 2750 or whatever it is and then overclock the memory to 18 Gbps.

Post results.


----------



## weleh

This is my run at 25.7K points:

Clock frequency 2,859 MHz (2,255 MHz)
Average clock frequency 2,814 MHz
Memory clock frequency 2,110 MHz (2,000 MHz)
Average memory clock frequency 2,098 MHz

Your run at 26K

Clock frequency 2,867 MHz (2,350 MHz)
Average clock frequency 2,786 MHz
Memory clock frequency 2,420 MHz (2,312 MHz)
Average memory clock frequency 2,371 MHz

Other top Reference LC score

Clock frequency 2,795 MHz (2,400 MHz)
Average clock frequency 2,766 MHz
Memory clock frequency 2,408 MHz (2,312 MHz)
Average memory clock frequency 2,395 MHz

Other top Reference LC score

Clock frequency 2,805 MHz (2,400 MHz)
Average clock frequency 2,764 MHz
Memory clock frequency 2,400 MHz (2,312 MHz) 
Average memory clock frequency 2,370 MHz

Do you see the similarities? The only variables that change are the memory clock speed and the timings; core-wise, I'm the fastest.


----------



## ZealotKi11er

weleh said:


> This is my run at 25.7K points:
> 
> Clock frequency 2,859 MHz (2,255 MHz)
> Average clock frequency 2,814 MHz
> Memory clock frequency 2,110 MHz (2,000 MHz)
> Average memory clock frequency 2,098 MHz
> 
> Your run at 26K
> 
> Clock frequency 2,867 MHz (2,350 MHz)
> Average clock frequency 2,786 MHz
> Memory clock frequency 2,420 MHz (2,312 MHz)
> Average memory clock frequency 2,371 MHz
> 
> Other top Reference LC score
> 
> Clock frequency 2,795 MHz (2,400 MHz)
> Average clock frequency 2,766 MHz
> Memory clock frequency 2,408 MHz (2,312 MHz)
> Average memory clock frequency 2,395 MHz
> 
> Other top Reference LC score
> 
> Clock frequency 2,805 MHz (2,400 MHz)
> Average clock frequency 2,764 MHz
> Memory clock frequency 2,400 MHz (2,312 MHz)
> Average memory clock frequency 2,370 MHz
> 
> Do you see the similarities? The only variable that changes is the memory clock speed and the timings, core wise, I'm the fastest.



There are other trade secrets to getting to 26K. Also, your memory is lower than what other XTXHs with 16 Gbps can achieve; I have seen some hit 2170 MHz.


----------



## LtMatt

ZealotKi11er said:


> There are other trade secrets to get to 26K. Also your memory is lower than what other XTXH with 16Gbps can achieve. I have seen some get 2170MHz.


What are these trade secrets? We're all friends here and in some cases, lovers! 🤠


----------



## OC-NightHawk

I would like to know them too.

I have found a real-world example of my findings with Time Spy. The more I crank the memory speed up, the slower my FPS in Horizon Zero Dawn. I don't know why that would be. I'm now working on the principle of increasing the core if I can, and only increasing the memory if the bandwidth is insufficient for the core speed. I'll try to make a recording to demonstrate my findings sometime this evening.


----------



## ZealotKi11er

OC-NightHawk said:


> I would like to know them too.
> 
> I have found a real-world example of my findings with Time Spy. The more I crank the memory speed up, the slower my FPS in Horizon Zero Dawn. I don't know why that would be. I'm now working on the principle of increasing the core if I can, and only increasing the memory if the bandwidth is insufficient for the core speed. I'll try to make a recording to demonstrate my findings sometime this evening.


If you see a perf decrease in games, it just means EDC is kicking in: your RAM is not stable. It's the same with Nvidia GPUs; the higher the EDC error count, the lower the perf. It is usually fine to run more aggressive memory clocks for a fast benchmark run. Fire Strike Extreme is a good tool for memory stability testing.



LtMatt said:


> What are these trade secrets? We're all friends here and in some cases, lovers! 🤠


A bunch of random stuff that even I don't know how I got to, lol.
Windows registry, display resolutions, disabling features, CPU/RAM tuning.


----------



## OC-NightHawk

ZealotKi11er said:


> If you see perf decrease in games it just means EDC is kicking in. Your ram is not stable. This is same with Nvidia GPUs. The higher the EDC the lower the perf. It is usually fine for a fast benchmark run to run more aggressive memory clocks. Fire Strike Extreme is a good tool to use for memory stability.
> 
> 
> 
> A bunch of random stuff that even I dont know how I got there lol.
> Windows Registry, Display Resolutions, Disabling features, CPU/RAM Tuning.


That makes sense. I guess I found the sweet spot for my memory, disappointingly low. I'm wondering if it would be better to go to normal timings and try for a faster speed, to see if that would give me more bandwidth while remaining stable.


----------



## The EX1

Those of you with waterblocks on your cards, what are you seeing for temperature deltas between your gpu and your fluid?


----------



## OC-NightHawk

The EX1 said:


> Those of you with waterblocks on your cards, what are you seeing for temperature deltas between your gpu and your fluid?


The delta is about 23C between the fluid temperature and the junction and 11C between the fluid and the GPU core in the fluid directly after the GPU in the loop.


----------



## Neoki

The EX1 said:


> Those of you with waterblocks on your cards, what are you seeing for temperature deltas between your gpu and your fluid?


Currently running an OEM-blocked XFX Zero that's fresh off the production line. Delta is about 24C at 350-370 W via MPT, 30C at 400 W+. GPU edge is 50C at up to 350 W, 54C at 400 W+. Fluid holds well at 31C under max load, but I have an insane radiator setup.


----------



## weleh

ZealotKi11er said:


> There are other trade secrets to get to 26K. Also your memory is lower than what other XTXH with 16Gbps can achieve. I have seen some get 2170MHz.


My card isn't an XTXH; it's a Toxic LE, aka XTX.

Unless people are cheating the benchmark with non-MPT-related tweaks, which I don't condone because it makes it unfair and stupid, I have already shared all the tweaks I've found that helped my scores, bar one, which I won't share for the time being because it's hardware-related and not actually accessible to most people here.

All of them are in one or two comments on this thread anyway for everyone to see.

I stand by my point: LC reference cards are carried by their memory and its extra stability vs all the other cards running at 16 Gbps.


----------



## weleh

All of these cards except mine are Reference LC cards.
LIME's card isn't a ref LC, but it has its BIOS and is hardware-modded (150+ caps on the memory and other stuff he hasn't disclosed).


----------



## ZealotKi11er

weleh said:


> View attachment 2528499
> 
> 
> All of these cards, except mine are Reference LC cards.
> LIME's card isn't a ref LC but it has it's bios and is hardware modded (150+ caps on the memory and other stuff he hasn't disclosed).


I bet if I did any HW-related mods I would be even higher.


----------



## ptt1982

LtMatt said:


> What are these trade secrets? We're all friends here and in some cases, lovers! 🤠


Who loves who! I want to know all the rumors and drama on overclock's 6900xt threads!!

Anyone else noticed that performance increases from AMD drivers have stopped since June, and for months we've only had stability fixes and features? The list of known issues is getting short. I wonder if they will try to squeeze out more performance and do optimizations moving forward. That's what happened with the 5700 XT driver cycles as well.


----------



## ptt1982

Neoki said:


> Currently running an OEM blocked XFX Zero, that's fresh off the production release line. Delta is about 24c at 350-370w MPT, 30c at 400w+. GPU Edge is 50c at up to 350w, 54c at 400w+. Fluid holds well at 31c under max load, but I have an insane radiator setup.


I actually have very similar temps on my Red Devil 6900 XT (non-Ultimate). The radiator setup is 240 mm plus 360 mm for both CPU and GPU, Hydro X Series with the XD3 pump/reservoir combo.

Out of interest, do you guys do maintenance on your systems every 6 months? I'm thinking that dusting the radiators and fans with my electronics-friendly compressed-air duster and inspecting the system once every 6 months should be enough, and draining it every 9-12 months. It's such a hassle, and I'm super busy with work; weekends disappear with activities and family...


----------



## J7SC

The results achieved here by several folks are stunning, no matter what.
On the topic of VRAM, both serious tests by tech sites and my own comparison between the 3090 and the 6900 XT suggest that up to 1440p, the much narrower memory bandwidth of the 6900 XT doesn't matter as much (perhaps due to Infinity Cache), but at 4K it most certainly does (see GPU-Z below). Also, the two GPUs have different die designs, memory types, etc., so head-to-head comparisons based on only one parameter such as MHz limp a bit.

The obvious tip would be to cool your cards as best you can and find the highest ('most efficient') VRAM settings before scores go down at a fixed GPU speed. It is worth repeating that on my 6900 XT I'm leaving about 110 MHz or so of VRAM speed on the table due to artificial restrictions on the 'non-H' chip, even with fast timings. While the 6900 XT is my 'daily work' machine (more so than the 3090), I would still like to run it at its full 24/7 potential.


----------



## ZealotKi11er

J7SC said:


> The results achieved here by several folks are stunning, no matter what.
> On the topic of VRAM, both serious tests by tech sites as well as my own comparison between the 3090 and 6900 XT suggest that up to 1440 rez, the much narrower memory bandwidth of the 6900XT doesn't matter as much (perhaps due to InfinityCache), but at 4K, it most certainly does (see GPUz below). Also, the two GPUs have different die designs, memory types etc so some head--to-head comparisons based on only one parameter such as MHz limp a bit.
> 
> The obvious tip would be to cool your cards as best you can and find the highest VRAM settings ('most efficient') before scores go down at a fixed GPU speed. It is worth repeating that on my 6900XT, I'm leaving about 110Mhz or so VRAM speed on the table due to artificial restrictions on the 'non-H' chip, even w/ fast timings. While the 6900XT is my 'daily work' machine (more so than the 3090), I still would like to run it at its full 24/7 potential.
> 
> View attachment 2528506


Nvidia doing better at 4K is not because of memory bandwidth. It's because they have more shader power, and it's easier to utilize at 4K. Just look at something like the 3070 vs the 6700 XT: a much smaller bandwidth difference (14 Gbps on a 256-bit bus vs 16 Gbps on a 192-bit bus, plus the fast Infinity Cache), and yet at 4K the lead grows, as long as the 3070 doesn't run out of VRAM. Also, the 6900 XT wouldn't have scaled that well at 4K vs something like the 6800, which has the same memory subsystem. Nvidia doing better at 4K is not AMD scaling worse at 4K; it's Nvidia scaling worse at 1080p/1440p.
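The 3070 vs 6700 XT example in numbers: raw peak bandwidth is 448 vs 384 GB/s, a much smaller gap than the bus widths alone suggest. The "effective bandwidth" estimate below is a crude illustration only; the cache hit rates and cache bandwidth are purely hypothetical assumptions, not AMD figures:

```python
# Raw peak bandwidth: per-pin data rate (Gbps) x bus width (bits) / 8.
def peak_bw_gbs(gbps, bus_bits):
    return gbps * bus_bits / 8

rtx3070  = peak_bw_gbs(14, 256)   # 448 GB/s
rx6700xt = peak_bw_gbs(16, 192)   # 384 GB/s

# Crude effective-bandwidth model: a fraction of traffic is served from the
# Infinity Cache at cache speed, the rest from VRAM. Hit rate and cache
# bandwidth here are made-up numbers; the real hit rate falls as resolution
# rises, which is the point being made above.
def effective_bw(vram_bw, hit_rate, cache_bw):
    return hit_rate * cache_bw + (1 - hit_rate) * vram_bw

for res, hit in [("1440p", 0.6), ("4K", 0.4)]:
    print(f"6700 XT at {res}: ~{effective_bw(rx6700xt, hit, 1000):.0f} GB/s effective")
```

Under these toy assumptions the 6700 XT's effective bandwidth drops as the cache hit rate falls at 4K, consistent with the Nvidia lead growing at higher resolution.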


----------



## J7SC

...well, we can agree to disagree, as much as I _already mentioned that it typically comes down to more than just one parameter_ and different die designs. That said, my own tests on my 6900 XT suggest that the higher the VRAM clock, the better the results at the same max GPU clocks, until I hit the artificially imposed (non-H) limit. So when it comes to VRAM, I do wonder what a 6900 XT could really do with no artificial limit, GDDR6_X_, and a 384-bit bus, which is 50% wider than the current one; and that is a compliment of sorts for Big Navi. Infinity Cache certainly helps, but its limited size kicks in sooner or later, depending on the app.

Ed.: While an artificial VRAM clock limitation and overall memory design limits are two different things, both ultimately limit available bandwidth. Also, my 2x 2080 Tis from three years ago, with official 14 Gbps GDDR6 VRAM, are overclocked to an efficiency peak of 2060+ MHz (in SLI, btw), which exceeds the default 6900 XT VRAM speed and isn't too far off the 2150 limit I like to hate... and even before a 50% wider bus, GDDR6X itself provides another big boost over GDDR6. I simply think that a) these limitations start to show at 4K and b) the 6900 XT has a lot more potential than we're currently getting, though it would mean much more heat and $$$.


----------



## DieDreiBeiden

weleh said:


> View attachment 2528499
> 
> 
> All of these cards, except mine are Reference LC cards.
> LIME's card isn't a ref LC but it has it's bios and is hardware modded (150+ caps on the memory and other stuff he hasn't disclosed).


Mine is an OC Formula with LC Bios, not a LC Ref.


----------



## jonRock1992

ptt1982 said:


> I have actually very similar temps on my Red Devil 6900xt (non-ultimate). Radiator set up is 240mm and 360mm for both cpu and gpu, hydro x series with xd3 pump/reservoir combo.
> 
> Out of interest, do you guys do maintenance on your systems every 6 months? I'm thinking that just dusting the radiators and fans with my compressed air electronics friendly duster, and inspecting the system once in 6 months should be enough, and draining it every 9-12 months should be enough. It's such a hassle and I'm super busy with work, weekends disappear with activities and family...


I literally dust mine out every few days lol. But I don't plan on flushing my loop until another 6 months or so.


----------



## Thanh Nguyen

24.4k-24.5k is the max I can do with my Red Devil Ultimate. Any no-limit BIOS for this card, or is there no way to push it to 25k?








I scored 23 146 in Time Spy

AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com


----------



## J7SC

ptt1982 said:


> I have actually very similar temps on my Red Devil 6900xt (non-ultimate). Radiator set up is 240mm and 360mm for both cpu and gpu, hydro x series with xd3 pump/reservoir combo.
> 
> Out of interest, do you guys do maintenance on your systems every 6 months? I'm thinking that just dusting the radiators and fans with my compressed air electronics friendly duster, and inspecting the system once in 6 months should be enough, and draining it every 9-12 months should be enough. It's such a hassle and I'm super busy with work, weekends disappear with activities and family...


I dust the rads and clean the fans every 9 months or so; also depends if you have pets etc. Regarding the loop internals, I recently opened a complex dual-loop (5x rads, 5x D5s) I completed in late 2018 and everything was still pristine. That said, the clear acrylic reservoirs and w-blocks with sight-access to the micro-fins made _regular checking_ easy, along with noting any changes in delta temps and also changes in the colour and clarity (there weren't any).

Normally, I change liquids more often than 2.5+ years, but it depends if there's any deterioration. I am also paranoid when building up a loop (final flushes w/ deionized water; not touching any internal bits with my fingers w/o gloves, wearing a mask and of course good quality liquids).

Re. topping up a reservoir, as others already mentioned, liquids in a soft-tubing build tend to evaporate a bit. I noticed though that if you let the liquid in the reservoir settle at a slightly lower level, the evaporation slows down significantly, presumably related to changes in pressure.


----------



## OC-NightHawk

I think the 3090 might be matched or beaten in some games by the RX 6900 XT at 4K ultra.

This is a 3090 on YouTube: RTX 3090 : Horizon Zero Dawn 4K - ULTRA - YouTube. That machine is struggling to maintain 60 FPS.

This is my 6900 XT at 1440p with both monitors in use. This is a pixel fill rate of 360.3 GPixels/s and 900.8 GTexels/s.

The same settings at 4K ultra run at 88 FPS.

Is the RTX 3090 overrated, or is this person's card just not set up optimally?
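For what it's worth, the fill-rate figures quoted above are self-consistent: peak pixel fill is clock x ROPs and peak texture fill is clock x TMUs, and with the 6900 XT's published 128 ROPs / 320 TMUs both numbers imply the same ~2815 MHz clock:

```python
# Back out the clock implied by the reported pixel fill rate, then check that
# the same clock reproduces the reported texture fill rate.
rops, tmus = 128, 320              # 6900 XT published configuration
clock_ghz = 360.3 / rops           # ~2.815 GHz implied by 360.3 GPixels/s
print(f"implied clock: {clock_ghz * 1000:.0f} MHz")       # 2815 MHz
print(f"texture fill:  {clock_ghz * tmus:.1f} GTexels/s") # 900.8, matching
```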


----------



## cfranko

jonRock1992 said:


> Your GPU seems to perform right around where mine does. I can't for the life of me get 25k GPU score lol.
> 
> Also, is it normal for fluid level in your loop to go down over time? It seems I'm losing about 10mL every few weeks. So I just keep topping it off. I don't see any leaks anywhere.


Yeah, the liquid in my reservoir also decreases over time; pretty interesting.


----------



## ptt1982

cfranko said:


> Yeah my liquid in the reservoir also decreases over time, pretty interesting.


If it's too fast, then make sure you don't have a leak!


----------



## Neoki

So I've been toying around with my new XFX Zero XTXH. It OCs the core better than my previous Gigabyte WaterForce Xtreme. However, I cannot get the memory above 2090-2100 MHz with fast timings. I was hoping to at least get it to the XTXH 2150 sweet spot, but taking it above 2100 I actually lose score, and at 2120+ I get crashes in Time Spy.

My WaterForce didn't have any issue sitting at 2150 memory with fast timings, but it could not go above 2750 core. The Zero is going up to 2820 core so far, but I'm actually down 500 points on my Time Spy delta from the previous card; just due to the memory difference, I guess?

I've tried upping the min SOC from 850 to 950, and tried some values in between, with no improvement. Tried upping the memory controller from 850 to 1000, and memory voltage from 1350 to 1500. Upping the memory voltage actually seemed to have a negative impact as well.

Anybody run into this on their XTXH? Any ideas?


----------



## kairi_zeroblade

OC-NightHawk said:


> This is a 3090 on youtube: RTX 3090 : Horizon Zero Dawn 4K - ULTRA - YouTube This machine is struggling to maintain 60FPS.


Probably an old driver (since it's from 2020); some things might have changed/improved driver-side for that specific game. I don't think the 3090 would be overrated by that much, especially under a waterblock and overclocked.

Another thing I noticed is the system he is using; it might be showing its age already when paired with an RTX 3090 (he's on an X99 platform with an overclocked 5820K).

Personally, I think the best comparison would be on a similar system/build, so the IPC gains would correlate and the only factor you'd have to observe is the GPU.


----------



## LtMatt

Neoki said:


> So I've been toying around with my new XFX Zero XTXH. It is Core OCing better than my previous Gigabyte Waterforce extreme. However I cannot get the memory above 2090-2100mhz with fast timings. I was hoping to at least get it to the xtxh 2150 sweet spot but taking it above 2100 i actually lose score and 2120+ i get crashes in timespy.
> 
> My waterforce didn't have any issues sitting at 2150 memory fast timing but could not go above 2750 core. The zero is going up to 2820 core so far but I'm actually down 500 points on timespy Delta from the previous card just due to the memory difference i guess?
> 
> I've tried upping the min soc from 850 to 950. Tried some values in between with no improvement either. Tried upping the mem controller from 850 to 1000, and mem voltage from 1350 to 1500. Upping the memory voltage actually seemed to have negative impact as well.
> 
> Anybody ran into this on their xtxh? Any ideas?


Thanks. Looks like it's just another regular XTXH with a block slapped on it, then.


----------



## weleh

No idea why we're even discussing whether memory bandwidth matters or not.

If you understand how GPUs fundamentally work and the interaction between core clocks and memory, you realize that having fast memory is just as important as having a fast core. The memory feeds the core; if you have a bottleneck on the memory side, it doesn't matter how fast your core goes, it will have to wait.

Looking at Time Spy and, more importantly, TSE and FSE (which are higher-resolution benchmarks), the only difference between the LC Ref and any other 6900 XT on the market is the higher-binned memory chips, nothing else... Heck, the PCB on the reference LC is probably just as basic as the reference PCB, besides the die. Having 18 Gbps chips makes an enormous difference when you saturate the pipeline; you can see those cards flying with even lower clocks because you're taking full advantage of all that memory speed.

The same observations can be made between CPU and RAM, so it's not like this is new.
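One concrete piece of that argument: memory traffic scales with pixel count, so the same scene at 4K asks for well over twice what it asks for at 1440p, and a fixed-size cache covers proportionally less of the working set. A quick check of the pixel ratio:

```python
# Pixel counts per frame at common resolutions; memory traffic per frame
# scales with these, whatever the absolute bytes-per-pixel figure is.
def pixels(w, h):
    return w * h

p1440 = pixels(2560, 1440)   # 3,686,400
p4k   = pixels(3840, 2160)   # 8,294,400
print(f"4K / 1440p = {p4k / p1440:.2f}x")   # 2.25x
```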


----------



## ilmazzo

ZealotKi11er said:


> There are other trade secrets to get to 26K. Also your memory is lower than what other XTXH with 16Gbps can achieve. I have seen some get 2170MHz.


Sir, how can they get around the hard BIOS limit of 2150 for VRAM? Asking for a friend...


----------



## L!ME

weleh said:


> All of these cards, except mine are Reference LC cards.
> LIME's card isn't a ref LC but it has it's bios and is hardware modded (150+ caps on the memory and other stuff he hasn't disclosed).


The only ref LCs are from snakeeyes and ZealotKi11er; the rest use a normal XTXH with the LC BIOS.

I did mods on every power rail; nothing special.
With MorePowerTool you can change SOC and FCLK, and this is where the gain came from.


----------



## ZealotKi11er

ilmazzo said:


> Sir, how can they go around hard bios limit of 2150 for vram? asking for a friend....


The XTXH does not have the limit.
Also, because of the timings, even with 18 Gbps chips, anything past 2150 falls apart when it comes to score.


----------



## lestatdk

ZealotKi11er said:


> XTXH does not have the limit.
> Also because of the timings, even with 18Gbps chips, anything after 2150 breaks apart when it comes to score.


But the ones with high GPU scores run high memory frequencies, so that doesn't add up.


----------



## LtMatt

I almost ended up buying a PowerColor Radeon 6900 XT Liquid, same as Zealot, as he linked a German website selling it. Luckily they don't ship to the UK (I guess due to Brexit), so I can't buy one even if I wanted to. Probably just as well.


----------



## Neoki

LtMatt said:


> Thanks, looks like its just another regular XTXH then with a block slapped on it.


Yep, that's basically my overall review of it.

Some pros it does have for me personally: it's a legit EK block, with RGB connections that work with my Asus Dark Hero's RGB software without issue (the Gigabyte WaterForce required RGB Fusion). The XFX EK block also has easy access for taking the block apart in the future for cleaning; the Gigabyte had the screw holes covered by a glued-on plastic mould and half of the bottom aluminium cover. I think maintenance on the Gigabyte version would have been impossible beyond rinsing it and hoping for the best, unless you were willing to damage the aesthetics of the block. And finally, the XFX EK block looks better to me personally.

Overall, I'll be leaving the daily OC at pretty much the same level as the previous card, so no losses there; just my pride, in that I can't bump my TS score any higher. But I guess I can be happy with my previous card's 23249. If anybody has any tips on improving memory stability to get it from 2100 to 2150, I'm all ears.

Version 2 of my build, with a hard satin tubing redo, will be coming up in a few weeks after I'm done trying to tune this.


----------



## EastCoast

What is the OC frequency for the vram to get 18Gbps?


----------



## ZealotKi11er

EastCoast said:


> What is the OC frequency for the vram to get 18Gbps?


+310MHz


----------



## bloot

EastCoast said:


> What is the OC frequency for the vram to get 18Gbps?


2000*8=16000
2250*8=18000

Although these RDNA2 GPUs show 10-12 MHz less on the memory clock, and I don't know why; instead of 2000 MHz it shows 1990-1988 MHz.
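The arithmetic as a tiny sketch (GDDR6's 8n prefetch is why the effective per-pin data rate works out to 8x the clock these tools report):

```python
# Effective GDDR6 data rate (Gbps per pin) from the clock shown in tuning tools.
def effective_gbps(reported_mhz):
    return reported_mhz * 8 / 1000

assert effective_gbps(2000) == 16.0   # stock 6900 XT
assert effective_gbps(2250) == 18.0   # the "18 Gbps" LC cards
print(effective_gbps(2150))           # XTX soft cap -> 17.2 Gbps
```

The 10-12 MHz display offset is basically cosmetic at this scale: a reported 1988 MHz still works out to about 15.9 Gbps.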


----------



## 99belle99

14000 : 1750 MHz (I had this on my 5700 XT and could push it above 2000 MHz for benchmarks)

16000 : 2000 MHz (the 6900 XT only goes to 2150 MHz in Radeon software, but runs lower by averaging)

18000 : 2250 MHz


----------



## Papa Emeritus

I'm thinking of switching from an RTX 3080 to one of these cards. Is there any difference in OC between 2x and 3x 8-pin cards? I know about the XTX and XTXH differences already.


----------



## ryouiki

Just installed a Gigabyte 6900 XT WaterForce... it would not have been my first choice of card, but I managed to get it on "sale" (still horribly overpriced) from a first party, non-scalped.

This was more of a desperation buy than anything; my 5700 XT Liquid Devil is apparently faulty. I had assumed the issue was AMD's notoriously bad drivers, but the new card has been running the same app for 24+ hours that would crash the 5700 XT in minutes or hours.

Hopefully this one turns out to be a good one. Due to the 5700 XT's stability issues I had to swap my main PC to one running a 1080 Ti, so once I finish stability testing the new card and flush the loop again, I can finally switch back to the PC with the 6900 XT.


----------



## Nader83

Hi all,

I wanted to know if my temps are good and safe. I have 2 GPUs (XFX RX 6900 XT EKWB Edition) and a 5950X. The GPU is set to Rage Mode. Under load while benchmarking 3DMark Time Spy, I have seen my GPU temps shoot up to 72C for a split second and back down to 60-65C during the test. Is this normal?

I have 2x 360 mm Corsair radiators and an EKWB D5 pump/reservoir.


thanks


----------



## b0z0

Just received my 6900 XT yesterday; unfortunately I haven't had much time to play around with it.


----------



## ZealotKi11er

Look


weleh said:


> No idea why we're even discussing if memory bandwith matters or not.
> 
> If we understand how fundamentally GPUs work and the interaction between core clocks and memory, you realize that having fast memory is just as important as having a fast core. Core feeds the memory, if you have a bottleneck on the memory side, it doesn't matter how fast your core goes, it will have to wait.
> 
> Looking at Time Spy and more importantly, TSE and FSE (which are higher resolution benchmarks) the only difference between LC Ref and any other 6900XT on the market is the higher binned memory chips, nothing else...Heck, the PCB on the reference LC is probably just as basic as the reference PCB besides the die... Having 18 Gbps chips makes an enormous difference when you saturate the pipeline, you can see the cards flying with even lower clocks because you're taking full advantage of all that memory speed.
> 
> The same observations can be seen between CPU and RAM so it's not like this is new.


"Enormous"? I would agree, if there were no Infinity Cache.


----------



## Gigabytedude24

Should I send this card back or just live with the broken solder joint?


----------



## kairi_zeroblade

Gigabytedude24 said:


> Should I send this card back or just live with the broken solder joint?
> View attachment 2528611
> View attachment 2528611


Good steady hands and soldering skills should fix it quickly; otherwise, I do not know whether, in your region, they would warranty self-induced damage.


----------



## Gigabytedude24

kairi_zeroblade said:


> Good steady hands and solder skills should fix it quickly, else I do not know (on your region) if they would warrant self-induced damage..


It arrived that way from shipping. The place I got it from will take it back if I pay for shipping, but they are sold out.


----------



## kratosatlante

Can you share the 18 Gbps LC BIOS you are using? My Gelid GC-Ultimate arrived; I'll test tonight.


----------



## lestatdk

Gigabytedude24 said:


> Should I send this card back or just live with the broken solder joint?


It can be repaired. A small piece has come off the capacitor, but it should still be possible to get a good mount. If that's the only thing wrong with it and you have a soldering iron or know someone who does, I'd say repair it.
Does it run OK even with the broken capacitor ?


----------



## Gigabytedude24

lestatdk said:


> It can be repaired. A small piece has come off the capacitor, but should still be possible to get a good mount. If that's the only thing wrong with it and you have a soldering iron or know someone who does, I'd say repair it.
> Does it run OK even with the broken capacitor ?


Yes, that's the thing: it runs just fine and games just like my other 6900XT.


----------



## Skinnered

Can someone give me a reference for the best possible core and memory settings in Radeon Settings for the Sapphire 6900 XT Toxic Extreme Edition? Is Trixx necessary to unlock anything on the Toxic?
Any other important things, like disabling power saving with MPT, MSI AB, etc.?

Thanks in advance


----------



## kratosatlante

weleh said:


> No.
> 
> Only AMD Reference Liquid cooled cards (the ones that are sold with prebuilts).
> 
> View attachment 2528377


can you share the link of this bios?


----------



## OC-NightHawk

kratosatlante said:


> Can you share the LC bios 18gbs you are using? gelid gc-ultimate arrive, tonight test


Is this a different BIOS than the one the Gigabyte RX 6900 XT Xtreme Waterforce is using? Would I see better memory gains with this BIOS?


----------



## weleh

kratosatlante said:


> can you share the link of this bios?


You have to flash via Linux or with an external flash tool.
The vBIOS is on TechPowerUp.


----------



## kratosatlante

weleh said:


> You have to flash via Linux or with an external flash tool.
> The vbios is on techpowerup.


AMD VBFlash / ATI ATIFlash (3.20) Download
Version 3.20 says it supports the 6900 XTXH.
Is this the 18 Gbps LC BIOS?
AMD RX 6900 XT VBIOS


----------



## kratosatlante

lowrider_05 said:


> I bought this one: aliexpress.com/item/32223454070.html and
> those for the Mem cooling: aliexpress.com/item/4000670979331.html
> 
> but just Save you the trouble and get an Preblocked one like the Liquid Devil.
> 
> on another note, i was able to flash the XTXH LC AMD Bios on my card and now my mem speed can go up to 2370 Mhz!!!
> wich set me on the NR1 Spot of TSE in my Hardware config and that with a low CPU Score:
> 
> View attachment 2517686


Can you share the LC bios you are using and the procedure?


----------



## ZealotKi11er

kratosatlante said:


> AMD VBFlash / ATI ATIFlash (3.20) Download
> 3.20 say suport 6900 xtxh
> This is the LC bios 18gbps?
> AMD RX 6900 XT VBIOS


Yes that is the LC bios.


----------



## jonRock1992

Just a heads up. I tried flashing the LC vbios in Linux on my 6900 XT Red Devil Ultimate. It flashed successfully, but my system would not post with it. I would only try flashing it if you're familiar with an external flashing tool or you have a dual bios switch. Luckily I had a dual bios switch so all I had to do was flip my switch, boot back into Linux, and then flip the switch back to the incompatible vbios before flashing my backup.
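The backup-first order jonRock1992 describes can be sketched in Python. This assumes TechPowerUp's amdvbflash CLI and its commonly documented -i/-s/-p/-f switches; the adapter index and ROM file names are placeholders, and the script only prints the plan by default:

```python
# Sketch of a backup-first vBIOS flash around amdvbflash (TechPowerUp's AMD VBFlash).
# The -i (list), -s (save), -p (program) and -f (force) switches are the commonly
# documented ones; verify them against your tool version before running for real.
import subprocess

def flash_plan(adapter: int, new_rom: str, backup: str = "backup.rom") -> list[list[str]]:
    """Return the command sequence: always save the original BIOS before programming."""
    return [
        ["amdvbflash", "-i"],                        # list adapters, sanity check
        ["amdvbflash", "-s", str(adapter), backup],  # save the current vBIOS first!
        ["amdvbflash", "-f", "-p", str(adapter), new_rom],  # force-program the new ROM
    ]

def run_plan(plan: list[list[str]]) -> None:
    for cmd in plan:
        subprocess.run(cmd, check=True)  # abort immediately if any step fails

if __name__ == "__main__":
    # Dry run: print the plan instead of executing it.
    for cmd in flash_plan(0, "xtxh_lc.rom"):
        print(" ".join(cmd))
```

Whatever tooling you use, saving the original ROM first is the part that matters; a dual BIOS switch is still the only reliable recovery if the new image doesn't POST.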


----------



## ryouiki

Hmm, well, the 6900XT Waterforce on stock settings posted a respectable 20639 in Time Spy (still on the 3900X CPU). It seems to stay fairly cool, with the GPU temp reading not exceeding 50C, GPU hotspot peaking at 73C, and memory junction at 46C.

Unfortunately all is not well again... another TDR / Crash Defender on the app that was causing my 5700XT to do the same (though it took much longer). Anyone know if there is a way to open a ticket directly to AMD for driver related issues? The "Bug Report Tool" does nothing, I have submitted dozens of reports without anything being done.

_Edit_ Also, I apparently have a special 6900XT with 49 GB of RAM, as that is what AMD ADL was reporting to HWInfo64 for "GPU Memory Usage" at the time of the driver crash.


----------



## kairi_zeroblade

Gigabytedude24 said:


> I got it that way from shipping. The place I got it from will take it back. Pay for shipping but they are sold out.


So it's mishandling from the seller... damn. If you are paying for the return, it kinda sounds like a ripoff, since it won't get replaced any sooner; they'd probably just get away with soldering it themselves..


----------



## LtMatt

ryouiki said:


> Hmm well 6900XT Waterforce on stock settings posted a respectable 20639 in TimeSpy (still on 3900X CPU). Seems to stay fairly cool with GPU temp reading not exceeding 50C, GPU Hotspot peaking at 73C, and Memory Junction 46C
> 
> Unfortunately all is not well again... another TDR / Crash Defender on the app that was causing my 5700XT to do the same (though it took much longer). Anyone know if there is a way to open a ticket directly to AMD for driver related issues? The "Bug Report Tool" does nothing, I have submitted dozens of reports without anything being done.
> 
> _Edit_ Also I apparently have a special 6900XT that has 49gb of RAM as that is what AMD ADL is reporting to HWInfo64 for "GPU Memory Usage" at the time for the driver crash.


I suspect you have a local system issue to be honest.


----------



## ryouiki

LtMatt said:


> I suspect you have a local system issue to be honest.


Pretty much 100% AMD driver issue... I've replaced motherboard, processor, PSU, memory and now GPU (only thing I haven't replaced are NVME drives), same issue always occurs. Swapping 1080Ti in the same system and issue goes away immediately, app does not crash even when running for weeks at a time.

_Edit_ I've also tested with a different monitor.


----------



## weleh

kratosatlante said:


> AMD VBFlash / ATI ATIFlash (3.20) Download
> 3.20 say suport 6900 xtxh
> This is the LC bios 18gbps?
> AMD RX 6900 XT VBIOS


It doesn't work on Windows if you try to flash a different vendor's card.
It's only good for flashing the original BIOS back.

To force flash, you either do it externally or via Linux; that's how I did it back when the XTXH BIOS leaked. It flashed successfully on my XTX card, but the drivers don't let you change anything.


----------



## Justye95

Hi guys, I don't understand why SAM is not available. Is it a driver bug? Does it happen to you too? It is obviously enabled in the BIOS.
Driver version: 21.10.2


----------



## LtMatt

ryouiki said:


> Pretty much 100% AMD driver issue... I've replaced motherboard, processor, PSU, memory and now GPU (only thing I haven't replaced are NVME drives), same issue always occurs. Swapping 1080Ti in the same system and issue goes away immediately, app does not crash even when running for weeks at a time.
> 
> _Edit_ I've also tested with a different monitor.


What app is it that's causing it?


----------



## OC-NightHawk

ryouiki said:


> Pretty much 100% AMD driver issue... I've replaced motherboard, processor, PSU, memory and now GPU (only thing I haven't replaced are NVME drives), same issue always occurs. Swapping 1080Ti in the same system and issue goes away immediately, app does not crash even when running for weeks at a time.
> 
> _Edit_ I've also tested with a different monitor.


Something is going on with your hardware you just haven't diagnosed the issue yet. If it was the driver then everyone would be seeing the issue.


----------



## OC-NightHawk

Justye95 said:


> Hi guys, I don't understand why SAM is not available, is it a driver bug? happens to you too? obviously it is enabled by the bios
> driver version; 21.10.2
> 
> View attachment 2528703
> View attachment 2528704


Did you enable it in the bios of your motherboard?


----------



## ryouiki

OC-NightHawk said:


> Something is going on with your hardware you just haven't diagnosed the issue yet. If it was the driver then everyone would be seeing the issue.


That seems to be the bog-standard response for any AMD driver-related issue here, on Reddit, or anywhere else...

I've spent months trying to diagnose this as a hardware issue with the previous 5700XT, and now it repeats on the 6900XT. At any rate, I don't want to clog up this thread; I opened a support request via AMD's website, but I have little faith that it will be routed to a team that can do anything about it.

Other than that, the 6900XT Waterforce seems like a nice card; it keeps temperatures very consistent... after running various 3DMark benchmarks on it, though, it does like to "sing" quite noticeably with coil whine. 😆


----------



## LtMatt

OC-NightHawk said:


> Something is going on with your hardware you just haven't diagnosed the issue yet. If it was the driver then everyone would be seeing the issue.


That is honestly my take too at the moment.


----------



## EastCoast

ryouiki said:


> Hmm well 6900XT Waterforce on stock settings posted a respectable 20639 in TimeSpy (still on 3900X CPU). Seems to stay fairly cool with GPU temp reading not exceeding 50C, GPU Hotspot peaking at 73C, and Memory Junction 46C
> 
> Unfortunately all is not well again... another TDR / Crash Defender on the app that was causing my 5700XT to do the same (though it took much longer). Anyone know if there is a way to open a ticket directly to AMD for driver related issues? The "Bug Report Tool" does nothing, I have submitted dozens of reports without anything being done.
> 
> _Edit_ Also I apparently have a special 6900XT that has 49gb of RAM as that is what AMD ADL is reporting to HWInfo64 for "GPU Memory Usage" at the time for the driver crash.


What is your PC build specs:
CPU: 3900x
MB:
Ram Make/Model:.......... Speed of ram: .......... Speed you have ram set to:..........
PCIe Devices:
USB Devices:
PSU Make/Model:
UPS Make/Model:
WaterCooled Components Make/Model:

How do you have the video card connected? Are you using:
A. 1 power connectors with 2 or 1 pigtails
B. 2 power connectors w/1 or 2 pigtails
C. 3 power connectors
D. 2 power connectors


----------



## Justye95

OC-NightHawk said:


> Did you enable it in the bios of your motherboard?


Yes!


----------



## EastCoast

Justye95 said:


> Yes!


How did you enable it?


----------



## OC-NightHawk

ryouiki said:


> That seems to be the bog standard response for any AMD driver related issues here or on Reddit or anywhere else...
> 
> I've spent months trying to diagnose this as a hardware issue with previous 5700XT, and now it repeats on 6900XT. At any rate I don't want to clog up this thread, I opened a support request via AMD's website but I have little faith that it will be routed to a team that can do anything about it.
> 
> Other then that 6900XT Waterforce seems like a nice card, it is keeping temperatures very consistent... after running various 3DMark benchmarks on it though it does like to "sing" quite noticeably with coil whine. 😆


The likelihood of you receiving two bad cards in a row is absurdly low.

I suggest a thread for it in the forum here. Essentially though it could be the overclocking settings, the memory, the chipset. It’s hard to say until the diagnostics have begun.

Anyways I do hope that it can get sorted out for you. 😊


----------



## ryouiki

EastCoast said:


> What is your PC build specs:
> CPU: *3900x*
> MB: *Gigabyte X570 Aorus Master*
> Ram Make/Model: *GSkill FlareX 4x8 (Single matched kit) *Speed of ram: *XMP 3200CL14 *Speed you have ram set to: *JEDEC/XMP/Custom Timings (3733Cl16) Tested*
> PCIe Devices: *None*
> USB Devices: *Steelseries Apex Pro, Logitech G600*
> PSU Make/Model: *Seasonic Prime TX-1000 Titanium*
> UPS Make/Model: *CyberPower 1500PFC (Sinewave)*
> WaterCooled Components Make/Model: *Heatkiller IV (Cpu), Heatkiller Tube (200mm Res), Heatkiller D5 Vario (Pump), BlackIce Nemesis 360mm GTX + GTS XFlow (Radiators), EKWB X570 Master Block (Chipset)*
> 
> How do you have the video card connected? Are you using:
> A. 1 power connectors with 2 or 1 pigtails
> B. 2 power connectors w/1 or 2 pigtails
> C. *3 power connectors*
> D. 2 power connectors


As noted above... I've also tested with a G.Skill TridentZ 3600CL15 kit at JEDEC/XMP/custom timings, as well as a 2nd fully matched FlareX kit. I have also tested with only a 2x8 configuration.


----------



## EastCoast

@ryouiki
1. For testing purposes, set your RAM to default. We already know that MB is best served using daisy-chain memory topology. That means returning to default any voltage (etc.) changes you made in the BIOS relating to the DRAM and CPU overclock, not just taking 2 memory sticks out.

2. Unplug your UPS and use a regular power strip (for example).

Try this method for the next 10-24 hours to see if you see any more problems from the "app". BTW, what app are you referring to?


----------



## OC-NightHawk

ryouiki said:


> As noted above... Also been tested with Gskill TridentZ 3600CL15 kit at JEDEC/XMP/Custom Timings as well as a 2nd fully matched FlareX kit. I have also tested with only 2x8 configuration.


The memory speeds you have it set to are considered overclocked. Set the memory to a slower speed and see what happens. Do you have the model of the memory so we can check whether it is single or dual rank?

For instance, are these the modules? F4-3600C15D-16GTZ (G.SKILL International Enterprise Co., Ltd., gskill.com)


----------



## coelacanth

Justye95 said:


> Hi guys, I don't understand why SAM is not available, is it a driver bug? happens to you too? obviously it is enabled by the bios
> driver version; 21.10.2
> 
> View attachment 2528703
> View attachment 2528704


I had the same issue.
In the BIOS:
Enable Above 4G Decoding
Disable CSM


----------



## ryouiki

EastCoast said:


> @ryouiki
> 1. For testing purposes set your ram to default. We already know that MB is best served using daisy chain memory _topology. So that means you need to return to default any voltage, etc changes your made in the bios as well relating to the dram & CPU overclock. Not just taking 2 mem sticks out. _
> 
> 2. Unplug your UPS and use a regular power strip (for example).
> 
> Try this method for the next 10-24 hours to see if you see any more problems from the "app". BTW, what app are you referring to?


1.) I've attempted running board "defaults", aka my reference to JEDEC speeds above. I will do a CMOS reset and re-run, but I can pretty much be 100% sure it will occur again.
2.) I've run this system with/without UPS without any variation in the behavior.

I have more than one DirectX 11 application that can trigger these crashes, but the most hands-off one is just launching this, leaving all settings at default, clicking "loop" and walking away. It will crash eventually... sometimes in 5 minutes, sometimes in 15 hours. The defaults for looped mode are a small 1920x1080 window with a capped framerate that barely even puts stress on the card.

Endwalker Benchmark (The retail version of this game will also crash in the same way)











No idea what the anomaly with memory speed is here; the card is running bone stock with the latest AMD drivers after removing all previous drivers with DDU from Windows 10 Safe Mode.



OC-NightHawk said:


> The memory speeds you have it set for are considered overclocked. Set the memory to a slower speed and see what happens. Do you have the model of the memory so we can check if it is single or dual rank.


See 1.) From above quote.

For the modules, This is what is actively in the system (QVL for this motherboard):

F4-3200C14D-16GFX (I have 2 kits of these, both are 4 module matched kits - newer kit is A2 PCB, older one is A0 PCB).

I don't actively use these, but have used them previously for testing:

F4-3600C15D-16GTZ (I have 2 kits of these, 2 modules per kit, sequential serial numbers).


----------



## OC-NightHawk

Can you share screenshots of GPU-Z and the different tabs of CPU-Z, please?


----------



## EastCoast

ryouiki said:


> That seems to be the bog standard response for any AMD driver related issues here or on Reddit or anywhere else...
> 
> I've spent months trying to diagnose this as a hardware issue with previous 5700XT, and now it repeats on 6900XT. At any rate I don't want to clog up this thread, I opened a support request via AMD's website but I have little faith that it will be routed to a team that can do anything about it.
> 
> Other then that 6900XT Waterforce seems like a nice card, it is keeping temperatures very consistent... after running various 3DMark benchmarks on it though it does like to "sing" quite noticeably with coil whine. 😆





ryouiki said:


> 1.) I've attempting running a board "defaults", aka my reference to JEDEC speeds above. I will do a CMOS reset and re-run but I can pretty much be 100% sure it will occur again.
> 2.) I've run this system with/without UPS without any variation in the behavior.
> 
> I have more then one DirectX 11 application that can trigger these crashes, but the most hands-off one is just launching this, leaving all settings on default and clicking "loop" and then walk away. It will crash eventually.... sometimes 5 minutes, sometimes 15 hours. The defaults for looped mode are a small 1920x1080 window with capped framerate that barely even puts stress on the card.
> 
> Endwalker Benchmark (The retail version of this game will also crash in the same way)
> 
> View attachment 2528739
> 
> 
> 
> No idea what the anomaly with memory speed is here, the card is running bone stock with latest AMD drivers after removing all previous drivers with DDU from Windows 10 Safe Mode.
> 
> 
> 
> See 1.) From above quote.
> 
> For the modules, This is what is actively in the system (QVL for this motherboard):
> 
> F4-3200C14D-16GFX (I have 2 kits of these, both are 4 module matched kits - newer kit is A2 PCB, older one is A0 PCB).
> 
> I don't actively use these, but have used them previously for testing:
> 
> F4-3600C15D-16GTZ (I have 2 kits of these, 2 modules per kit, sequential serial numbers).





ryouiki said:


> Pretty much 100% AMD driver issue... I've replaced motherboard, processor, PSU, memory and now GPU (only thing I haven't replaced are NVME drives), same issue always occurs. Swapping 1080Ti in the same system and issue goes away immediately, app does not crash even when running for weeks at a time.
> 
> _Edit_ I've also tested with a different monitor.


Hmm... after reading a few of your posts to get a general idea of what you are describing, there is something going on with your setup that is inducing this problem. To say that Radeon hardware alone is causing this because an Nvidia card fixes the issue is simply silly, and it only serves to wind up Radeon users here.

Since you've troubleshot other aspects of your build to no avail, the remaining component is your CPU. I've seen something like this before with a CPU that was getting too little voltage from VCCIO, which used to cause black screens on my 5700XT. Although this was on an Intel CPU, the fact that reducing VCCIO/VCCSA had no effect on GeForce cards and prior-gen Radeon cards is very telling. And since you have a tendency to swap between the Radeon and GeForce cards, I suggest that you reset the BIOS settings you tweaked back to default and do a hard reset.

If you have already done so, then it's my opinion that the issue is still the CPU. If you have not, I would suggest that all settings you tweaked for your 3900X be reverted to BIOS defaults via a hard reset, with just 2 sticks of RAM at their XMP profile, using the latest BIOS revision.


----------



## 99belle99

I agree with everyone else that there is an issue with your setup rather than a driver issue. Take the advice and run the RAM at stock, and as suggested above maybe even populate just two RAM sockets; if that is good, then still with two sockets populated, run a test with XMP.


----------



## kairi_zeroblade

anybody tried the 21.10.2 drivers?


----------



## Maulet//*//

Yes, no problems (BF5, Arma 3).

Maybe related or not: I don't like some textures with ray tracing plus HDR in BF5.


----------



## amigafan2003

ryouiki said:


> No idea what the anomaly with memory speed is here, the card is running bone stock with latest AMD drivers *after removing all previous drivers with DDU* from Windows 10 Safe Mode.


Oh dear.

Have you tried a fresh windows install on a new partition?


----------



## ObviousCough

So I swapped my Z590 Unify-X for the B550 Unify-X to do some 5700G things with the 6900XT, and it wouldn't hit the same effective clocks any more. I put the Z590 board back in with an 11900K and it's still not reaching the proper effective clock.




















The 11900K is running stock but the RAM is at 3866C14. Could a maxed-out 5700G or stock 11900K bottleneck a 6900XT to the point the effective clocks don't go as high, or is there something wrong with my card? It's rock solid and nothing seems to be amiss, other than my missing performance.


Edit: the effective clock is generally 50-100 MHz lower than the reported clock, so if it's running 2666 the effective clock will be like 2543 or something.


Edit 2: I can still hit the same effective clocks as before, stable, but I need to move the slider higher for them to be achieved.


----------



## ilmazzo

Justye95 said:


> Hi guys, I don't understand why SAM is not available, is it a driver bug? happens to you too? obviously it is enabled by the bios
> driver version; 21.10.2
> 
> View attachment 2528703
> View attachment 2528704


From GPU-Z, go to the Advanced tab and select "PCIe Smart Access" in the drop-down list... you will see a list of all the prerequisites needed to have SAM enabled, each with a yes/no result, so you can easily debug what's missing...


----------



## ilmazzo

Effective clock is an average figure, as explained by the HWiNFO developer himself.
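A toy illustration of that averaging, with made-up numbers: even brief residency in low clock states pulls the time-weighted "effective clock" below the reported target, consistent with the 50-100 MHz gap described above.

```python
# Toy illustration of why "effective clock" sits below the requested clock:
# it is a time-weighted average that includes the brief moments the core drops
# to lower states, while the plain "GPU clock" reading reports the current
# target. The sample numbers below are made up for illustration.
samples_mhz = [2666] * 97 + [500] * 3  # 97% of the window at target, 3% idle-dipped

effective = sum(samples_mhz) / len(samples_mhz)
print(f"reported target: 2666 MHz, effective (average): {effective:.0f} MHz")  # -> 2601 MHz
```

So a perfectly healthy card still shows an effective clock below the slider value; only a change in the *size* of the gap between two otherwise identical setups is worth chasing.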


----------



## ObviousCough

ilmazzo said:


> Effective clock is a average figure, as explained from the hwinfo developer himself


OK, but that doesn't help me with it not being as good as it was before I took my GPU out of the system to swap motherboards.


----------



## The EX1

Nader83 said:


> Hi all,
> 
> I wanted to know if my temps are good and safe. I have 2 gpu ( xfx rx6900xt ekwb edition) and 5950. The GPU is set to rage mode. Under load while benchmarking 3dmark time spy i have seen my gpu temps shoot up to 72 for split second and back down to 60-65C during the test. Is this something normal?
> 
> I have 2x360 corsair radiators and ekwb D5 pump/reservoir.
> 
> 
> thanks


That is a lot of heat for two thin 360 rads. We need to know what your fluid temps are before seeing if those die temps are bad or not.


----------



## ObviousCough

~more data~

I share a lot of stuff with people on discord so i should have plenty of before material to pull from.



























edit: I took the card completely apart and inspected it to find no visible damage. No software changes happened between motherboard swaps.


----------



## Enzarch

ryouiki said:


> Pretty much 100% AMD driver issue... I've replaced motherboard, processor, PSU, memory and now GPU (only thing I haven't replaced are NVME drives), same issue always occurs. Swapping 1080Ti in the same system and issue goes away immediately, app does not crash even when running for weeks at a time.
> 
> _Edit_ I've also tested with a different monitor.


Do you happen to run AIDA64? There is a conflict between AIDA64's sensor monitoring and AMD drivers that causes erratic crashing.

Did you also use a different display cable?


----------



## ZealotKi11er

26K club is getting bigger. I have to step up my game to get back on top. 

3DMark Time Spy Graphics Score Hall Of Fame (ul.com)


----------



## 99belle99

Yep, I noticed yesterday or the day before (can't remember): I looked up the score and saw someone had beaten your 26000 score, and now it seems someone else has as well.


----------



## ryouiki

EastCoast said:


> Hmm...after reading a few of your post to get a general idea of what you are providing here there is something going on with your setup that is inducing this problem. To say that Radeon hardware causing this alone as using a nvidia card fixes the issue is simply silly. And only servers to wind up Radeon users here.
> 
> After stating that you've troubleshoot other aspects of your build to no avail the remaining component is your CPU. And I've seen something like this before with a cpu that was getting to little voltage from vccio. Which use to causes black screen to my 5700xt. Although this was from an Intel CPU the fact that reducing VCCIO/VCCSA had no effect on geforce cards and prior gen Radeon cards is very telling. And since you have a tendency to swap between the Radeon and Geforece is why I suggest that you reset those settings in the bios you tweaked back to default. And do a hard reset.
> 
> If you have then it's my opinion that the issue is still the cpu. If you have not I would suggest that all settings you tweaked for your 3900x be reverted back to it's bios defaults via hard reset with just 2 stick of ram at it's xmp profile. Using the latest bios revision.


So, I've been a bit busy with work today, but finally got time to sit down with the system again. CMOS reset / load optimized defaults. BIOS changed to enable AMD RAID so I can actually boot Windows off my NVMe drives; the rest was left at default. This leaves memory sitting at 2400CL16 @ 1.2V.

I let the benchmark unlock the framerate so the card gets some actual stress on it; here are the results after ~12 minutes.










Believe me I don't want to "wind up" Radeon users... I have $2300 of Navi hardware sitting here that I'd really like to use (and can't return), but so far it just isn't stable enough for a daily driving PC.

In regards to CPU, I have 2x 3900X, they both exhibit the same problem with the 5700XT installed... I'm not sure I want to tear down the loop at this point just to test again vs the 6900XT. Eventually I plan on replacing them with 5900X when the 3D Vcache versions are out, but it seems highly unlikely I have 2 faulty CPU's.

So not sure, next I will drop back to 2 DIMM and retest, but I don't hold out much hope that it will resolve the problem.


----------



## OC-NightHawk

ryouiki said:


> So been bit busy with work today, but finally got time to sit down with system again. CMOS reset / load optimized defaults. BIOS changed to enable AMD-RAID so I can actually boot windows off my NVME drives, rest left at default. This leaves memory sitting at 2400CL16 @ 1.2V.
> 
> I let the benchmark unlock the framerate so the card gets some actual stress on it, here is the results after ~= 12 minutes.
> 
> View attachment 2528929
> 
> 
> Believe me I don't want to "wind up" Radeon users... I have $2300 of Navi hardware sitting here that I'd really like to use (and can't return), but so far it just isn't stable enough for a daily driving PC.
> 
> In regards to CPU, I have 2x 3900X, they both exhibit the same problem with the 5700XT installed... I'm not sure I want to tear down the loop at this point just to test again vs the 6900XT. Eventually I plan on replacing them with 5900X when the 3D Vcache versions are out, but it seems highly unlikely I have 2 faulty CPU's.
> 
> So not sure, next I will drop back to 2 DIMM and retest, but I don't hold out much hope that it will resolve the problem.


How many motherboards have you tried all of these parts on?


----------



## LtMatt

I see a load of new German 6900 XT users near the top of the Timespy charts. I guess they are buying all the 6900 XT Liquid OEM GPUs with the faster memory.


----------



## LtMatt

ryouiki said:


> So been bit busy with work today, but finally got time to sit down with system again. CMOS reset / load optimized defaults. BIOS changed to enable AMD-RAID so I can actually boot windows off my NVME drives, rest left at default. This leaves memory sitting at 2400CL16 @ 1.2V.
> 
> I let the benchmark unlock the framerate so the card gets some actual stress on it, here is the results after ~= 12 minutes.
> 
> View attachment 2528929
> 
> 
> Believe me I don't want to "wind up" Radeon users... I have $2300 of Navi hardware sitting here that I'd really like to use (and can't return), but so far it just isn't stable enough for a daily driving PC.
> 
> In regards to CPU, I have 2x 3900X, they both exhibit the same problem with the 5700XT installed... I'm not sure I want to tear down the loop at this point just to test again vs the 6900XT. Eventually I plan on replacing them with 5900X when the 3D Vcache versions are out, but it seems highly unlikely I have 2 faulty CPU's.
> 
> So not sure, next I will drop back to 2 DIMM and retest, but I don't hold out much hope that it will resolve the problem.


Shouldn't the FCLK be at 1200 MHz?
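For context, that question follows from the usual 1:1 FCLK:MCLK coupling on Zen 2/3: DDR4 transfers twice per clock, so DDR4-2400 runs a 1200 MHz memory clock. A quick sketch:

```python
# DDR4 transfers twice per clock, so MCLK = (DDR transfer rate) / 2. On Zen 2/3
# the usual 1:1 coupling then puts FCLK at the same frequency as MCLK.
def fclk_for_ddr(ddr_rate_mtps: int) -> int:
    """FCLK in MHz for a 1:1 FCLK:MCLK ratio at the given DDR4 rate (MT/s)."""
    return ddr_rate_mtps // 2

print(fclk_for_ddr(2400))  # JEDEC fallback from the post above -> 1200
print(fclk_for_ddr(3733))  # the 3733 CL16 config mentioned earlier -> 1866
```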


----------



## ZealotKi11er

LtMatt said:


> I see a load of new German 6900 XT users near the top of the Timespy charts. I guess they are buying all the 6900 XT Liquid OEM GPUs with the faster memory.


There is something else going on. The last 200-300 points are more than just the fast memory.
The guy at the top right now has a lower core clock than most other people, but a very fast Intel CPU and RAM.
I tried to break 26K again with the same GPU but a 5950X with an even higher OC on the XTXH, which should have easily beaten my last score, but things like OS/driver/Intel CPU are playing a bigger role than I thought.


----------



## EastCoast

High-end Intel CPUs affect the score more than AMD CPUs do.
There are a lot of inconsistencies you will find with that benchmark the more you align yourself with the hardware they tweaked the score for. IMO.


----------



## Esenel

ZealotKi11er said:


> There is something else going on. The last 200-300 point are more than just the fast memory.
> The guy at the top right now has lower core than most other people but has a very fast Intel CPU and RAM.


I heard this dude got a handpicked GPU to do those scores. :-O


----------



## J7SC

RAM speed / latency also plays a big role in most modern 3DMark tests, and Intel still has a bit of an edge there, though Ryzen 5000 is making some serious inroads.

Even my favorite bench - Port Royal - which is supposed to be as CPU-neutral as possible, still reacts to RAM changes... on my 2950X TR system with 2x 2080 Ti, I picked up around 900 points just by switching the 2950X to local memory mode (a feature only available in a few TRs of that generation). OS build, driver, GPU settings and ambient were otherwise identical.


----------



## gtz

The regular Time Spy CPU score is very RAM-sensitive. It also does not like more than 32 threads. Time Spy Extreme can use 32 threads, but past that, scores regress.

But like others mentioned, Intel has an edge over Ryzen 5000. Just comparing 12 cores: my 7920X with tight B-die timings running 4.9 all-core scores 16000, while my 5900X with my D-die tweaked and all-core at 4.7 got me 15600. That's just how this benchmark runs.

Edit:

But in Cinebench the 5900X beats the snot out of the 7920X: CB15 4000 vs 3200.


----------



## kairi_zeroblade

I was watching the DOTA 2 TI matches and noticed players coming out of the booth after an intense match all sweaty... (sees NVIDIA and Intel stickers on their gaming machines, lol [sarcasm])... it must have been hot in there with the hardware they are using, lol.


----------



## ZealotKi11er

kairi_zeroblade said:


> I was watching DOTA2 TI matches and I noticed players coming out of the booth after an intense match, all sweaty..(sees NVDIA and INTEL sticker on their gaming machines..lol[sarcasm])..it must have been hard and hot in there with the hardware they are using..lol



lol. The power war will not stop. A little birdy tells me that we have not seen anything yet, especially on the GPU side.



gtz said:


> Regular timespy CPU score is very RAM sensitive. It also does not like more than 32 threads. TimeSpy Extreme can do 32 threads but past that scores regress.
> 
> But like others mentioned Intel has a edge over Ryzen 5000. Just comparing 12 cores, my 7920X with tight b die timings running 4.9 all core scores 16000, my 5900X with my d die tweaked and all core at 4.7 got me 15600. Just how this benchmark runs.
> 
> Edit:
> 
> But in Cinebench the 5900X beats the snot out of the 7920X. CB15 4000 vs 3200.



I am scoring 16.5-17K with the 5950X all core 4.8GHz, 3733 CL16.
I still need to tweak the sub-timings and get the CPU colder.


----------



## gtz

ZealotKi11er said:


> lol. The power war will not stop. Little birdy telling me that we have not seen anything especially from GPU side.
> 
> 
> 
> 
> I am scoring 16.5-17K with 5950x all core 4.8ghz, 3733 cl16.
> I still need to tweak the sub-timing and get the cpu colder.


Do you have b die? If only running 2 sticks, disable GDM (having it enabled hurts the Time Spy CPU score a little). If it is b die, shoot for CL14. My highest Time Spy score was 19500 with my 9980XE @ 4.8 with 4000CL14, all timings tuned. Even though my 10980XE could hit higher core clocks I could never match the score due to it having a weaker IMC. I'm hoping once I get my main rig running optimally I should get a CPU score of 18000 with my 7960X. Just waiting on a new board; I also finally slapped a block on the 6900XT (non-XTXH) and am hoping for at least 24K.


----------



## ZealotKi11er

gtz said:


> Do you have b die? If only running 2 sticks, disable GDM (having it enabled hurts the Time Spy CPU score a little). If it is b die, shoot for CL14. My highest Time Spy score was 19500 with my 9980XE @ 4.8 with 4000CL14, all timings tuned. Even though my 10980XE could hit higher core clocks I could never match the score due to it having a weaker IMC. I'm hoping once I get my main rig running optimally I should get a CPU score of 18000 with my 7960X. Just waiting on a new board; I also finally slapped a block on the 6900XT (non-XTXH) and am hoping for at least 24K.


They are 2x16GB 4000 CL19 1.35v. I ran them with 9900K at 4000 CL15 1.45v
What is GDM? Gear mode?


----------



## gtz

ZealotKi11er said:


> They are 2x16GB 4000 CL19 1.35v. I ran them with 9900K at 4000 CL15 1.45v
> What is GDM?


GDM (Gear Down Mode), from my understanding (I am in no way a Ryzen Guru) it makes it easier to run higher frequencies and helps with overclocking 4 dimms. But it rounds every timing to even numbers. So if you have this enabled and for instance running CL15 you are actually running CL16. Same goes for 1T, with this enabled it will run 2T. Windows will still report the odd timings but in reality you are running higher than reported.
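The rounding gtz describes can be sketched in a few lines (my own illustration of the behaviour, not anything from AMD documentation): with GDM enabled, odd primary timings get rounded up to the next even value and a 1T command rate effectively behaves as 2T, so the values the OS reports understate what the DRAM actually runs.

```python
# Sketch of Gear Down Mode's effect on reported vs. effective timings.
# Illustration only, based on the behaviour described in the post above.

def gdm_effective(timing: int) -> int:
    """GDM rounds odd timing values up to the next even number."""
    return timing if timing % 2 == 0 else timing + 1

def effective_profile(cl, trcd, trp, cr, gdm_enabled):
    """Return the timings the DRAM actually runs for a given setting."""
    if not gdm_enabled:
        return (cl, trcd, trp, cr)
    # Primary timings round up to even; 1T command rate behaves like 2T.
    return (gdm_effective(cl), gdm_effective(trcd), gdm_effective(trp), 2)

# "If you have this enabled and are running CL15 you are actually running CL16":
print(effective_profile(15, 15, 15, 1, gdm_enabled=True))   # (16, 16, 16, 2)
print(effective_profile(15, 15, 15, 1, gdm_enabled=False))  # (15, 15, 15, 1)
```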


----------



## ZealotKi11er

Got up to 18.4k now. All down to memory. Did not think it would make this much difference.


----------



## Skinnered

deadfelllow said:


> Hey guys,
> 
> Today i just got my GPU which is *Sapphire 6900 XT Toxic Gaming OC Extreme Edition 11308-08-20G* .( XTXH)
> 
> With the help of @cfranko (mpt and radeon tuning)


Can you share those settings?
Got the same card now


----------



## deadfelllow

Skinnered said:


> Can you share those settings?
> Got the same card now








Download links (dosya.co): "25125 timespy" and "timespy record".

1 is for MPT, 1 is for the Radeon settings.


----------



## Skinnered

^ Thanks!


----------



## CS9K

gtz said:


> GDM (Gear Down Mode), from my understanding (I am in no way a Ryzen Guru) it makes it easier to run higher frequencies and helps with overclocking 4 dimms. But it rounds every timing to even numbers. So if you have this enabled and for instance running CL15 you are actually running CL16. Same goes for 1T, with this enabled it will run 2T. Windows will still report the odd timings but in reality you are running higher than reported.


Gear Down Mode's TL;DR is that it tries to get a little more performance out of memory than if you were running 2T tCR (Command Rate). I've seen GDM referred to as "1.5T". It also has drawbacks that only serve to add another variable to memory stability testing, in my opinion.

I find it easier to stability test in general if one turns GDM right off and leaves it that way permanently, and runs 1T at 3600 or below, or 2T at speeds above 3600.

If your system has the IMC and voltage headroom to do so, you can do like I do and run the 3200C14 2x16GB Flare X kit in my signature at 1T _GDM off_, 3800 15-15-15-32 @ 1.47V
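The standard rule-of-thumb arithmetic shows why a jump like 3200C14 to 3800C15 is a net win despite the higher CAS number: first-word latency in nanoseconds is CL cycles at the memory clock (half the data rate). This is generic DDR math, not specific to any kit mentioned here.

```python
# First-word CAS latency in nanoseconds.
# data_rate_mts is the DDR transfer rate (e.g. 3200 MT/s); the actual
# memory clock is half that, so one cycle lasts 2000 / data_rate ns.

def cas_latency_ns(data_rate_mts: float, cl: int) -> float:
    return cl * 2000.0 / data_rate_mts

print(cas_latency_ns(3200, 14))            # 8.75 ns (stock 3200C14)
print(round(cas_latency_ns(3800, 15), 2))  # 7.89 ns (tuned 3800C15)
```

So the tuned 3800C15 profile is both higher bandwidth and lower absolute latency than the kit's rated 3200C14.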


----------



## 99belle99

CS9K said:


> If your system has the IMC and voltage headroom to do so, you can do like I do and run the 3200C14 2x16GB Flare X kit in my signature at 1T _GDM off_, 3800 15-15-15-32 @ 1.47V


I have for the past while been running 3800MHz, 1900MHz IF, 16-15-15-32 with GDM enabled and it has been flawless. Then yesterday, after this thread and the GDM discussion, I said I would turn it off and lower to 15-15-15-32, and today while watching a video on Facebook my system crashed. All I changed was 16 to 15 and GDM off; everything else was the same.


----------



## Neoki

99belle99 said:


> I have for the past while been running 3800MHz, 1900MHz IF, 16-15-15-32 with GDM enabled and it has been flawless. Then yesterday, after this thread and the GDM discussion, I said I would turn it off and lower to 15-15-15-32, and today while watching a video on Facebook my system crashed. All I changed was 16 to 15 and GDM off; everything else was the same.


Yah, I need GDM to run my 4x8GB sticks of G.Skill b die 3600MHz at 3800+ with CL14 timings. Currently testing 3866MHz/1933 fclk with my same 14-15-15-32 timings. If I flip GDM off I can't boot; if I flip to 2T I lose performance. So GDM helps me.

Running a 5950x paired with an Asus Dark Hero.


----------



## weleh

CPU scores have very little to do with GPU scores on Time Spy.
Unless you're using a super old or slow CPU, you're not going to get bottlenecked.

Just look at my own score, 11K CPU score (bugged inside Windows since I tried a manual OC and failed hard) and my score is still super high (25.7K).

There's a bunch of other tricks people have no idea about, software wise, to "cheat" scores and get some points back. Those guys spend the whole day messing with the cards and testing a bunch of crap. I mean up until now, no one was flashing the LC ref bios and now that it was found that it works, everyone is doing it to get a bunch of points back.

Also, some of those guys have hard mods too on their cards, or are binning cards, using super cold intake, using different windows tweaks and tricks, etc.

I personally have no system anymore so I cannot test or improve but you guys should go to hardwareluxx and just use the translate function to see what they are on about. 

I have a couple of things I could share that I haven't, but I don't want to make them public just yet; seeing as I don't have a card anymore, I might as well just do it privately.


----------



## ZealotKi11er

weleh said:


> CPU scores have very little to do with GPU scores on Time Spy.
> Unless you're using a super old or slow CPU, you're not going to get bottlenecked.
> 
> Just look at my own score, 11K CPU score (bugged inside Windows since I tried a manual OC and failed hard) and my score is still super high (25.7K).
> 
> There's a bunch of other tricks people have no idea about, software wise, to "cheat" scores and get some points back. Those guys spend the whole day messing with the cards and testing a bunch of crap. I mean up until now, no one was flashing the LC ref bios and now that it was found that it works, everyone is doing it to get a bunch of points back.
> 
> Also, some of those guys have hard mods too on their cards, or are binning cards, using super cold intake, using different windows tweaks and tricks, etc.
> 
> I personally have no system anymore so I cannot test or improve but you guys should go to hardwareluxx and just use the translate function to see what they are on about.
> 
> I have a couple of stuff I could share that I haven't but I don't want to make it public just yet but seeing I don't have a card anymore I might as well just do it privately.



For sure. From the looks of it the top cards right now are not even that fast in terms of clk speed. If I try to run with a similar core clk as the top score I am about 400-500 points behind, so there is room for improvement. 

Actually, I was able to run a set 2870MHz all day yesterday, which was 10MHz higher, but still could not beat my previous score.


----------



## CS9K

Neoki said:


> Yah, I need GDM to run my 4x8GB sticks of G.Skill b die 3600MHz at 3800+ with CL14 timings. Currently testing 3866MHz/1933 fclk with my same 14-15-15-32 timings. If I flip GDM off I can't boot; if I flip to 2T I lose performance. So GDM helps me.
> 
> Running a 5950x paired with an Asus Dark Hero.


cc @99belle99 GDM has its place. Some kits like it, but enough tuning can usually do better without it. While it _says_ "1T" with GDM enabled, it isn't true 1T, but if it is stable, GDM enabled _does_ improve performance vs 2T if "GDM on" lines up with how your kit does tCL at the speed and voltage you're running it at.



99belle99 said:


> I have for the past while been running 3800MHz, 1900MHz IF, 16-15-15-32 with GDM enabled and it has been flawless. Then yesterday, after this thread and the GDM discussion, I said I would turn it off and lower to 15-15-15-32, and today while watching a video on Facebook my system crashed. All I changed was 16 to 15 and GDM off; everything else was the same.


The first step after changing _any_ memory setting is to stability test it. I don't "daily drive" new settings to gauge stability, I stability test new settings and daily-drive them once they're stable.

But that's just me. When I am not in the mood to mess with settings and test and instead just want to sit down and game, then the last thing I want to do is chase down the source of instability.


----------



## ZealotKi11er

Looks like I found another 100-200 points from: hardwareluxx


----------



## EastCoast

Can someone post it here? What does the translation say?


----------



## ZealotKi11er

New score: I scored 24 377 in Time Spy
To get this I added the FT2 with MPT.


----------



## OC-NightHawk

ZealotKi11er said:


> New score: I scored 24 377 in Time Spy
> To get this I added the FT2 with MPT.


FT2?


----------



## ZealotKi11er

OC-NightHawk said:


> FT2?


Fast Timing 2.


----------



## OC-NightHawk

ZealotKi11er said:


> Fast Timing 2.


what speed can it handle with the unlocked faster timing?


----------



## ZealotKi11er

OC-NightHawk said:


> what speed can it handle with the unlocked faster timing?


same speed.


----------



## weleh

Fast timing 2 is a known trick. 

Even if you don't select it on the driver it will still boost scores up a bit. At least it did on my XTX.


----------



## 99belle99

Where is Fast Timings 2?


----------



## ZealotKi11er

Replace 1 with 2 in MPT.


----------



## ZealotKi11er

back at the top. all thanks to Weleh.


----------



## 99belle99

With an AMD CPU. Who was saying a few posts back that you needed an Intel CPU to get good scores?


----------



## ZealotKi11er

99belle99 said:


> With an AMD CPU. Who was saying a few posts back that you needed an Intel CPU to get good scores?


I was. But the 5950X at 4.95GHz/4.85GHz is not that bad.


----------



## LtMatt

ZealotKi11er said:


> Looks like I found another 100-200 points from: hardwareluxx


Can you share more details?


----------



## OC-NightHawk

ZealotKi11er said:


> replace 1 with 2. in mpt.


As soon as I applied the profile with the radeon software using fast timing level 2 I got color dots all over the screen and the machine locked up. :/


----------



## CS9K

OC-NightHawk said:


> As soon as I applied the profile with the radeon software using fast timing level 2 I got color dots all over the screen and the machine locked up. :/


This has been my experience with Fast Timing Level 2, even with increased memory voltage.


----------



## ilmazzo

and then the silicon kicks in


----------



## ZealotKi11er

OC-NightHawk said:


> As soon as I applied the profile with the radeon software using fast timing level 2 I got color dots all over the screen and the machine locked up. :/


Could be that it only works if you use the LC bios with fast memory, since the timings are much worse for 18gbps so FT2 benefits; if you try with 16gbps, which already has good timings, it probably does not work.


----------



## weleh

Yea FT2 works on Ref LC bios because memory timings are more relaxed.

It won't work with normal XTXH bios.


----------



## ZealotKi11er

Yeah. With FT2 Ref LC actually pulls away. Before that I was not that impressed with the score. 
Now it would be interesting to go to 26K without 18gbps bios.


----------



## lawson67

ZealotKi11er said:


> Could be that it only works if you use the LC bios with fast memory, since the timings are much worse for 18gbps so FT2 benefits; if you try with 16gbps, which already has good timings, it probably does not work.


What card is it that you have? The Radeon reference LC card? Your score is amazing.


----------



## ZealotKi11er

lawson67 said:


> What card is it that you have? The Radeon reference LC card? Your score is amazing.


Yes, mine is the actual reference LC, which is basically the same PCB as the normal 6900 XT with XTXH + 18gbps + AIO.


----------



## lawson67

ZealotKi11er said:


> Yes, mine is the actual reference LC, which is basically the same PCB as the normal 6900 XT with XTXH + 18gbps + AIO.


Is it standard or have you flashed the bios or done any hardmods?


----------



## OC-NightHawk

ZealotKi11er said:


> Could be that it only works if you use the LC bios with fast memory, since the timings are much worse for 18gbps so FT2 benefits; if you try with 16gbps, which already has good timings, it probably does not work.


Mine is a Gigabyte RX 6900 XT Xtreme Waterforce. I guess the memory on it just isn’t as good because if I go past 2150 the performance degrades.


----------



## J7SC

Re. VRAM testing, not sure if the ancient ATITool has been mentioned here...some versions even allowed for adjusting VRAM timings back in the day but these days, 'scan for artifacts' is really the only thing which works with modern cards such as BigNavi and Ampere.

I still find it useful as it shows you artifacts and 'soft crashes' as you push VRAM speed up via Radeon or MSI AB software etc., and these can easily enough be recovered from while on the desktop.


----------



## weleh

OC-NightHawk said:


> Mine is a Gigabyte RX 6900 XT Xtreme Waterforce. I guess the memory on it just isn’t as good because if I go past 2150 the performance degrades.


People are flashing RefLC bios (18gbps memory bios) on XTXH cards with very good gains.


----------



## ZealotKi11er

weleh said:


> People are flashing RefLC bios (18gbps memory bios) on XTXH cards with very good gains.


From what I have seen the limit with memory is from the memory controller and not the memory itself, because my real 18gbps chips don't seem to do any better than XTXH 16gbps chips.


----------



## cfranko

ZealotKi11er said:


> From what I have seen the limit with memory is from the memory controller and not the memory itself because my real 18gbps chips dont seem to do any better than xtxh 16gbps chips.


Does your card have a higher mining hashrate than regular 16 gbps cards?


----------



## ZealotKi11er

cfranko said:


> Does your card have a higher mining hashrate than regular 16 gbps cards?


When I tested it without any modification it was much worse. I was getting 50MH/s.


----------



## cfranko

ZealotKi11er said:


> When I tested it without any modification it was much worse. I was getting 50MH/s.


Why is it that low? That makes no sense.


----------



## lestatdk

That's very low. I can't remember the exact numbers but I got my 6800 to do 63 MH or so at 120W-ish.


----------



## ZealotKi11er

cfranko said:


> Why is it that low? That makes no sense.


Mining likes timings more than raw memory speed. To get to 18+gbps AMD runs very relaxed timings. Maybe I'll try with FT2.


----------



## OC-NightHawk

weleh said:


> People are flashing RefLC bios (18gbps memory bios) on XTXH cards with very good gains.


I have never done this before. Is this difficult?


----------



## OC-NightHawk

cfranko said:


> Does your card have a higher mining hashrate than regular 16 gbps cards?


Mine is .45MH slower but it is 30 watts more efficient.


----------



## cfranko

ZealotKi11er said:


> Mining likes timings more than raw memory speed. To get to 18+gbps AMD runs very relaxed timings. Maybe I'll try with FT2.


What’s the point of 18 gbps if the timings are terrible


----------



## OC-NightHawk

lestatdk said:


> That's very low. I can't remember the exact numbers but I got my 6800 to do 63 MH or so at 120W-ish.


My Power Color RX 6800XT Red Devil does 64MH and my Gigabyte Rx 6900 XT Xtreme Waterforce does 63.55MH.


----------



## OC-NightHawk

cfranko said:


> What’s the point of 18 gbps if the timings are terrible


Out of the box, it seems to me that the 18Gbps allows nearly the same mining output with less power consumed, allowing something like six cards to mine at the power cost of five.
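A rough sanity check of that "six cards at the cost of five" idea, using the 63.55 MH/s and ~30 W figures quoted in this thread. The 150 W per-card baseline is an assumed example number for a tuned card, not a measured value.

```python
# Rough efficiency check for the "six cards at the power cost of five" idea.
# hashrate and the 30 W saving come from figures quoted in this thread;
# baseline_watts is an assumed example draw for a tuned 16gbps card.

baseline_watts = 150.0   # assumed per-card draw, 16gbps card
saving_watts = 30.0      # quoted saving on the 18gbps card
hashrate = 63.55         # MH/s quoted for the Waterforce

eff_16g = hashrate / baseline_watts                   # MH/s per watt
eff_18g = hashrate / (baseline_watts - saving_watts)  # MH/s per watt

cards = 6
power_16g = cards * baseline_watts                    # 900 W
power_18g = cards * (baseline_watts - saving_watts)   # 720 W, i.e. 4.8 baseline cards

print(f"{eff_16g:.3f} vs {eff_18g:.3f} MH/s per watt")
print(f"six 18gbps cards draw {power_18g:.0f} W vs {power_16g:.0f} W")
```

Under these assumed numbers, six of the lower-power cards draw about as much as 4.8 baseline cards, which matches the "six for the cost of five" framing.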


----------



## ZealotKi11er

cfranko said:


> What’s the point of 18 gbps if the timings are terrible


It's not made for mining?


----------



## cfranko

ZealotKi11er said:


> Its not made for mining?


But wouldn’t terrible timings also affect gaming?


----------



## EastCoast

cfranko said:


> But wouldn’t terrible timings also affect gaming?


No, not for how it's designed in this use case. 

Back in the day you could tighten timings and see a pretty significant boost in performance. Nowadays you can only choose between default and fast timing. With such a feature I have to say no.


----------



## ryouiki

OC-NightHawk said:


> How many motherboards have you tried all of these parts on?


Two different boards (both X570 Master, one is Rev 1.0 other is Rev 1.2).

In the interim I pulled the extra 2 DIMMS... app still crashed but this time it didn't cause a full TDR. Is that an improvement? Hard to say. It is really hard to test this since it can take many hours to trigger the problem, so next time it might be a TDR again.

I'm not convinced this is a hardware problem, rather maybe AMD's DirectX 11 implementation not being able to handle certain scenarios... I've heard from someone else that running the benchmark/retail game under Linux + DXVK is 100% stable, but I have not had time to deal with setting all of that up.

I've never encountered these crashes on 3DMark or DirectX 12 / Vulkan titles.


----------



## Skinnered

Great memory speed, AMD LC edition I guess?
I can't reach more than 2130MHz with my Sapphire Toxic EE.


----------



## Gblgbd

Quick question, I have an aorus master 6900 (the air version with the ridiculous screen) xtx.

I was managing 23xxx @ 380W max draw (I can't remember exactly, but I did have the highest score for a 3800X 6900XT combo for a minute). I have noticed everyone seems to run their voltage slider maxed, but I have had my best results around 1085mv in Adrenalin; what am I missing?

I have never seen the card above 99 junction. My previous best run was 2544 min 2654 max, but I lost those settings; currently I pass at 2544 - 2720 with a slightly lower score. If I push the clocks it actually flattens out and gets a better average, but the driver crashes towards the end of TS graphics test 2... Sorry, not so quick question.


----------



## Gblgbd

I scored 19 651 in Time Spy


AMD Ryzen 7 3800X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## Neoki

Went through the hassle of reflashing the bios today. I'm greatly limited by ambient temps atm, looks like I'll need to do a repaste job as well in the near future.

But I'm happy to gain 350pts regardless from my previous best with the card. Seems like the memory controller on my XFX Zero isn't so hot; even on the normal bios it didn't like above 2100MHz.









I scored 23 543 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## lawson67

I had a little play this morning with my Red Devil, not really tried that hard to see what i can get out of it since i had it and i do think there's more in it however i would really like to flash the LC bios


----------



## robiatti

Neoki said:


> Went through the hassle of reflashing the bios today. I'm greatly limited by ambient temps atm, looks like I'll need to do a repaste job as well in the near future.
> 
> But I'm happy to gain 350pts regardless from my previous best with the card. Seems like the memory controller on my XFX Zero isn't so hot, even on the normal bios it didn't like above 2100mhz.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 23 543 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Neoki,

Did you flash the 6900 XT LC bios? I am curious as to where to download it from. I got my Zero yesterday and so far I am liking it a lot better than my Red Devil Ultimate with the Alphacool block. Finally no audible coil whine.


----------



## lawson67

robiatti said:


> Neoki,
> 
> Did you flash the 6900 XT LC bios? I am curious as where to download it from. I got my Zero yesterday and so far I am liking it alot better than my Red Devil ultimate with the alphacool block. Finally no audible coil whine.


You can get the LC Bios here


----------



## robiatti

lawson67 said:


> You can get the LC Bios here


Lawson, awesome, thank you. Still have to flash via Linux?


----------



## lawson67

robiatti said:


> Lawson, Awesome Thank you. Still have to flash via Linux?


I think so; others on here who know more about that than me will advise you, I'm sure. I'm thinking of doing it myself. Just make sure you have dual BIOS on your card before doing it, so if it goes wrong you will be able to flick to the other BIOS and sort it out.


----------



## cfranko

lawson67 said:


> I think so others on here who know more about that than me will advise you I'm sure, thinking of doing it myself, just make sure you have 2 bios settings on your card before doing it so if it goes wrong you will be able to flick to the other bios and sort it out


Does the reference LC bios only work on XTXH cards? If that is the case, does it make the regular XTXH cards also run 18gbps memory?


----------



## lawson67

cfranko said:


> Does the reference LC bios only work on XTXH cards? If that is the case, does it make the regular XTXH cards also run 18gbps memory?


As far as I know you must have an XTXH die for it to work, as the AMD LC has the XTXH die. I don't think it will make regular cards' RAM run at 18gbps, but I believe some will be able to achieve it due to the better timings the LC bios gives you. Many over at Hardwareluxx have flashed the LC bios onto their (non-LC) XTXH cards and are hitting 26k in TS as a result.


----------



## ZealotKi11er

You just have to edit the limit with MPT because XTXH bios has higher limits for core and voltage.


----------



## lawson67

ZealotKi11er said:


> You just have to edit the limit with MPT because XTXH bios has higher limits for core and voltage.


Can you give us the limits please


----------



## ZealotKi11er

lawson67 said:


> Can you give us the limits please


The limits you have to copy from a normal XTX bios. If you don't, the GPU will be in limp mode.

Now that I think about it, with a normal XTX you can't flash an XTXH bios because of the memory limits, so it's pointless. Anyone that is using the XTXH LC bios has an XTXH GPU.


----------



## lawson67

ZealotKi11er said:


> The limits you have to copy from a normal XTX bios. If you don't, the GPU will be in limp mode.
> 
> Now that I think about it, with a normal XTX you can't flash an XTXH bios because of the memory limits, so it's pointless. Anyone that is using the XTXH LC bios has an XTXH GPU.


So even though the LC card is using an XTXH die you have to use limits from an XTX die? Is that what you're saying? So I can't use up to 1.2v from my XTXH Red Devil for the LC bios?


----------



## ZealotKi11er

lawson67 said:


> So even though the LC card is using an XTXH die you have to use limits from an XTX die? Is that what you're saying? So I can't use up to 1.2v from my XTXH Red Devil for the LC bios?


If you have XTXH you can flash XTXH LC. People with XTX can't flash the XTXH or XTXH LC bios.


----------



## Neoki

lawson67 said:


> I think so others on here who know more about that than me will advise you I'm sure, thinking of doing it myself, just make sure you have 2 bios settings on your card before doing it so if it goes wrong you will be able to flick to the other bios and sort it out





robiatti said:


> Lawson, Awesome Thank you. Still have to flash via Linux?


Yes, you need to force flash via Linux.

Also, I agree with lawson on only doing it if you have a dual bios switch, for others thinking about it. If your flash fails and you can't boot, you're stuck in a bad situation. With dual bios you just flip the switch to boot, then flip it back once in and reflash again, nvflash-style on Nvidia cards.


----------



## DuRoc

Are all new 6900's the XTX? I have an AMD reference 6900 xt that is being delivered soon. I'm not well read on the 6900's as I am on Nvidia now and was looking for a 3080 when I scored this off of the AMD site last week.


----------



## 99belle99

DuRoc said:


> Are all new 6900's the XTX? I have an AMD reference 6900 xt that is being delivered soon. I'm not well read on the 6900's as I am on Nvidia now and was looking for a 3080 when I scored this off of the AMD site last week.


Yes, the reference and standard 6900 XTs are XTX. The binned versions are XTXH and usually more expensive.


----------



## DuRoc

99belle99 said:


> Yes, the reference and standard 6900 XTs are XTX. The binned versions are XTXH and usually more expensive.


 I have some reading to do. I guess I'll be limited somewhat with the reference cooler? I'll be staying on air.


----------



## 99belle99

DuRoc said:


> I have some reading to do. I guess I'll be limited somewhat with the reference cooler? I'll be staying on air.


I have the exact same card as you, a reference, also still on the stock cooler, and I also got it from AMD's website, except I got mine around February this year.

This is a score I got on Time Spy: I scored 20 122 in Time Spy


----------



## ZealotKi11er

DuRoc said:


> Are all new 6900's the XTX? I have an AMD reference 6900 xt that is being delivered soon. I'm not well read on the 6900's as I am on Nvidia now and was looking for a 3080 when I scored this off of the AMD site last week.


The 6900 XT comes as both XTX and XTXH; the H stands for high, as in the upper bin of the parts.


----------



## OC-NightHawk

Neoki said:


> Went through the hassle of reflashing the bios today. I'm greatly limited by ambient temps atm, looks like I'll need to do a repaste job as well in the near future.
> 
> But I'm happy to gain 350pts regardless from my previous best with the card. Seems like the memory controller on my XFX Zero isn't so hot, even on the normal bios it didn't like above 2100mhz.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 23 543 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


So even with a so-so memory controller you gained memory speed? Were you able to go from not using fast timing 2 to being able to use it?

I'm not feeling confident about my memory controller, but this gives me some hope of getting just a bit more out of the memory.


----------



## Neoki

OC-NightHawk said:


> So even with a so so memory controller you gained memory speed? Was able to go from not using the fast timing 2 to being able to use it?
> 
> I’m not feeling confident about my memory controller but this gives me some hope of getting just a bit more out of the memory.


Yeah, but I think it's because the memory timings are looser on the LC bios, so it gives more wiggle room. However, I'm not able to achieve anything higher than 2330MHz FT1 stable; FT2 is hit and miss (more often miss). I've adjusted fclk, soc voltages, and even drm / mem voltages to try and compensate. No dice. Also my core clocks are limited to 2785MHz; anything higher and GT2 crashes.

My ambient temps are pretty bad right now due to the seasonal change and my office being upstairs, so with a high wattage load I'm running 54-56c edge and 90c hotspot. I'm really curious about the leaderboard showing quite a few folks above me with 30-40c temps. Either they have insane PCB die cooling, are using chillers, or are sticking their machines outside in the cold (if they have cooler weather). I'm not lugging my 80lb rig downstairs and planting it outside just to chase a few hundred more points, so I think I'm done.

This is my final run after I messed around some more with settings.

EDIT: I remember you having a Waterforce. I'd be really wary about flashing that, since you don't have a dual bios on it. Unless you have a CPU laying around with iGPU graphics to load into Linux and reflash if needed.









I scored 23 545 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## jonRock1992

lawson67 said:


> I think so others on here who know more about that than me will advise you I'm sure, thinking of doing it myself, just make sure you have 2 bios settings on your card before doing it so if it goes wrong you will be able to flick to the other bios and sort it out


Just a heads up: I tried it a couple of months ago, and my 6900 XT Red Devil Ultimate wouldn't post with it. Unless the bios you linked is different than the LC bios that I tried. Let me know if you get it working.


----------



## OC-NightHawk

I was watching a video on the PCBs for all the 6900 XT boards and the guy said the input filter on my card is insanely bad and it counteracts the very nice set of large capacitors. He also mentioned that the board has designated areas for additional capacitors to be added aftermarket. Would adding more capacitors overcome the overzealous input filtering? If so, are there shops that would perform this kind of work?


----------



## EastCoast

OC-NightHawk said:


> I was watching a video on the pcbs for all the 6900 XT boards and the guy said the input filter on my card is insanely bad and it counteracts the very nice set of large capacitors. He also went on to mention that the board has desginated areas for additional capacitors to be added on aftermarket. Would it overcome the over zealous input filtering if I have more capacitors added on? If so are there shops that would perform this kind of work?


what make/model?


----------



## lawson67

jonRock1992 said:


> Just a heads up. I tried it a couple months ago, and my 6900 XT Red Devil Ultimate wouldn't post with it. Unless the bios you linked is different than the LC bios that I tried. Let me know if you get it working.


It might have been OK and you just thought it was not posting; from what I have read most get it to work, but you can't see anything until Windows loads, something like that anyhow. I am just looking for a guide on how to load it on my card, as my core is very fast; I am just let down by my RAM, which doesn't want to go much over 2116MHz without a hit. I was hoping the LC bios might help some. People with much lower core speeds than me are getting much better scores, as their RAM scales better than mine. The only way I think I can add voltage to my card's RAM is via SOC voltage? I don't know of any other way.

I scored 21 807 in Time Spy


----------



## jonRock1992

lawson67 said:


> It might of been ok you just thought it was not posting, from what i have read most get it to work but you cant see anything until windows loads, something like that anyhow,i am just looking for a guide on how to load it on my card as my core is very fast i am just let down by my ram which don't wanna go much over 2016mhz without a hit, was hoping that might help some


It definitely wasn't posting. My motherboard was displaying an error code related to memory. Let me know how it goes for you. I really hope it works for you.


----------



## ZealotKi11er

lawson67 said:


> It might of been ok you just thought it was not posting, from what i have read most get it to work but you cant see anything until windows loads, something like that anyhow,i am just looking for a guide on how to load it on my card as my core is very fast i am just let down by my ram which don't wanna go much over 2116mhz without a hit, was hoping that LC Bios might help some, people with much lower core speeds then me are getting much better scores as there ram scales better than mine, only way i think i can add voltage to my cards ram is via SOC voltage? i dont know of any other way?
> 
> I scored 21 807 in Time Spy


How fast is your core? Set and actual.


----------



## lawson67

ZealotKi11er said:


> How fast is you core? Set and actual.


I set a max of 2870 MHz and think it might go higher; however, 3DMark shows "Clock frequency: 2,836 MHz / Average clock frequency: 2,777 MHz". I'm writing Linux to a USB drive at the moment, otherwise I'd do a run and see what HWiNFO64 says.


----------



## ZealotKi11er

lawson67 said:


> I set max 2870mhz think it might go higher, however 3D mark shows "Clock frequency2,836 MHz Average clock frequency2,777 MHz", writing Linux on a USB drive atm otherwise i would do a run and see what HWINFO64 says


That's about what I get on my record run as well.


----------



## lawson67

ZealotKi11er said:


> That is how much I have also for my record.


Yes, it's a good core speed, so if I can get the RAM faster I can get much higher scores. I have Linux on a USB drive now, so does anyone know which tool I need to use to flash the card?


----------



## jonRock1992

lawson67 said:


> Yes its good core speed so if i can get the ram faster i can get much higher scores, so i have Linux on a usb drive now so does anyone know which tool i need to use to flash the card?


I think @L!ME linked to it awhile back.


----------



## xR00Tx

Hey guys,

After a while without testing Time Spy, today, thanks to the cold weather, I decided to make a few more attempts and, to my surprise, I finally managed to break (just barely) the 26k graphics-score barrier.

I got the first position in the overall and cpu score. And 29th in the graphics rank (two people ahead of me).


I scored 24 822 in Time Spy


----------



## L!ME

Here is the Linux amdvbflash:





File-Upload.net - amdvbflash





www.file-upload.net




At your own Risk!


----------



## Justye95

lawson67 said:


> You can get the LC Bios here


Hi, is this BIOS compatible with the 6900 XT Reference?


----------



## L!ME

No, only with XTXH cards.


----------



## lawson67

L!ME said:


> No only with xtxh cards


I can't get the LC BIOS to flash using amdvbflash. It says file not found even though the LC BIOS is in the same folder. I can save my original BIOS and reflash that, but it just won't let me flash the LC BIOS; it just gives a 0FL01 error, file not found, and yes, I am using the correct file name: "sudo ./amdvbflash -f -p 0 file name"
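For anyone following along, the Linux workflow being attempted here looks roughly like this. This is only a sketch based on the commands quoted in this thread; amdvbflash itself has to be sourced separately, adapter index 0 assumes a single-GPU system, and the xtxhlc.rom name is just an example:

```shell
# Sketch of the Linux amdvbflash workflow discussed in this thread.
# Assumes the Linux build of amdvbflash and the BIOS image are in the
# current directory, and the card is adapter 0 (single-GPU system).

chmod +x ./amdvbflash

# List detected adapters to confirm the card's index
sudo ./amdvbflash -i

# Always save the original BIOS first so you can flash back
sudo ./amdvbflash -s 0 original-backup.rom

# Force-flash the new image to adapter 0 (at your own risk)
sudo ./amdvbflash -f -p 0 xtxhlc.rom
```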


----------



## L!ME

Are you using the Linux amdvbflash? You can't flash it with the Windows version.

Oh, I see you're using Linux. Maybe you need another source for the LC BIOS?


----------



## lawson67

L!ME said:


> Do you use amdvbflash Linux? With the Windows Version you cant flash it


Yes, I am using the Linux version in Linux. I can save and flash my original BIOS, but amdvbflash can't find the LC BIOS; it says file not found.


----------



## lawson67

L!ME said:


> Oh i See your useing Linux. Maybe you Need another source of the LC BIOS?


Yes, maybe. Do you know of the correct BIOS? I would have thought they would all be the same?


----------



## L!ME

Try this one
XTXH.zip


----------



## lawson67

L!ME said:


> Try this one
> XTXH.zip


Thanks ill try that now


----------



## robiatti

lawson67 said:


> I cant get the LC Bios to flash using AMDVBFLASH it say file not found even though the LC Bios is in the same folder, i can save my Original Bios and reflash that but it just wont let me flash the LC Bios, it just says 0FL01 error file not found and yes i am using the correct file name "sudo ./amdvbflash -f -p 0 file name


Lawson,

I was able to flash the BIOS you linked using "sudo ./amdvbflash -p -f 0 filename"; not sure if the order matters.


----------



## lawson67

L!ME said:


> Try this one
> XTXH.zip


Still won't let me flash the LC BIOS. Strange indeed.


----------



## robiatti

Thanks for your help guys, I was able to get the XFX Zero flashed. Not a bad first attempt. Need to work on the CPU OC as well as figure out the correct MPT settings for the 6900 XT.









I scored 22 480 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## robiatti

lawson67 said:


> Still wont let me flash the LC Bios strange indeed
> 
> 
> View attachment 2529387


You can try renaming the file as well. I named mine xtxhlc.rom.


----------



## lawson67

robiatti said:


> Lawson,
> 
> I was able to flash the bios you linked using sudo ./amdvbflash -p -f 0 filename, not sure is the order matters.


Thanks ill try doing it that way


----------



## bloot

lawson67 said:


> Still wont let me flash the LC Bios strange indeed
> 
> 
> View attachment 2529387


In Linux, when a filename contains spaces, you must quote it with "

In your case it would be "Radeon RX 6900 XT LC_XTXH"

Also, if you type the first few letters (RX, for example) and press Tab, the shell will autocomplete the name for you, or show you the available options if there is more than one match for what you typed.
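To make the quoting point concrete, here is a minimal shell sketch (the filename is just a stand-in for the BIOS image):

```shell
# Create a dummy file whose name contains spaces (stand-in for the BIOS image)
touch "Radeon RX 6900 XT LC_XTXH.rom"

# Unquoted, the shell splits that name into four separate arguments, so a
# tool would look for a file literally called "Radeon" and fail (hence the
# 0FL01 "file not found" error). Quoted, it is passed as one argument:
ls "Radeon RX 6900 XT LC_XTXH.rom"

# Renaming to a name without spaces sidesteps the issue entirely:
mv "Radeon RX 6900 XT LC_XTXH.rom" xtxhlc.rom
ls xtxhlc.rom
```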


----------



## lawson67

bloot said:


> In Linux when a name contain spaces you must use "
> 
> In your case it would be "Radeon RX 6900 XT LC_XTXH"
> 
> Also if you type the first letters (RX for example), pressing tab it will autocomplete it for you or it will show you the available options if there're more than one coincidence with what you typed.


So I got a successful flash in the end, but it won't boot. The Q-code on the motherboard reads 55, then it shuts down. What a shame, I was hoping to have some fun with that BIOS.


----------



## lawson67

jonRock1992 said:


> Just a heads up. I tried it a couple months ago, and my 6900 XT Red Devil Ultimate wouldn't post with it. Unless the bios you linked is different than the LC bios that I tried. Let me know if you get it working.


Nope it wont post Jon, bugger


----------



## L!ME

Try flashing it again; sometimes the flash was successful but it didn't work. Which 6900 do you have?

Hm, are you two the only ones who have tried it and had it not work?


----------



## lawson67

L!ME said:


> Try flash it again, sometimes the flash was successfull But IT didn't Work. Which 6900 so you use?
> 
> Hm you are the only two which tried IT and it doesnt Work?


I have the Red Devil Ultimate RX 6900 XT


----------



## jonRock1992

Seems it's incompatible with the red devil ultimate. Bummer.


----------



## L!ME

No, my card is a Liquid Devil Ultimate and it works. I know two more Devils where it works. But we all flashed it via an external CH341A programmer.


----------



## jonRock1992

L!ME said:


> No my Card is a liquid devil ultimate and it works. I know two more devil where IT works. But WE all flashed IT via an external Programmer ch431a


Ah I see. It doesn't work with the Linux flashing tool for some reason then.


----------



## lawson67

L!ME said:


> No my Card is a liquid devil ultimate and it works. I know two more devil where IT works. But WE all flashed IT via an external Programmer ch431a


I need this then, I guess, with the leads underneath as an option?

ZHITING SOIC8 SOP8 Flash Chip IC Test Clips Socket Adpter Programmer BIOS + CH341A 24 25 Series EEPROM Flash BIOS USB Programmer Module


----------



## lawson67

L!ME said:


> No my Card is a liquid devil ultimate and it works. I know two more devil where IT works. But WE all flashed IT via an external Programmer ch431a


Can you use the external CH341A programmer under Windows, or do you need to use it under Linux, or does it not matter?
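As a side note on this question: on Linux, the open-source flashrom tool also supports the CH341A, so the usual backup-then-write cycle can be sketched as below. This is only a sketch, not the Windows tool used later in the thread; depending on the chip, flashrom may ask you to pick the exact flash part with -c:

```shell
# Sketch: CH341A via flashrom on Linux, with the SOIC8 test clip attached
# and the card powered off.

# Read the chip twice and compare, to verify the clip connection is stable
sudo flashrom -p ch341a_spi -r backup1.rom
sudo flashrom -p ch341a_spi -r backup2.rom
cmp backup1.rom backup2.rom && echo "reads match"

# Write the new image (flashrom verifies after writing) - at your own risk
sudo flashrom -p ch341a_spi -w xtxhlc.rom
```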


----------



## L!ME

I have only a Windows Tool to use the programmer


----------



## lawson67

L!ME said:


> I have only a Windows Tool to use the programmer


Is it this BIOS chip in the picture that I need to program, or is the BIOS chip on the back of the card? Also, is the BIOS chip a 25-series or a 24-series chip for the programmer? And do you have a link to the software you use for the programmer? Thanks.


----------



## L!ME

I flashed the chip on the front. I think it was a 25 series, but I'll have to look.
Here is the software:





File-Upload.net - Programmer.rar





www.file-upload.net


----------



## lawson67

L!ME said:


> I flashed that Chip in Front. i think IT was a 25 series but i have to Look.
> Here is the Software
> 
> 
> 
> 
> 
> File-Upload.net - Programmer.rar
> 
> 
> 
> 
> 
> www.file-upload.net


Thanks for all your help mate


----------



## weleh

xR00Tx said:


> Hey guys,
> 
> After a while without testing Time Spy, Today, due to the cold weather, I decided to make a few more attempts and, to my surprise, I finally managed to break (just barely) the 26k graphics points barrier.
> 
> I got the first position in the overall and cpu score. And 29th in the graphics rank (two people ahead of me).
> 
> 
> I scored 24 822 in Time Spy
> 
> View attachment 2529375


Your core clocks in 3DMark are quite low for that score.
What did you have set in the driver?


----------



## xR00Tx

weleh said:


> Your core clocks in 3DMark are quite low for that amount of score.
> What did you have set on the driver?


Got a better score a few seconds ago: I scored 24 957 in Time Spy


----------



## ZealotKi11er

weleh said:


> Your core clocks in 3DMark are quite low for that amount of score.
> What did you have set on the driver?


Very high score. He also only has XTX. With my board he can probably do 26.5k easily. xR00Tx is quite familiar with TimeSpy and knows all the tricks.


----------



## xR00Tx

ZealotKi11er said:


> Very high score. He also only has XTX. With my board he can probably do 26.5k easily. xR00Tx is quite familiar with TimeSpy and knows all the tricks.


Besides the driver and MPT settings, what I do is close all unnecessary windows programs and services... but the main thing is the room temperature. The colder the better!


----------



## robiatti

xR00Tx said:


> Got a better score a few seconds ago: I scored 24 957 in Time Spy
> 
> View attachment 2529418
> 
> View attachment 2529419


Jesus, Mary, & Joseph….. damn nice!


----------



## ZealotKi11er

xR00Tx said:


> Besides the driver and MPT settings, what I do is close all unnecessary windows programs and services... but the main thing is the room temperature. The colder the better!


Tried those settings; they gave about 200 points less.


----------



## 99belle99

xR00Tx, colder the better. Where are you at the moment? The top of Alaska? You're not in Brazil anyway, like your flag on here says, as it's coming up to summer there.


----------



## 99belle99

Here is a run I did just now. Reference card with stock cooler. My CPU score is down, and I'm guessing that is due to the L3 cache issue with Windows 11. I'm on the stock Windows build, no preview builds, so I don't have the L3 cache fix.

I scored 20 183 in Time Spy


----------



## Thanh Nguyen

Guys, why can't my card hold a 2700 MHz clock in BF5? It drops to 2000 or under sometimes and causes massive FPS drops.


----------



## advapi

Hello, does anyone have a Sapphire Nitro+ 6900 XT SE? How are the temperatures? Has anyone flashed the Toxic BIOS?
Thanks


----------



## robiatti

Thanh Nguyen said:


> Guys, why in bf5 my card cant hold 2700mhz clock? It drops to 2000 or under sometimes and cause massive fps drop.


This happens to me as well in BF5 and COD Cold War. What I did to alleviate it was set the min clock to 2500 and the max to 2700. I believe what's happening is the card switches power or clock states at undesired times.


----------



## Thanh Nguyen

robiatti said:


> This happens to me as well in BF5 and COD Cold war. What I did to alleviate it was set min clock to 2500 and max to 2700. I believe what's happening is the card will switch power or clock states at undesired times.


GPU usage is only in the 80% range. Do you have that problem? I don't know if it's a Win11 problem or my system.


----------



## bloot




----------



## cfranko

bloot said:


>


Is this a binned model or is the only difference a fancy cooler?


----------



## EastCoast

cfranko said:


> Is this a binned model or is the only difference a fancy cooler?


It's a limited edition, announced by 343 Industries for their new Halo Infinite game. Looking at it, it appears to be a reskinned OEM 6900 XT. Not sure what quality the silicon will be, nor how to get one.


----------



## ZealotKi11er

I want that GPU so bad.


----------



## kairi_zeroblade

master chief edition FTW!!


----------



## Reaper29

99belle99 said:


> Here is a run I done just now. Reference card with stock cooler. My CPU score is down and I'm guessing that is due to L3 Cache issues with Windows 11. I'm on stock windows build no previews so I do not have the L3 cache fix.
> 
> I scored 20 183 in Time Spy
> 
> View attachment 2529422


Hi, I have the same CPU and GPU. My best GPU score so far is about 22800 and CPU about 11200, more or less. I would like to know your settings for both the GPU and CPU.
Thank you in advance.


----------



## EastCoast

Reaper29 said:


> hi, i have same cpu and gpu, my best gpu score, so far, is about 22800 and cpu 11200, more or less, i would like to know your settings about both gpu and cpu.
> thank you in advance


I think we'd all like to know those settings...


----------



## 99belle99

This is my best ever Timespy CPU score:









What do you want to know? For the CPU I do a manual overclock with tight memory timings. Stable enough for benchmarks, but I wouldn't run it as a daily. I tried, but it would crash eventually after an hour or so of general PC usage.


----------



## Reaper29

99belle99 said:


> This is my best ever Timespy CPU score:
> 
> View attachment 2529539
> 
> What do you want to know? For CPU I do a manual overclock and with tight memory timings. Stable to do benchmarks but I wouldn't run them as a daily. I tried but it would crash eventually after a hour or so of general PC usage.


For the CPU I use PBO with PPT/TDC/EDC settings and RAM at 3800 MHz CL14, but my 3DMark result is a bit lower than yours. I'm more interested in the GPU settings: do you use MPT or only Wattman, and what are the settings?
My best Time Spy graphics score so far is 22922, using MPT 320 W / 360 A TDC, Wattman 2540-2720 at 1096 mV, RAM at 2150 with fast timings, power limit +13%.


----------



## Reaper29

EastCoast said:


> I think we all like to know those settings...


why, is it a secret?


----------



## 99belle99

Reaper29 said:


> for cpu is use pbo with ppt tdc edc settings ram at 3800 mhz cl14 but 3dmark result is a bit lower than ìn yours, i'm more interested at gpu settings, do you use MPT or only Wattman?
> and the what are the settings?
> my best time spy graphic score so far is 22922 using MPT 320w-360TDC wattman 2540-2720 1096mV ram 2150 fast timings power limit 13%


No, a manual overclock, the old way: set voltage and CPU multiplier. No PBO for benching, though I do use PBO when I use my PC daily.

Yes, I do use MPT. The only thing I can think of, since your settings are the same as mine, is Smart Access Memory; you do have that enabled, don't you?


----------



## Reaper29

advapi said:


> Hello anyone has a sapphire nitro + 6900xt se? How are temperatures? Has someone flash the toxic bios?
> Thanks


I have owned it for a couple of weeks; the temperatures are great. No, personally I have not tried to flash another BIOS, at least for now.


----------



## Reaper29

99belle99 said:


> No manual overclock. The old way set voltage and CPU multi. No PBO but I do use PBO when I use my pc daily.
> 
> Yes I do use MPT. The only thing I can think of as your settings are the same as mine is smart memory access you do have that enabled don't you?


Yes, SAM is enabled.


----------



## 99belle99

Reaper29 said:


> yes , sma is enabled


Post a link to your best Time Spy run; I'd just like to compare mine to yours.


----------



## jonRock1992

99belle99 said:


> Here is a run I done just now. Reference card with stock cooler. My CPU score is down and I'm guessing that is due to L3 Cache issues with Windows 11. I'm on stock windows build no previews so I do not have the L3 cache fix.
> 
> I scored 20 183 in Time Spy
> 
> View attachment 2529422


AMD just released a new chipset driver to fix the Windows 11 CPU issue. Just make sure the OS is fully updated and install the new chipset driver, and then your CPU score will go up.


----------



## FatFingerGamer

Just wanted to say that everyone who contributes and shares their findings in this thread is awesome. My best graphics score so far; had to readjust after going with the EK waterblock on my XFX Black Edition 6900 XT. A lot of trial and error, but I'm getting closer. After reading through the posts, I don't know if it made the difference or what, but I tried to get the VRAM as close to my RAM FCLK as possible without freezing up. Just need to read up a bit more, but for now I'm content with the results. For real, thanks to everyone who posts their input. Greatly appreciated =)

PS: the only major thing, well kinda, is my MPT is based on the Red Devil card, whichever the 2x8-pin version is.


----------



## cfranko

My Bykski waterblock comes with 1.8 mm thermal pads. I currently use the included pads with the block, but they are terrible. I want to change pads, but I couldn't find any good 1.8 mm ones; I could get either 1.5 mm or 2.0 mm, but I'm worried those will be too thin or too thick. What should I do?


----------



## Reaper29

99belle99 said:


> Post a link to your best Timespy run. I just want to see it. I just want to compare mine to yours.


----------



## ilmazzo

ZealotKi11er said:


> I want that GPU so bad.


And I want yours!!!!


----------



## robiatti

Thanh Nguyen said:


> Gpu usage is only in the 80%. Do u have that problem? I don’t know its the win11 problem or my system.


My GPU usage fluctuates between 90 and 98%; I am also on Windows 11.


----------



## EastCoast

Win11 doesn't offer anything of value for this gen of hardware. It's e-waste


----------



## lawson67

jonRock1992 said:


> Ah I see. It doesn't work with the Linux flashing tool for some reason then.


@jonRock1992 I successfully flashed my graphics card with the LC BIOS using the CH341A programmer, and it boots fine until you shut down; then it won't post again, with the motherboard Q-code reading 55 again. I did it 3 times; every time it boots up fine, but when you shut down and restart, you're stuffed.


----------



## jonRock1992

lawson67 said:


> @jonRock1992 i successfully flashed my graphic card with the LC Bios using the CH341A programmer and it boots fine until you shut down then it wont post again with M/B Q-Code reading 55 again, i did it 3 times every time it boots up fine but when you shut down and restart again your stuffed
> 
> View attachment 2529698


Oh wow. Thanks for following up! Did you see any performance gains from flashing it? I wonder what it is about the red devil that is causing this problem?

If I remember correctly, error code 55 is also what I got after flashing it in Linux. I'm using an Asus Dark Hero motherboard. I'm wondering if it just has something to do with Asus's bios. Error code 55 is supposed to refer to a RAM issue. I'm wondering if a CMOS reset is required after flashing it? I was going to try it, but I didn't wanna have to put all of my bios settings back in. Did you try a CMOS reset?


----------



## J7SC

lawson67 said:


> @jonRock1992 i successfully flashed my graphic card with the LC Bios using the CH341A programmer and it boots fine until you shut down then it wont post again with M/B Q-Code reading 55 again, i did it 3 times every time it boots up fine but when you shut down and restart again your stuffed
> 
> View attachment 2529698


...just a hunch, but Windows 10 (11) would keep BIOS 'A' for three or so cold reboots even after I switched to BIOS 'B' on my 3090. With the AMD GPU writing into the Windows registry anyway, I wonder if DDU after a successful flash, while still running, might help (or not)?


----------



## lawson67

jonRock1992 said:


> Oh wow. Thanks for following up! Did you see any performance gains from flashing it? I wonder what it is about the red devil that is causing this problem?
> 
> If I remember correctly, error code 55 is also what I got after flashing it in Linux. I'm using an Asus Dark Hero motherboard. I'm wondering if it just has something to do with Asus's bios. Error code 55 is supposed to refer to a RAM issue. I'm wondering if a CMOS reset is required after flashing it? I was going to try it, but I didn't wanna have to put all of my bios settings back in. Did you try a CMOS reset?


Didn't test it for speed in 3DMark, as I was checking whether I could reboot with it and be stable before I started testing for speed. I had to drain the loop 3 times to flash it with the programmer and gave up in the end, since as soon as you shut down and then restart it's Q-code 55. I did think about resetting the BIOS; might do it at some point, but right now I've lost interest in it lol.


----------



## lawson67

J7SC said:


> ...just a hunch, but Windows 10 (11) would keep bios 'A' for three or so cold reboots even after I switched to bios 'B' on my 3090. With AMD GPU writing into the Win registry anyways, I wonder if DDU after a successful flash while still running might help (or not) ?


I have thought of that. I did clear MPT before I flashed, but after I switched the BIOS back to PowerColor when it wouldn't post again, I noticed MPT saw two RX 6900 XTs, so there's definitely stuff in the registry. Maybe a DDU before flashing would help sort it all out? Might try it tomorrow.


----------



## jonRock1992

J7SC said:


> ...just a hunch, but Windows 10 (11) would keep bios 'A' for three or so cold reboots even after I switched to bios 'B' on my 3090. With AMD GPU writing into the Win registry anyways, I wonder if DDU after a successful flash while still running might help (or not) ?


Interesting. I might try flashing in Linux again and try this out sometime.


----------



## lawson67

jonRock1992 said:


> Interesting. I might try flashing in Linux again and try this out sometime.


I'll try right now with Linux; can't be arsed to drain the loop again tonight and use the programmer. I'll DDU, then flash via Linux.


----------



## lawson67

jonRock1992 said:


> Interesting. I might try flashing in Linux again and try this out sometime.


DDU, then shut down, then flash, then restart: still Q-55. Maybe DDU, then the programmer, then restart might work?


----------



## ZealotKi11er

If it's failing to post, it's most likely not being able to train the memory at 18 Gbps during bootup.


----------



## ZealotKi11er

What is required to volt mod these cards? I should probably order now for the cold winter months.


----------



## mgkhn

I just received my Strix LC 6900 XT Top Edition card, have only played with it for a few hours, and have reached only a 24.5k graphics score. Memory can't go above 2100 MHz with FT1 and never passes anything when I try FT2. Are there any tricks for that? I've tried different MPT settings, but the memory OC always fails.











----------



## 99belle99

What is holding my card back from reaching higher frequencies? I can only do 2550 MHz. Bad bin, maybe? It's a reference card on the stock cooler. It just crashes after a few seconds if set above 2600 MHz, and it's not heat, as I was watching that during the runs.


----------



## J7SC

99belle99 said:


> What is holding my card back from reaching higher frequency's? I can only do 2550MHz. Bad bin maybe? As it is a reference card on stock cooler. It just crashes if set to above 2600MHz after a few seconds and it's not heat as I was watching that during the runs.


Power limit (PL) spikes, maybe?


----------



## 99belle99

J7SC said:


> PL spikes, may be ?


What are PL spikes?


----------



## ZealotKi11er

99belle99 said:


> What is holding my card back from reaching higher frequency's? I can only do 2550MHz. Bad bin maybe? As it is a reference card on stock cooler. It just crashes if set to above 2600MHz after a few seconds and it's not heat as I was watching that during the runs.


XTX doesn't OC as much as XTXH.
The stock cooler is holding you back for sure. Lower temps will improve stability a bit.


----------



## jonRock1992

Holy **** boys! I finally did it! Over 25k GPU score in Timespy. Thanks to the people that shared some MPT settings recently, some OS optimization, and RAM tweaks. I'm also using the latest driver.
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## lawson67

ZealotKi11er said:


> If its failing to post its most likely not being able to train at 18Gbps during bootup.


No, it's not that. It posts, restarts, and runs fine with the LC BIOS if you flash with the programmer. However, if you shut down and then attempt to restart, it comes up with Q-code 55, saying USB over-voltage, shutting down in 15 seconds. It's not actually USB over-voltage either, as I have pulled all USB devices from the PC and the front-case USB plug off the motherboard, and it still says USB over-voltage. Plus, if you switch back to the PowerColor BIOS, no USB over-voltage B.S. comes up. It's something else, and it can't really be anything to do with Windows or the registry, as the motherboard won't hand off its checks to the OS, so it doesn't even get to the operating system.


----------



## ZealotKi11er

Does your 6900 XT come with USB-C like the reference 6900 XT?


----------



## lawson67

ZealotKi11er said:


> Does ur 6900xt come with usb-c like reference 6900 xt?


Nope, 3x DP, 1x HDMI.


----------



## J7SC

lawson67 said:


> Nope 3 DP 1 HMDI


Silly question perhaps, as I don't know that card, but doesn't the 'actual' LC come with a pump and a connection to a USB header? Could that be related?


----------



## ZealotKi11er

lawson67 said:


> Nope 3 DP 1 HMDI


That could be the issue.


----------



## FatFingerGamer

lawson67 said:


> @jonRock1992 i successfully flashed my graphic card with the LC Bios using the CH341A programmer and it boots fine until you shut down then it wont post again with M/B Q-Code reading 55 again, i did it 3 times every time it boots up fine but when you shut down and restart again your stuffed
> 
> View attachment 2529698


I can't remember which article it was on videocardz.com, but they were talking about how PowerColor has a secondary BIOS as a safety feature for their cards, kind of like a shutdown sequence of sorts. If it's bypassed or the right code is not read, it prevents the card from starting in case of a cooling failure, like the pump or fan. I'll try to find the post.


----------



## FatFingerGamer

So, some great news for anyone who, like me, bought their XFX Merc 319 Black Edition before they announced the Limited Black, which has the unlocked BIOS and is claimed to go over 2800+. Just email them a receipt and your box tag and they will send you a self-updating BIOS USB. They're exactly the same; the only difference is that the Black and Ultra editions shipped with, wait for it..... an RX 6800 XT BIOS on the non-Rage side of the switch. 😑 Same exact card, except the LEDs on the sides are controllable and the BIOS is unlocked. But shhh, don't tell them; as soon as I get it, that BIOS is going to be shared so you guys can find the line of code that unlocks everything. 😎


----------



## Neoki

jonRock1992 said:


> Holy **** boys! I finally did it! Over 25k GPU score in Timespy. Thanks to the people that shared some MPT settings recently, some OS optimization, and RAM tweaks. I'm also using the latest driver.
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> View attachment 2529722


Congrats man! Seeing how long you've been at it I bet that feels good!


----------



## jonRock1992

Neoki said:


> Congrats man! Seeing how long you've been at it I bet that feels good!


My jaw dropped lol. I was very surprised that those tweaks contributed to so many more points. I was expecting to still just be shy of 25k, but ended up getting well above my goal. Feels good 😁


----------



## LtMatt

jonRock1992 said:


> Holy **** boys! I finally did it! Over 25k GPU score in Timespy. Thanks to the people that shared some MPT settings recently, some OS optimization, and RAM tweaks. I'm also using the latest driver.
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> View attachment 2529722


Had the best nights sleep ever last night, now I know why, well done Jon!

Is there any tutorial on how to flash the LC BIOS? Not at all familiar with Linux or the tools mentioned, unfortunately.


----------



## lawson67

jonRock1992 said:


> Holy **** boys! I finally did it! Over 25k GPU score in Timespy. Thanks to the people that shared some MPT settings recently, some OS optimization, and RAM tweaks. I'm also using the latest driver.
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> View attachment 2529722


Fantastic well done mate!


----------



## FatFingerGamer

OK, I know I'm new, but I feel like I've been here for months because I've read almost every post. Yesterday I thought I was happy, until I read someone's post; then something clicked and I kind of understood things better than I did. I most definitely cried a little bit after this posted, not gonna lie lol. I'm content this time, for real. I don't want to do what I usually do and keep pushing too far. Plus I just ordered the EVC2 from Elmor. Do I have to actually solder it, or can I use a small heat jet? Cuz I suck at soldering lol. Anyways, LOVE YOU ALL!! haha 🍻🍻🍻🍻🍻🍻🍻🍻







🍻🍻🍻🍻🍻🍻🍻🍻


----------



## weleh

Ok, guys...

I don't have a card anymore but you can try this yourself.

Apparently you can change VMAX on Navi cards with MPT: you need to enable "TempLimitVIN" and then change the voltage in the bottom section.

Try at your own risk


----------



## ZealotKi11er

weleh said:


> Ok, guys...
> 
> I don't have a card anymore but you can try this yourself.
> 
> Apparently you can change VMAX on NAVI cards with MPT.
> Need to enable "TempLimitVIN" and then change voltage on the bottom section.
> 
> Try at your own risk
> 
> View attachment 2529911


I am going to wait for it to get much colder, -10/-20, before I try to run again. If this is true and we can overvolt, 27K might be possible in Time Spy.


----------



## weleh

There will always be a wall past which more voltage makes no difference, but this trick should still let most 6900 XTs squeeze out a bunch more performance.


----------



## Neoki

Going to be repasting my XFX Zero and replacing the stock EK pads with Thermalright Extreme ones. 

Torn on the GPU paste. What's a good pick for a balance of longevity (2 years or so) and performance? Not doing liquid metal.

From what I've read, Gelid GC-Extreme and Kryonaut seem to be the common picks? Thoughts?


----------



## LtMatt

weleh said:


> Ok, guys...
> 
> I don't have a card anymore but you can try this yourself.
> 
> Apparently you can change VMAX on NAVI cards with MPT.
> Need to enable "TempLimitVIN" and then change voltage on the bottom section.
> 
> Try at your own risk
> 
> View attachment 2529911


Damn, nice one cheers for sharing. Never touched these options before.


----------



## LtMatt

weleh said:


> There will always be a wall at which point voltage makes no difference but this trick should still allow most of 6900XT's squeeze a bunch of performance.


Just had a quick play, and provided you set the min and max to 1250, it seems to be working. I saw a voltage of 1.217v just now. 

Haven't really got time to do any benching yet unfortunately, but this could help me get near 26K hopefully!


----------



## cfranko

I can also confirm the voltage unlock method working, just saw 1250 mV in Cyberpunk.


----------



## LtMatt

Yup, definitely working. 2900Mhz in game. 









EDIT -- I need to update my overlay; I had a 6700 XT in there yesterday.


----------



## cfranko

LtMatt said:


> Yup, definitely working. 2900Mhz in game.
> View attachment 2529918


I ordered liquid metal. After this beautiful voltage unlock trick, I felt like LM was necessary.


----------



## ZealotKi11er

Yeah, it works for me too. This is big. Now it will be easy to blow up the card, too. This should get me to 2900MHz/2800MHz and let me beat those hard-modded cards.


----------



## cfranko

Would too much voltage damage the card? I tried 1250 mV and it worked nicely; would 1300 mV work?


----------



## marcoschaap

weleh said:


> Ok, guys...
> 
> I don't have a card anymore but you can try this yourself.
> 
> Apparently you can change VMAX on NAVI cards with MPT.
> Need to enable "TempLimitVIN" and then change voltage on the bottom section.
> 
> Try at your own risk
> 
> View attachment 2529911


Does this work on regular XTX too or only XTXH? HYPED


----------



## cfranko

marcoschaap said:


> Does this work on regular XTX too or only XTXH? HYPED


It works on both


----------



## ZealotKi11er

This mod is scary. It basically bypasses all AMD safety. Someone with 6800 can push 1.2v+ and blow the VRMs/Core. I suggest only experienced people do this to their card and only for benchmark runs.


----------



## LtMatt

ZealotKi11er said:


> This mod is scary. It basically bypasses all AMD safety. Someone with 6800 can push 1.2v+ and blow the VRMs/Core. I suggest only experienced people do this to their card and only for benchmark runs.


Yep not for the faint hearted, big risk to take.


----------



## cfranko

LtMatt said:


> Yep not for the faint hearted, big risk to take.


Would using 1250 mV daily be risky?


----------



## LtMatt

cfranko said:


> Would too much voltage damage the card? I tried 1250 mV it worked nice and well would 1300 mV work?


Yes it would. Need a good pcb and good cooling to even consider going above default limits IMO. 

Toxic PCB with a water cooler should be able to handle a bit extra, but I know the risk I am taking.


----------



## cfranko

LtMatt said:


> Yes it would. Need a good pcb and good cooling to even consider going above default limits IMO.
> 
> Toxic PCB with a water cooler should be able to handle a bit extra, but I know the risk I am taking.


I just tried 1250 mV on my ASRock Phantom Gaming card and it ran fine; however, I only kept it at 1250 mV for about 5 minutes, then reverted back.


----------



## marcoschaap

LtMatt said:


> Yep not for the faint hearted, big risk to take.


For my non-XTXH 6900 XT I'm really eager to set it to "only" 1200 mV with a limited PL. Better to be wise with a €1500 GPU.


----------



## LtMatt

marcoschaap said:


> For my 6900XT non XTXH I'm really eager to set it "only" to 1200 with a limited PL. Better be wise for a €1500,- GPU


Let's not turn this into a 6900 XT graveyard; be sensible, folks.


----------



## weleh

This is not advisable at all for DAILY USAGE; this is why I didn't want to share it and why I put a disclaimer...

Doing 1250mV daily will potentially **** your card up.

Just so some of you understand: an XTXH card has a max Vcore of 1.2V; however, in daily usage the card hovers way below this even when you game (1.1-1.15V range for the most part, depending on SKU).

So imagine running a flat 1.2V daily... 

Please use this at your own discretion. I will not be providing any help or tips, because I don't have a card and I cannot test or guarantee everything will work out.

If you want more info, browse hardwareluxx and translate to English.


----------



## LtMatt

weleh said:


> This is not advisable at all for DAILY USAGE, this is why I didn't want to share this and why I put a disclaimer...
> 
> Doing 1250mV daily will **** your card up potentially.
> 
> Just so some of you understand, a XTXH card has a max Vcore of 1.2V however, daily usage the card hoovers way below this even when you game (1.1-1.15V range for the most part depending on SKU).
> 
> So imagine you running flat 1.2V daily...
> 
> Please use this at your own discretion, I will not be providing any help or tips because I don't have a card and I cannot test or guarantee everything will work out.
> 
> If you want more info, browse hardwareluxx and translate to english.


Well said. I'll be using it for benching runs only, for daily usage it will be back to the stock voltage.


----------



## weleh

Another thing: with this much voltage your card will overboost during games and eventually crash; I saw it with the EVC anyway.

So please, just be smart.


----------



## ZealotKi11er

I tried with the 6900 XT LC's 120mm rad, which can handle about 400W of heat; with 1.25v it throttles hard. Air-cooled cards are out of the question. A water-cooled card with a 360mm rad plus cold air is the minimum I would try for benchmark runs.


----------



## weleh

Yea, you need a custom loop and LM to hold these kinds of voltages.

Once you start going above 1200mV, the hotspot will ****ing fly because the card will pull a ****ton of power.

On my Toxic, with cold air intake from the AC, my HS was hovering around 90C at >500W.


----------



## LtMatt

ZealotKi11er said:


> Yeah it works for me too. This is big. Now it will be easy to blow up the card. This should get me to 2900MHz/2800MHz and beat those cards with hard mods.


Stop it, your scores are fast enough already. 

I wish I knew how to flash the LQ BIOS using Linux, or I'd be doing that too.


----------



## weleh

LtMatt said:


> Stop it, your scores are fast enough already.
> 
> I wish I knew how to flash the LQ BIOS using Linux or i'd be doing that too.


What do you need?

I can help you with that, it's easy. I think I even posted a guide here.


----------



## LtMatt

weleh said:


> What do you need?
> 
> I can help you with that, it's easy. I think I even posted a guide here.


Not familiar with Linux, so step-by-step instructions would be nice. Might be a lot of work though, so I understand if you can't be bothered.


----------



## J7SC

I might try the 'enhanced MPT voltage' as I have a 1200x60+ rad system for the 6900XT loop (3x8 pin, dual bios GPU). 

Still, I second and third all the *warnings provided above*... I've blown an AMD card w/ too much voltage; ironically never _during_ HWBot sub-zero runs (up to 1.4v on a custom 290X), but many months _afterwards_, when I put the stock air cooler back on and forgot to switch the BIOS lever back from custom XOC to default... the card lasted about two weeks on air, then > e-graveyard


----------



## ZealotKi11er

LtMatt said:


> Stop it, your scores are fast enough already.
> 
> I wish I knew how to flash the LQ BIOS using Linux or i'd be doing that too.


My score is all down to the silicon lottery. Now I need to actually use that silicon and beat those 2900MHz cards in other benchmarks. My kryptonite is Fire Strike GT1. I need to figure out a way to get that test to work properly.


----------



## weleh

LtMatt said:


> Not familiar with Linux so step by step instructions would be nice, might be a lot of work though so understand if you can’t be bothered.


Alright, let me put together a small tutorial for it all. I'm watching F1 while doing this, so bear with me.


----------



## lestatdk

ZealotKi11er said:


> My score is all down to silicon lottery. Now I need to actually use that silicon and beat those 2900MHz card on other benchmarks. My kryptonite is Fire Strike GT1. I need to figure a way to get that test to work properly.


I can cruise through GT1, but GT2 is my nemesis. Gets me every time I think I have a good score


----------



## LtMatt

weleh said:


> Alright, let me put a small tutorial for it all, watching F1 and doing this so bear with me


I appreciate it.


----------



## weleh

HOW TO FLASH LC REF VBIOS ON XTXH CARD

*1. *make LINUX USB Liveboot
*1.a* Download and Install RUFUS (https://rufus.ie/en/)
*1.b* Download the UBUNTU ISO (https://ubuntu.com/download/desktop) and burn it to the USB with Rufus

*2. *Download XTXH LC Ref Bios from TechPowerUp (can be put inside the same Linux boot USB)

*3. *Download AMDVBFLASH LINUX Version from TechPowerUp (can be put inside the same Linux boot USB)

*4. *Insert USB with Ubuntu
*4.a* Boot from USB and inside Ubuntu select the option that says "Try Ubuntu" (https://ubuntucommunity.s3.dualstac...49a92ce6373041a7f8f50ddf6495f8ac539ad275.jpeg)

*5.* Inside Linux take AMDVBFLASH and the BIOS from the USB and put it somewhere easy to reach.
*5.a* Put the rom inside AMDVBFLASH folder

*6.* Inside the AMDVBFLASH folder, right click and click Open Terminal.

*7. *Inside terminal run $ sudo amdvbflash -i
*7.a* This should list all the GPUs you have installed; there's a number next to each one. If you have only 1, the number will probably read 0

*8. *Run this command inside the terminal to flash: $ sudo amdvbflash -p 0 biosnamehere.rom

Reboot, load up Windows, and this should work. I would strongly advise only doing this on dual-BIOS cards; otherwise you might **** this up, end up with a bricked card, and have to use an external flasher.

If it doesn't work or you have any doubts, let me know.
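For reference, the terminal part (steps 7-8) boils down to two commands. Here's a small dry-run sketch of them; the adapter index 0 and "biosnamehere.rom" are placeholders, so confirm your own adapter index from the -i listing before flashing anything:

```shell
#!/bin/sh
# Sketch of steps 7-8 above. Assumes the amdvbflash binary and the
# downloaded .rom sit in the current directory. Adapter index 0 and
# biosnamehere.rom are placeholders -- check the -i listing first.
set -e
DRY_RUN=${DRY_RUN:-1}   # leave at 1 to only print the commands

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run sudo ./amdvbflash -i                     # step 7: list adapters and their indices
run sudo ./amdvbflash -p 0 biosnamehere.rom  # step 8: flash adapter 0
```

Set DRY_RUN=0 only once you're sure of the adapter index and you're on a dual-BIOS card.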


----------



## robiatti

weleh said:


> HOW TO FLASH LC REF VBIOS ON XTXH CARD
> 
> *1. *make LINUX USB Liveboot
> *1.a* Download and Install RUFUS (https://rufus.ie/en/)
> *1.b* Download and burn UBUNTU (https://ubuntu.com/download/desktop)
> *2. *Download XTXH LC Ref Bios from TechPowerUp (can be put inside the same Linux boot USB)
> 
> *3. *Download AMDVBFLASH LINUX Version from TechPowerUp (can be put inside the same Linux boot USB)
> 
> *4. *Insert USB with Ubuntu
> *4.a* Boot from USB and inside Ubuntu select the option that says "Try Ubuntu" (https://ubuntucommunity.s3.dualstac...49a92ce6373041a7f8f50ddf6495f8ac539ad275.jpeg)
> 
> *5.* Inside Linux take AMDVBFLASH and the BIOS from the USB and put it somewhere easy to reach.
> *5.a* Put the rom inside AMDVBFLASH folder
> 
> *6.* Inside the AMDVBFLASH folder, right click and click Open Terminal.
> 
> *7. *Inside terminal run $ sudo amdvbflash -i
> *7.a* This should bring all the GPUs you have installed, there's a number behind everyone of them ,if you have only 1, the number will probably read #0
> 
> *8. *Run $ sudo amdvbflash -p 0 biosnamehere.rom to flash
> 
> Reboot, load up windows and this should work. I would strongly advise only doing this with double bios cards otherwise you might **** this up and get a bricked card and have to use external flasher.
> 
> If it doesn't work or you have any doubts, let me know.


I did this method the other day and it worked. The one hiccup I had was that the drivers for the 6900 XT were not loaded, so it would say "adapter not found".

I ended up installing Linux to a spare drive and loading the driver. After that it worked fine. Thinking about it, I imagine my difficulty had to do with having Secure Boot enabled.


----------



## LtMatt

weleh said:


> HOW TO FLASH LC REF VBIOS ON XTXH CARD
> 
> *1. *make LINUX USB Liveboot
> *1.a* Download and Install RUFUS (https://rufus.ie/en/)
> *1.b* Download and burn UBUNTU (https://ubuntu.com/download/desktop)
> 
> *2. *Download XTXH LC Ref Bios from TechPowerUp (can be put inside the same Linux boot USB)
> 
> *3. *Download AMDVBFLASH LINUX Version from TechPowerUp (can be put inside the same Linux boot USB)
> 
> *4. *Insert USB with Ubuntu
> *4.a* Boot from USB and inside Ubuntu select the option that says "Try Ubuntu" (https://ubuntucommunity.s3.dualstac...49a92ce6373041a7f8f50ddf6495f8ac539ad275.jpeg)
> 
> *5.* Inside Linux take AMDVBFLASH and the BIOS from the USB and put it somewhere easy to reach.
> *5.a* Put the rom inside AMDVBFLASH folder
> 
> *6.* Inside the AMDVBFLASH folder, right click and click Open Terminal.
> 
> *7. *Inside terminal run $ sudo amdvbflash -i
> *7.a* This should bring all the GPUs you have installed, there's a number behind everyone of them ,if you have only 1, the number will probably read #0
> 
> *8. *Run this command inside terminal: $ sudo amdvbflash -p 0 biosnamehere.rom to flash
> 
> Reboot, load up windows and this should work. I would strongly advise only doing this with double bios cards otherwise you might **** this up and get a bricked card and have to use external flasher.
> 
> If it doesn't work or you have any doubts, let me know.


Thanks for that, will give it a go when I have some spare time.


----------



## Skinnered

I can't get my Sapphire RX 6900 XT Toxic EE LC any higher than 2800 MHz core and 2120 MHz mem (fast timings). I suspect my PSU is dying, or isn't up to the task of delivering ripple-free power to a 400+ W GPU, as I often get system shutdowns.
Most, if not all, of these cards can go higher.
Another PSU is on its way.


----------



## ZealotKi11er

Skinnered said:


> I can't get my sapphire rx6900xt toxic ee lc any higher then 2800 core and 2120 mhz mem (fast timing). I suspect my psu is dying, or not to the task for a rippel free power to the 400+ W gpu as I get often shut downs of the system.
> Most, if not all, of these cards can go higher.
> Getting another PSU on its way.


What PSU do you have? I got shut downs even with 1600w and that was because of temperature.


----------



## jonRock1992

Does anybody know how to force a locked SCLK so the frequency doesn't bounce around? I'm trying to stop overboost from happening.


----------



## chispy

weleh said:


> Ok, guys...
> 
> I don't have a card anymore but you can try this yourself.
> 
> Apparently you can change VMAX on NAVI cards with MPT.
> Need to enable "TempLimitVIN" and then change voltage on the bottom section.
> 
> Try at your own risk
> 
> View attachment 2529911


----------



## LtMatt

jonRock1992 said:


> Does anybody know how to force a locked SCLK so the frequency is locked and doesn't bounce around? I'm trying to stop over boost from happening


Yes, as weleh noted, it's a problem in GT2 in Time Spy.


----------



## D1g1talEntr0py

I just want to take a moment and thank everyone who suggested using GELID GC-Extreme for the GPU instead of a thinner paste. I had good results with Thermal Grizzly products in the past, including on the AIO for my CPU, so I decided to use their Kryonaut Extreme for my 6900XT. Initially, my edge and hotspot temps were pretty good, and the delta was around 25-30 deg at 350W (card is air cooled). But over time it got worse: the delta ended up closer to 30-35 deg and my hotspot would constantly hover around the 110 deg limit.

But now, oh man, what a difference. The delta is around 18 deg @ 350W, which is incredible. I was able to bump up the power limit in MPT another 10W and score a few hundred more in Timespy. I know it's not much, but the temp difference is what I am most happy about.

So again, a big thank you to everyone who suggested using it. This forum is great!

Now, if I could only get Horizon Zero Dawn to stop freezing when I simply start the damn game, I'd be happy. 🤪


----------



## Neoki

D1g1talEntr0py said:


> I just want to take a moment and thank everyone who suggested using GELID GC-Extreme for the GPU instead of a thinner paste. I had good results with Thermal Grizzly products in the past, including my AIO for my CPU, so I decided to use their Kryonaut Extreme for my 6900XT. Initially, my edge and hotspots temps were pretty good, and the delta was around 25-30 deg at 350W (Card is air cooled). But over time, it got worse and ended up closer to 30-35 deg and my hotspot would constantly hover around the 110 deg limit.
> 
> But now, oh man what a difference. The delta is around 18 deg @ 350W which is incredible. I was able to bump up the power limit in MPT another 10W and score a few hundred more in Timespy. I know its not much, but the temp difference is what I am the most happy about.
> 
> So again, a big thank you to everyone who suggested using it. This forum is great!
> 
> Now, if I could only get Horizon Zero Dawn to stop freezing when I simply start the damn game, I'd be happy. 🤪


Thanks for sharing this. I've been going back and forth between Gelid GC and Kryonaut/Hydronaut. I will give the GC a shot alongside some Thermalright pads.

Also, congrats on getting some great temps!


----------



## cfranko

D1g1talEntr0py said:


> I just want to take a moment and thank everyone who suggested using GELID GC-Extreme for the GPU instead of a thinner paste. I had good results with Thermal Grizzly products in the past, including my AIO for my CPU, so I decided to use their Kryonaut Extreme for my 6900XT. Initially, my edge and hotspots temps were pretty good, and the delta was around 25-30 deg at 350W (Card is air cooled). But over time, it got worse and ended up closer to 30-35 deg and my hotspot would constantly hover around the 110 deg limit.
> 
> But now, oh man what a difference. The delta is around 18 deg @ 350W which is incredible. I was able to bump up the power limit in MPT another 10W and score a few hundred more in Timespy. I know its not much, but the temp difference is what I am the most happy about.
> 
> So again, a big thank you to everyone who suggested using it. This forum is great!
> 
> Now, if I could only get Horizon Zero Dawn to stop freezing when I simply start the damn game, I'd be happy. 🤪


I used Cooler Master MasterGel Maker as my paste and saw similar behavior: temps were great at first but got worse over time. I am gonna change to liquid metal asap.


----------



## lawson67

D1g1talEntr0py said:


> I just want to take a moment and thank everyone who suggested using GELID GC-Extreme for the GPU instead of a thinner paste. I had good results with Thermal Grizzly products in the past, including my AIO for my CPU, so I decided to use their Kryonaut Extreme for my 6900XT. Initially, my edge and hotspots temps were pretty good, and the delta was around 25-30 deg at 350W (Card is air cooled). But over time, it got worse and ended up closer to 30-35 deg and my hotspot would constantly hover around the 110 deg limit.
> 
> But now, oh man what a difference. The delta is around 18 deg @ 350W which is incredible. I was able to bump up the power limit in MPT another 10W and score a few hundred more in Timespy. I know its not much, but the temp difference is what I am the most happy about.
> 
> So again, a big thank you to everyone who suggested using it. This forum is great!
> 
> Now, if I could only get Horizon Zero Dawn to stop freezing when I simply start the damn game, I'd be happy. 🤪


Yep, Kryonaut suffers from pump-out above 80C, making it not great for GPU dies that go over 80C. At first your temps look great, then pump-out happens and temps go up. I only use LM on my GPU dies; no problems at all, and temps always stay the same.


----------



## gamervivek

weleh said:


> Ok, guys...
> 
> I don't have a card anymore but you can try this yourself.
> 
> Apparently you can change VMAX on NAVI cards with MPT.
> Need to enable "TempLimitVIN" and then change voltage on the bottom section.
> 
> Try at your own risk
> 
> View attachment 2529911


Thanks, this is great. My main concern is whether this sets the voltage to 'manual', like on Ryzen CPUs where all the protections get turned off, as opposed to normal boosting behaviour.

I've applied it to my 6800 XT LC and it seems to be working: 2.75GHz now works where it would crash otherwise. Can't test further, since selecting more than 2.8GHz with the modified overdrive limits just drops the card to 500MHz.


----------



## LtMatt

I've not really had any luck improving my Timespy score. I'm not yet using the Liquid Cooled BIOS, but I feel I must be missing a trick with the various Timespy-specific optimisations, as I am not getting higher scores despite higher clock speeds.

Regardless, I have had better luck using Firestrike Extreme and have managed to take the No1 spot temporarily until someone with better silicon comes along.










SCORE
32 105 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
Graphics Score 34 453
Physics Score 43 977
Combined Score 16 758
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Big shoutout to Kingjohn with his score, think it might be @J7SC? I cannot get anywhere near his graphics score and his clock speed was only 2600Mhz or so. Perhaps a reporting bug, not sure.


----------



## OC-NightHawk

LtMatt said:


> Just had a quick play, and providing you set the min and max to 1250, it seems to be working. I saw a voltage of 1.217v just now.
> 
> Not really got time to do any benching yet unfortunately, but this could help me get near 26K hopefully!


These settings have been working for me daily for quite some time now, gaming at 2808MHz. My card is fully water cooled and has three 8-pin power connectors. Relatively speaking, would it be safe to bump the voltage up to 1.25v and the power to 600W to try to get closer to 2900MHz in game?


----------



## weleh

LtMatt said:


> I've not really had any luck at all improving my Timespy score. I'm not yet using the Liquid Cooled BIOS, but feel I must be missing a trick with various Timespy specific optimisations as I am not getting any higher scores despite higher clock speeds.
> 
> Regardless, I have had better luck using Firestrike Extreme and have managed to take the No1 spot temporarily until someone with better silicon comes along.
> 
> View attachment 2529981
> 
> 
> SCORE
> 32 105 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
> Graphics Score 34 453
> Physics Score 43 977
> Combined Score 16 758
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Big shoutout to Kingjohn with his score, think it might be @J7SC? I cannot get anywhere near his graphics score and his clock speed was only 2600Mhz or so. Perhaps a reporting bug, not sure.


Insane, well done.


----------



## LtMatt

weleh said:


> Insane, well done.


Thanks, doubt it will last long.  

How much do you think the LQ BIOS would add?


----------



## J7SC

LtMatt said:


> *Thanks, doubt it will last long.*
> 
> How much do you think the LQ BIOS would add?


...superb score either way  ...now though you'll be checking HoF a lot every day (hour?) 

Others here are experts on the LC BIOS, but IF you have an XTXH chip without the memory speed limitation, or can get past that limitation, it could still be worthwhile to trade looser timings for higher VRAM bandwidth - might be worth trying out. 

I don't have direct GDDR6 comparisons, but here is a look at DDR4 re. bandwidth vs looser timings, noting that the 2000 / 4000 speed wasn't even optimized. Ultimately, it also comes down to the specific app / bench for VRAM as well.


----------



## LtMatt

J7SC said:


> ...superb score either way  ...now though you'll be checking HoF a lot every day (hour?)
> 
> Others here are experts on the LC bios but IF you have an XTXH chip w/o memory speed limitation/ get past that limitation, it could still be worth it to trade off higher VRAM bandwidth for looser timings - might be worth to try out.
> 
> I don't have direct GDDR6 comparisons, but here is a look at DDR4 re. bandwidth vs looser timings, noting that the 2000 / 4000 speed wasn't even optimized. Ultimately, it also comes down to the specific app / bench for VRAM as well.
> 
> View attachment 2529987


Was that definitely you then? I can see there's masses of room to improve your CPU score, so I am sure you could take the No1 spot easily if you wanted. 

Do you think the FCLK at 2000 helped your graphics score?


----------



## J7SC

LtMatt said:


> Was that definitely you then? I can see there is masses of room to improve your CPU score so I am sure you could get No1 easily if you wanted.
> 
> Do you think the FCLK at 2000 helped your graphics score?


...'maybe', but it could also be others here. Either way, great score in TS-Ex. BTW, have you tried Firestrike Ex, Ultra ?

...re. score improvement, I know higher RAM speed helps in Superposition and Port Royal, but I haven't run the 6900XT much in TS/Ex yet


----------



## LtMatt

J7SC said:


> ...'may be', but it could also be others here. Either way, great score in TS-Ex. BTW, have you tried Firestrike Ex, Ultra ?
> 
> ...re. score improvement, I know higher RAM speed it helps in Superposition and PortRoyal, but I haven't run the 6900XT too much yet in TS/Ex


Ooh, you tease! 

It's not Jon, so it must be you. 

EDIT - Yes, here's my run at Firestrike Ultra; I ran out of time so only managed two runs. Can maybe go a bit higher with more time in the future. 
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)


----------



## J7SC

LtMatt said:


> Ooh, you tease!
> 
> It's not Jon, so it must be you.
> 
> EDIT - Yes here's my run at firestrike ultra, ran out of time so only managed two runs. Can maybe go a bit higher with more time in the future.
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)


...love those frequencies and temps !

...at the risk of repeating myself, ever since I stuffed thermal putty on the VRAM (w-cooled block), VRAM temps dropped and stable VRAM speed increased noticeably.


----------



## Neoki

J7SC said:


> ...love those frequencies and temps !
> 
> ...and on the danger of repeating myself, ever since I stuffed thermal putty on the VRAM (w-cooled block), VRAM temps dropped and efficient speed increased noticeably.


You're using TP instead of a pad on the VRAM? Did you pre spread or just dab them?


----------



## LtMatt

J7SC said:


> ...love those frequencies and temps !
> 
> ...and on the danger of repeating myself, ever since I stuffed thermal putty on the VRAM (w-cooled block), VRAM temps dropped and efficient speed increased noticeably.


Yes, the air con unit, liquid metal and the Toxic 360 AIO give me water-cooling-like temperatures for benchmarks, which no doubt helps. Although my silicon is decent-ish, it is far from golden compared to the higher stable frequencies others report in Timespy in this thread, so I rely on MPT tweaks and low temperatures to stay competitive with the better silicon out there. 

I can only complete Timespy at 2825Mhz and even then it is not guaranteed and need multiple runs before one of them completes. 

I do well in Firestrike Extreme/Ultra and Timespy Extreme for some reason. I still have the best 6900 XT Timespy Extreme score out there, but not sure any 6900 XT users even run that bench these days as there's been no new updates in a long time. 

Timespy though continues to elude me and I often struggle to even score over 25K consistently. One of life's mysteries.


----------



## J7SC

Neoki said:


> You're using TP instead of a pad on the VRAM? Did you pre spread or just dab them?


...I make little balls out of the putty and then use the block to squish them...the beauty of putty is that it 'conforms automatically' to the available space and fills nooks and crannies - so it covers VRAM not only on top but also on the sides of the VRAM chips. I probably used a bit much  in the pics below, but it doesn't really matter with putty.






















LtMatt said:


> (...)
> 
> Timespy though continues to elude me and I often struggle to even score over 25K consistently. One of life's mysteries.


...as suggested before, Time Spy GT2 is just plain weird - I can run the same settings and temps and pass GT2 (2080 Ti, 3090, 6900XT) on its own every time even when_ those same settings crashed_ when doing the whole TS...

...very confusing:


----------



## weleh

LtMatt said:


> Thanks, doubt it will last long.
> 
> How much do you think the LQ BIOS would add?


On Extreme and Ultra, a lot; VRAM speed matters so much in TS/FS Ultra/Extreme.


----------



## ZealotKi11er

LtMatt said:


> Thanks, doubt it will last long.
> 
> How much do you think the LQ BIOS would add?


In TS about 400 points after FT2. Before that only 200 points.


----------



## CS9K

D1g1talEntr0py said:


> I just want to take a moment and thank everyone who suggested using GELID GC-Extreme for the GPU instead of a thinner paste. I had good results with Thermal Grizzly products in the past, including my AIO for my CPU, so I decided to use their Kryonaut Extreme for my 6900XT. Initially, my edge and hotspots temps were pretty good, and the delta was around 25-30 deg at 350W (Card is air cooled). But over time, it got worse and ended up closer to 30-35 deg and my hotspot would constantly hover around the 110 deg limit.
> 
> But now, oh man what a difference. The delta is around 18 deg @ 350W which is incredible. I was able to bump up the power limit in MPT another 10W and score a few hundred more in Timespy. I know its not much, but the temp difference is what I am the most happy about.
> 
> So again, a big thank you to everyone who suggested using it. This forum is great!
> 
> Now, if I could only get Horizon Zero Dawn to stop freezing when I simply start the damn game, I'd be happy. 🤪


Howdy! That was me. I appreciate the shout-out!

You can thank:

- My fear of using LM
- My distaste for re-pasting the AIO on my old 1080 w/ Kraken G12 every few months
- How impressed I was with EVGA's 2070 Hybrid paste, even with the FTW3 BIOS flashed onto it (I never had to re-paste)
- And finally, my de-lidded 3770K

The 3770K is a toasty boi (and to be fair, _needs_ LM), but GC-Extreme was the fourth paste I tried, and to my surprise, it worked! It did the job well enough that I stuck with it. And now, with today's toasty a.f. RDNA2 and Ampere GPU cores, it does the job well right from the start, and I've yet to find a reason to use anything else for bare-die paste jobs when using ambient-temperature cooling. YMMV for active/exotic cooling.


----------



## geriatricpollywog

J7SC said:


> ...I make little balls out of the putty and then use the block to squish them...the beauty of putty is that it 'conforms automatically' to the available space and fills nooks and crannies - so it covers VRAM not only on top but also on the sides of the VRAM chips. I probably used a bit much  in the pics below, but it doesn't really matter with putty.
> 
> View attachment 2530004
> 
> 
> 
> View attachment 2530005
> 
> 
> 
> 
> ...as suggested before, Time Spy GT2 is just plain weird - I can run the same settings and temps and pass GT2 (2080 Ti, 3090, 6900XT) on its own every time even when_ those same settings crashed_ when doing the whole TS...
> 
> ...very confusing:


Do you have a thermal pad under the backplate?


----------



## J7SC

0451 said:


> Do you have a thermal pad under the backplate?


...thermal putty and thermal pads under the back-plate (pads for the power stages, along with a line of MX5; putty on the back of the VRAM PCB area)


----------



## geriatricpollywog

J7SC said:


> ...thermal putty and thermal pads under the back-plate (pads for the power stages, along with a line of MX5; putty on the back of the VRAM PCB area)


Do you have a thermal pad behind the die? Or is there a cut-out hole in the backplate behind the die?


----------



## Skinnered

ZealotKi11er said:


> What PSU do you have? I got shut downs even with 1600w and that was because of temperature.


A 1500W Enermax.
I've since replaced it with a be quiet! Dark Power Pro 12 1500W.

No change in OC: 2800/2130 (core/mem) is the max in games (5K everything, reshaded + RTGI).
Time Spy tops out at an even lousier 2785/2122 (23685 graphics score); anything higher crashes.
No more shutdowns, though. I didn't win the silicon lottery with this sample, but for gaming it doesn't matter much.


----------



## J7SC

0451 said:


> Do you have a thermal pad behind the die? Or is there a cut-out hole in the backplate behind the die?


On the 6900XT, it had a solid (no-hole) back-plate with a factory-mounted soft pad (~2mm) on the back of the die in stock config. I replaced that pad with thermal putty, and it all makes contact with the inside of the back-plate, which also has the extra heat sink mounted (below). 

Btw, once I saw the factory pad on the back of the die on the 6900XT, I applied the same steps as above, including the putty, to the back of the die of the 3090 Strix as well (which in stock back-plate config does have a cut-out around the back of the GPU die)


----------



## drnilly007

Where is the most recent MPT download?


----------



## FatFingerGamer

Neoki said:


> Going to be repasting my XFX Zero and replacing the stock EK Pads with thermal right extremes.
> 
> Torn on the GPU paste. What's a good pick for a good balance of longevity (2 years or so) and performance? Not doing Liquid Metal.
> 
> So far reading about gelid extreme and kryonaut being the common picks? Thoughts?


can you link the bios pleaseee lol


----------



## Neoki

FatFingerGamer said:


> can you link the bios pleaseee lol


Did you misquote my post? Not sure why you're asking for my bios.


----------



## lDevilDriverl

LtMatt said:


> I've not really had any luck at all improving my Timespy score. I'm not yet using the Liquid Cooled BIOS, but feel I must be missing a trick with various Timespy specific optimisations as I am not getting any higher scores despite higher clock speeds.
> 
> Regardless, I have had better luck using Firestrike Extreme and have managed to take the No1 spot temporarily until someone with better silicon comes along.
> 
> 
> 
> SCORE
> 32 105 with AMD Radeon RX 6900 XT(1x) and AMD Ryzen 9 5950X
> Graphics Score 34 453
> Physics Score 43 977
> Combined Score 16 758
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Big shoutout to Kingjohn with his score, think it might be @J7SC? I cannot get anywhere near his graphics score and his clock speed was only 2600Mhz or so. Perhaps a reporting bug, not sure.


Different benchmark/driver versions? Also, try overclocking the 5950X at a fixed frequency above 4.9 GHz (you can overclock per CCX) to increase the Combined Score


----------



## FatFingerGamer

Neoki said:


> Did you misquote my post? Not sure why you're asking for my bios.


You said you have the new XFX Zero, right? I have the XFX Limited Gaming Black with the EVC2, plus a full Heatkiller waterblock with active backplate. I was just trying to see where along the line XFX is putting down the power. If I misread, pay it no mind.


----------



## ptt1982

Quick update on the latest drivers: the clocks are back up +10MHz, and Time Spy results are around 0.5% higher (at the same clocks, without the +10MHz boost). Temps etc. are all the same; didn't test games yet. There was an update to Doom Eternal, so maybe the RT-related slowdowns are finally gone.


----------



## ptt1982

Open question: How many of you guys undervolt GPU/CPU combo here?

Given the environmental and monetary cost of electricity, I've come to the conclusion that watercooling combined with an undervolt is the ultimate solution for me. I recently undervolted my 5600X and 6900XT and was able to shave 160W off the max peaks, from 535W down to 375W. The PC runs more efficiently overall thanks to the extremely low temps: the CPU and GPU never go past 50C now, and the GPU junction stays under 70C in stress tests. Both are roughly 4% faster than stock. Sure, there's room to gain 6% more performance on both the CPU and GPU, but it would cost 160W, or in other words 30% more power. The little green man in me says I should think about the environment.

This is not really a monetary issue, but it makes me wonder whether I really need those 3-5 extra fps for 160W. Quite frankly, I didn't notice any performance difference in the games I tried. I played them all as normal: if there were stutters before, there were the same stutters in the same places, and where the fps was high it was pretty much exactly the same while using 160W less power. Additionally, I've recently downscaled the internal resolution to 90% in games and upscaled that to 4K just to reduce GPU load and save watts, because I can't see the difference on my 65" 4K screen between upscaled 1944p and native 2160p. I only see a difference in slightly smoother fps and 30-40W less power consumption. I get a strange satisfaction out of this!

My thinking is that surely I will be buying the latest and greatest parts moving forward, such as 7900XT MCM GPU and V-cache 6900X CPU, but I'd like to put them under water and heavily undervolt them for maximum efficiency. The goal I have is to have higher than stock performance with at least 20% less wattage.

I hope undervolting with an overclock is still a suitable topic for overclockers, because it is a form of OC after all.
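The power math in the post above works out as follows; a quick sketch, where the 4 h/day usage and $0.15/kWh rate are illustrative assumptions, not figures from the post:

```python
# Peak draw before/after undervolting, taken from the post above.
STOCK_PEAK_W = 535
UNDERVOLT_PEAK_W = 375

saved_w = STOCK_PEAK_W - UNDERVOLT_PEAK_W      # 160 W shaved off
saved_pct = 100 * saved_w / STOCK_PEAK_W       # ~30% of the stock peak

# Illustrative yearly impact, assuming 4 h/day of load at $0.15/kWh
# (both are assumptions, not figures from the post).
kwh_per_year = saved_w / 1000 * 4 * 365
cost_per_year = kwh_per_year * 0.15

print(f"{saved_w} W saved ({saved_pct:.0f}%), "
      f"~{kwh_per_year:.0f} kWh and ~${cost_per_year:.0f} per year")
```

At those assumed rates the 160W comes to roughly 234 kWh and $35 a year, which fits the "not really a monetary issue" framing.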


----------



## FatFingerGamer

ptt1982 said:


> Open question: How many of you guys undervolt GPU/CPU combo here?
> 
> Given the environmental and monetary cost of electricity, I've come to a conclusion that watercooling combined with undervolt is the ultimate solution for me. Recently undervolted my 5600X and 6900XT, I was able to shave off 160W, from 535W --> 375W of the max peaks. The PC runs overall more efficiently due to the extremely low temps. CPU or GPU never goes past 50C now, junction for GPU stays under 70C in stress tests. Both are roughly 4% faster than when in stock. Surely, there's room to gain 6% more performance for both CPU and GPU, but it would cost 160W, or in other words 30% more power. The green little man in me says that I should think about the environment.
> 
> This is not really a monetary issue, but it just makes me think that do I really need that 3-5fps more for 160W. Quite frankly, I didn't notice any performance difference in games I tried. I played all the games as per normal, and if there were stutters before, there were equally stutters in the same places, and if the fps was high at places it was pretty much exactly the same when using 160W less power. Additionally, I've recently downscaled internal resolution to 90% in games and upscaling that to 4K just to reduce GPU load and save watts, because I can't see the difference on my 65" 4K screen between upscaled 1944p and native 2160p. I only see difference in slightly smoother fps and 30W-40W less power consumption. I get strange satisfaction out of this!
> 
> My thinking is that surely I will be buying the latest and greatest parts moving forward, such as 7900XT MCM GPU and V-cache 6900X CPU, but I'd like to put them under water and heavily undervolt them for maximum efficiency. The goal I have is to have higher than stock performance with at least 20% less wattage.
> 
> I hope undervolting with an overclock is still a suitable topic for overclockers, because it is a form of OC after all.


Are your wattage measurements from software on the PC, like HWMonitor or whatever, or from the actual wall? You should get a plug-in watt meter: your PC plugs into it and it plugs into the wall. You'd be surprised how much the software figure differs from the actual pull at the outlet. Also:

Say you have an 850W unit but your components only draw 225 watts. At around 85% efficiency (80 Plus Silver territory) you'd pull about 265 watts from the wall.
Efficiency also varies with load, and the 80 Plus certifications are measured at 20%, 50% and 100% load.
The wasted power is expelled as heat, so the more heat the PSU dumps, the less efficiently it is running.
Efficiency typically peaks around 50% load; that's the sweet spot in electrical terms.
Confused? Good, it is complex, but if you've read up to here you're on the right path.
Ideally you want the combined maximum power of everything (in case you run your PC at 100% on everything), but that's hard to calculate in itself, made worse by the fact that you might add a hard drive in the future, so I add a good 200 watts (I tend to have a ton added later).

Quick comparison:
Plain 80 Plus is 80% efficient, so delivering 400W means drawing about 500W from the wall, roughly 100W of loss.
80 Plus Platinum is 92% at 50% load, so delivering 200W on a 400W PSU draws about 217W, roughly 17W of loss.
The loss is extra watts pulled from the outlet: if your components need exactly 320W on an 80%-efficient unit, you're actually consuming about 400W at the wall, while a Platinum unit would draw roughly 348W for the same load.

That's also something to keep in mind: part of the power you're "saving" at the component level is conversion loss you never see in software. P.S. I'm not sure if I answered your theory or added more to the equation.
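The efficiency arithmetic above can be sketched in a few lines; the 80% and 92% figures are illustrative 80 Plus-style numbers, not measurements of any specific unit:

```python
def wall_draw(dc_load_w: float, efficiency: float) -> float:
    """Watts pulled from the outlet to deliver dc_load_w to the components."""
    return dc_load_w / efficiency

# Illustrative points: plain 80 Plus (80%) vs a Platinum-class 92% at 50% load.
for load, eff in [(400, 0.80), (200, 0.92)]:
    draw = wall_draw(load, eff)
    print(f"{load} W load @ {eff:.0%} -> {draw:.0f} W at the wall "
          f"({draw - load:.0f} W lost as heat)")
```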


----------



## lDevilDriverl

ptt1982 said:


> Open question: How many of you guys undervolt GPU/CPU combo here?
> 
> Given the environmental and monetary cost of electricity, I've come to a conclusion that watercooling combined with undervolt is the ultimate solution for me. Recently undervolted my 5600X and 6900XT, I was able to shave off 160W, from 535W --> 375W of the max peaks. The PC runs overall more efficiently due to the extremely low temps. CPU or GPU never goes past 50C now, junction for GPU stays under 70C in stress tests. Both are roughly 4% faster than when in stock. Surely, there's room to gain 6% more performance for both CPU and GPU, but it would cost 160W, or in other words 30% more power. The green little man in me says that I should think about the environment.
> 
> This is not really a monetary issue, but it just makes me think that do I really need that 3-5fps more for 160W. Quite frankly, I didn't notice any performance difference in games I tried. I played all the games as per normal, and if there were stutters before, there were equally stutters in the same places, and if the fps was high at places it was pretty much exactly the same when using 160W less power. Additionally, I've recently downscaled internal resolution to 90% in games and upscaling that to 4K just to reduce GPU load and save watts, because I can't see the difference on my 65" 4K screen between upscaled 1944p and native 2160p. I only see difference in slightly smoother fps and 30W-40W less power consumption. I get strange satisfaction out of this!
> 
> My thinking is that surely I will be buying the latest and greatest parts moving forward, such as 7900XT MCM GPU and V-cache 6900X CPU, but I'd like to put them under water and heavily undervolt them for maximum efficiency. The goal I have is to have higher than stock performance with at least 20% less wattage.
> 
> I hope undervolting with an overclock is still a suitable topic for overclockers, because it is a form of OC after all.


custom water loop - SR2 360 x2 + gts 360, alphacool 755v3 x2, optimus cpu and ekwb gpu blocks
benching on 5950x with 1.35v ccd1 4.9+ ccd2 4.7+ (depends on room temp) 6900xt 2600-2800+ 1.175v 2120mem (depends on room temp)
daily 5950x 1.25v 4.7/4.5 6900xt 2000-2400 1.05v 2110mem


----------



## ptt1982

FatFingerGamer said:


> Are your wattage measurements from software on the PC, like HWMonitor or whatever, or from the actual wall? You should get a plug-in watt meter: your PC plugs into it and it plugs into the wall. You'd be surprised how much the software figure differs from the actual pull at the outlet. Also:
> 
> Say you have an 850W unit but your components only draw 225 watts. At around 85% efficiency (80 Plus Silver territory) you'd pull about 265 watts from the wall.
> Efficiency also varies with load, and the 80 Plus certifications are measured at 20%, 50% and 100% load.
> The wasted power is expelled as heat, so the more heat the PSU dumps, the less efficiently it is running.
> Efficiency typically peaks around 50% load; that's the sweet spot in electrical terms.
> Confused? Good, it is complex, but if you've read up to here you're on the right path.
> Ideally you want the combined maximum power of everything (in case you run your PC at 100% on everything), but that's hard to calculate in itself, made worse by the fact that you might add a hard drive in the future, so I add a good 200 watts (I tend to have a ton added later).
> 
> Quick comparison:
> Plain 80 Plus is 80% efficient, so delivering 400W means drawing about 500W from the wall, roughly 100W of loss.
> 80 Plus Platinum is 92% at 50% load, so delivering 200W on a 400W PSU draws about 217W, roughly 17W of loss.
> The loss is extra watts pulled from the outlet: if your components need exactly 320W on an 80%-efficient unit, you're actually consuming about 400W at the wall, while a Platinum unit would draw roughly 348W for the same load.
> 
> That's also something to keep in mind: part of the power you're "saving" at the component level is conversion loss you never see in software. P.S. I'm not sure if I answered your theory or added more to the equation.


Thank you for your reply; I appreciate that you took the time and care to explain it. I have a Platinum Focus 850W and am aware that efficiency is best at roughly 50% load. I'm measuring via HWiNFO. I am fine with the system going up to around the 425W mark, so there's still around 50-100W to play with for the next upgrade. Maybe a 95W CPU and a 400W GPU, heavily undervolted to 375-475W average consumption. Surely, even if losses occur at lower wattages, the absolute figure of the loss is lower at low total consumption than at high total consumption. I presume efficiency is always relative rather than about absolute numbers. Please correct me if I'm wrong and making a complete clown of myself!
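The "absolute loss is lower at lower draw" intuition checks out numerically; a sketch using illustrative Platinum-class efficiencies rather than Seasonic's published curve:

```python
def loss_w(dc_load_w: float, efficiency: float) -> float:
    """Conversion loss in watts: wall draw minus the power actually delivered."""
    return dc_load_w / efficiency - dc_load_w

# Peak draws from the earlier posts; the 92%/90% efficiencies are assumed.
undervolted = loss_w(375, 0.92)
stock = loss_w(535, 0.90)
print(f"{undervolted:.0f} W vs {stock:.0f} W of conversion loss")
```

So yes: the loss ratio is relative, but the absolute watts lost shrink as total consumption drops.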


----------



## Neoki

FatFingerGamer said:


> you said you have the new xfx zero right? i have the xfx limited gaming black with the evc2 i was just trying to see where along the lines xfx is putting down the power. if i misread it then pay it no mind. and full heatkiller waterblock with active backplate.


Ah ok, I don't think there's much difference from other XTXH cards. The Zero actually ended up being a downgrade in memory overclocking headroom from the Gigabyte Waterforce I switched from. My bin does have +50MHz on the core compared to the Waterforce, but even then I maxed out at 2785MHz for TS Graphics Test 2. I am also battling thermal issues with ambient and was running XFX/EK's OEM pads/paste job, which I plan to fix now.

I don't have the machine put together atm, I'm waiting for the new pads/paste to arrive while I'm doing my hard tubing work. But I do remember the PPT default was 332w and iirc the TDC was 320w, and of course the typical XTXH voltage of 1200mv max.


----------



## FatFingerGamer

ptt1982 said:


> Thank you for your reply, appreciate it that you took so much time and care to explain it. I have Platinum Focus 850W, and am aware that at roughly 50% there's best efficiency. Measuring via hwinfo. I am fine with the system to go up to around 425W mark, so there's still around 50W-100W to play with with the next upgrade. Maybe 95W CPU and 400W GPU and undervolt them heavily to 375-475W average consumption. Surely, even if there's loss happening at lower wattages, the absolute figure of the loss is lower on low total consumption figures compared with higher total consumption. I would presume efficiency is always relative, rather than dealing with absolute numbers. Please correct me if I'm wrong and making a complete clown out of myself!


The best rule of thumb, or at least what a lot of people have said, is to add up the basic wattage of what you have, then give yourself an additional 150-200 watts (also depending on how close you are to the next PSU tier; if you can, don't go under, always go up, or you'll create a bottleneck of sorts at the PSU). I think Seasonic has a calculator that helps you choose the right wattage: plug in what you have and it gives you a bunch of options. I have a 1200W and 500W EVGA PSU setup, the baby for the custom loop and fans, and the 1200W for the main components. So even though you'd theoretically assume 1700 watts, I only ever usually use about 900, and maybe 1200 when I run both 6900XTs doing work-related stuff. But I hope that helps you out. Cheers man


----------



## FatFingerGamer

Neoki said:


> Ah ok, I don't think there's much difference to other XTXH cards. The Zero actually ended up being a downgrade from my previous Gigabyte Waterforce that I switched from in terms of memory overclocking headroom. My bin does have +50mhz on the core compared to the waterforce, but even then I maxed out at 2785mhz for TS Graphics Test 2. But I am battling thermal issues with ambient and was running XFX/EK's OEM pads/paste job. Which I plan to fix now.
> 
> I don't have the machine put together atm, I'm waiting for the new pads/paste to arrive while I'm doing my hard tubing work. But I do remember the PPT default was 332w and iirc the TDC was 320w, and of course the typical XTXH voltage of 1200mv max.


Yeah, I figured as much. And for the pads, yeah, EK pads have a tendency to dry up really quickly. My waterblocks and the back of the PCB looked like projects my nephews would do in finger-painting class. I used putty on the VRMs so it could get in between, and on the rear as well for the backs of the RAM chips. Pain in the ass, but it's totally worth it. Idling, my cards sit at like 22-23C. Good luck with your build though. You'll be happy with XFX, really beefy cards.


----------



## Neoki

FatFingerGamer said:


> Yeah I figured as much. And for the pads yeah ek pads have a tendency of getting dried up really quickly. My waterblocks and the back of the PCB looked like projects my nephew's would do in fingerprinting class. I used putty on the vrms so they could get in between and on the rear as well for the ram backs. Pain in the ass but it's totally worth it. Idling my cards sit at like 22-23. Good luck tho with your build. You'll be happy with xfx tho really beefy cards.


Yeah, even though I lost performance overall versus the Waterforce, I love the integration of the RGB with my ASUS Dark Hero, and that the block is fully serviceable. The Waterforce block cannot be taken apart without ruining the cosmetics by prying off the aluminum + plastic cover that they, for some reason, put over the block screws.

Once the new pads and paste are on, I plan to do another high-wattage run with the new voltage adjustment options that Weleh shared. But after that I plan to set the daily driver nice and low to reduce overall wattage and not turn my office into a sauna.


----------



## OC-NightHawk

Neoki said:


> Yeah even though I lost performance overall to the Waterforce, I love the integration of the RGB into my ASUS Dark Hero, and that the block is fully maintenance-able. The Waterforce block can not be taken apart without ruining the cosmetics by prying off the aluminum + plastic cover that they for some reason put over the block screws.
> 
> Once new pads and paste is on, I plan to do another high wattage run with the new voltage adjustment options that Weleh shared. But after that I plan to put the daily driver nice and low to reduce overall wattage and not turn my office into a sauna.


I think the Waterforce issue of covering the screws is not insurmountable. All of the screws are visible, so drilling in to gain access to them should be possible. If done cleanly, it wouldn't look much worse than new. It's aggravating that they did it, but it could be worse: they could have used opaque materials and left us hunting for the screws.


----------



## J7SC

ptt1982 said:


> Open question: How many of you guys undervolt GPU/CPU combo here?
> 
> Given the environmental and monetary cost of electricity, I've come to a conclusion that watercooling combined with undervolt is the ultimate solution for me. Recently undervolted my 5600X and 6900XT, I was able to shave off 160W, from 535W --> 375W of the max peaks. The PC runs overall more efficiently due to the extremely low temps. CPU or GPU never goes past 50C now, junction for GPU stays under 70C in stress tests. Both are roughly 4% faster than when in stock. Surely, there's room to gain 6% more performance for both CPU and GPU, but it would cost 160W, or in other words 30% more power. The green little man in me says that I should think about the environment.
> 
> This is not really a monetary issue, but it just makes me think that do I really need that 3-5fps more for 160W. Quite frankly, I didn't notice any performance difference in games I tried. I played all the games as per normal, and if there were stutters before, there were equally stutters in the same places, and if the fps was high at places it was pretty much exactly the same when using 160W less power. Additionally, I've recently downscaled internal resolution to 90% in games and upscaling that to 4K just to reduce GPU load and save watts, because I can't see the difference on my 65" 4K screen between upscaled 1944p and native 2160p. I only see difference in slightly smoother fps and 30W-40W less power consumption. I get strange satisfaction out of this!
> 
> My thinking is that surely I will be buying the latest and greatest parts moving forward, such as 7900XT MCM GPU and V-cache 6900X CPU, but I'd like to put them under water and heavily undervolt them for maximum efficiency. The goal I have is to have higher than stock performance with at least 20% less wattage.
> 
> I hope undervolting with an overclock is still a suitable topic for overclockers, because it is a form of OC after all.


I think there's nothing wrong with having environmental awareness and still enjoying building, gaming on, OC'ing and benching a nice system. On the contrary, kudos for bringing it up - not least as power consumption is one side of the performance coin and the related cooling setup the other...the more peak watts - ultimately a heat energy parameter - the more cooling you have to throw at it.

As for undervolting, given the way newer CPUs and GPUs work with boost algorithms which may just claw back some of the 'savings' automatically, you may also want to consider limiting the PL / max EDC/TDC. That is my primary tool for keeping things reasonable...that said, I'm no saint, though many of the dozen or so systems in my 'home-office' are also used for work functions (including dev and back-up servers, firewalls and the like). Electricity costs are low here (plus 97% of our power in BC comes from hydro), but still, it ain't free.



Spoiler



I mentioned before that I used to do sub-zero benching back in the day, and there have been times when total power consumption of a bencher w/ Quad-SLI and hyped HEDT exceeded 4000W via 4 linked PSUs...once you get into exotic cooling, custom bios/vBios and so forth, power consumption sky-rockets. The good thing is that I still have a host of hi-po PSUs which during their new 'daily' setups barely run at 50% these days....haven't bought a new PSU for my systems since 2015...

My most power-hungry current system is a TR2950X (oc'ed) and 2x 2080 Ti Waterforce Extreme GPUs, usually running in SLI-CFR. Even on stock vBios, those cards max out at 380W each, so 760W total peak for the GPUs alone. By the time all is said and done, with all peripherals as well, it can hit as high as 1150W total. Reducing the PL on the GPUs a bit and moving the CPU oc down a notch can save about 200W, without noticeable reduction in perceived performance, ie. smooth game play. It of course does show up in benching, but that is now a private fun thing rather than record chasing at HWBot. 

The latest crop of CPUs and GPUs I run (3950X / 6900XT; 5950X/ 3090) for my home office are all much more power-efficient, including via the built-in algorithms. It makes sense to exploit those features in daily settings, unless you take a break and re-set things for some benching or demanding gaming.


----------



## Skinnered

I haven't read everything, but XTXH GPUs/dies, like on the LC Sapphire Toxic EE, can be flashed to the reference liquid-cooled RX 6900 XT BIOS with increased performance due to looser VRAM timings but higher frequency?
Is 18 Gbps then possible? Is that memory physically the same as the 16 Gbps?
Too bad you have to hassle with Linux.


----------



## jonRock1992

Skinnered said:


> I haven't read everything, but XTXH GPUs/dies, like on the LC Sapphire Toxic EE, can be flashed to the reference liquid-cooled RX 6900 XT BIOS with increased performance due to looser VRAM timings but higher frequency?
> Is 18 Gbps then possible?


It seems to work with most, but it doesn't work correctly with the Red Devil Ultimate for some reason.


----------



## Skinnered

^ Thanks, I'm not afraid to experiment somewhat, but it would comfort me if someone with the same card had done it with success.


----------



## FatFingerGamer

Neoki said:


> Yeah even though I lost performance overall to the Waterforce, I love the integration of the RGB into my ASUS Dark Hero, and that the block is fully maintenance-able. The Waterforce block can not be taken apart without ruining the cosmetics by prying off the aluminum + plastic cover that they for some reason put over the block screws.
> 
> Once new pads and paste is on, I plan to do another high wattage run with the new voltage adjustment options that Weleh shared. But after that I plan to put the daily driver nice and low to reduce overall wattage and not turn my office into a sauna.


Just don't forget to let the new pads "cure", even though it's not like how paste settles. Break 'em in slowly, then let 'em rip. If you go 0-100, the way they're made chemically is the reason they get brittle and dry out. The only reason I know is that I've changed about 200 GPU pads throughout my adventures in mining... and DON'T WORRY, I only use Nvidia cards to mine, with the exception of a few 5700 XTs and Radeon VIIs.


----------



## Neoki

FatFingerGamer said:


> just dont forget to let the new pads "cure" even tho its not like how paste settles. should break em in slowly. then let em rip. if you go 0-100 the way they are made chemically is the reason why they get brittle and dry out. only reason i know is because ive changed about 200 of my gpus pads throughout my adventures in mining... and DONT WORRY i only use nvidia cards to mine with the exception of a few 5700xt's and radeon VIIs.


Interesting info on the break-in period. Gives me Formula 1 tire break-in vibes.

What do you recommend for break in temps/period?


----------



## ryouiki

So quite a few posts back I was troubleshooting TDR/crashing in a few DirectX 11 titles... I picked up a few notes that others might be interested in if you are looking at sensor data on these cards:

HWiNFO64 "GPU Memory Usage" reporting 17-70+ GB of usage: confirmed by the HWiNFO author to be a bug in AMD ADL, and confirmed by AMD, who is working on a fix.
Memory clocks reading nonsensical values (3000+ MHz): confirmed in the release notes for AMD's latest optional driver as "Radeon performance metrics and logging features may intermittently report extremely high and incorrect memory clock values."
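Given those two reporting bugs, it's worth sanity-filtering logged sensor data before averaging or graphing it. A minimal sketch; the thresholds, field names, and sample structure are hypothetical, not from AMD or HWiNFO documentation:

```python
# Plausibility limits for a 16GB RX 6900 XT; adjust for your card.
MAX_PLAUSIBLE_MEM_MHZ = 2400   # real GDDR6 clocks sit well below this
MAX_PLAUSIBLE_VRAM_GB = 16     # the card physically has 16 GB

def clean_samples(samples):
    """Drop obviously bogus readings produced by the driver bugs."""
    return [s for s in samples
            if s["mem_clock_mhz"] <= MAX_PLAUSIBLE_MEM_MHZ
            and s["vram_used_gb"] <= MAX_PLAUSIBLE_VRAM_GB]

log = [
    {"mem_clock_mhz": 2124, "vram_used_gb": 9.8},   # plausible
    {"mem_clock_mhz": 3021, "vram_used_gb": 9.9},   # bogus clock spike
    {"mem_clock_mhz": 2120, "vram_used_gb": 70.2},  # bogus VRAM reading
]
print(len(clean_samples(log)))  # only the first sample survives
```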


----------



## kairi_zeroblade

ryouiki said:


> So quite a few posts back was troubleshooting TDR/crashing for a few DirectX 11 titles... I picked up a few notes that others might be interested in if you are looking at sensors data on these cards:
> 
> HWiNFO64 "GPU Memory Usage" reporting 17-70+GB of memory usage, this was confirmed by HWiNFO author to be bug in AMD ADL and confirmed by AMD who is working on a fix.
> Memory Clocks reading non-sensical values (3000+MHz), this was confirmed in AMD driver release notes for latest optional release as "Radeon performance metrics and logging features may intermittently report extremely high and incorrect memory clock values."


This has been in the to-do list and known-issue notes for the AMD drivers for quite some time now.. rarely it gets fixed, and then it breaks again on another driver update.. it's nothing crazy that would affect stability and make you crash in games (like you mentioned); your issue might lie elsewhere..



FatFingerGamer said:


> just dont forget to let the new pads "cure" even tho its not like how paste settles. should break em in slowly. then let em rip. if you go 0-100 the way they are made chemically is the reason why they get brittle and dry out. only reason i know is because ive changed about 200 of my gpus pads throughout my adventures in mining... and DONT WORRY i only use nvidia cards to mine with the exception of a few 5700xt's and radeon VIIs.


Quite subjective. For mining (torturing), those pads are pressed under pressure with a significant amount of heat, and that doesn't really involve "curing" or settling time (or whatever you want to call it); that process is more like "baking" them. The reason pads end up brittle/hard at the end of their lifespan is the loss of the moisture (chemical, not water) the pad holds onto, which maintains its synthetic/chewable/gummy properties; once that dries out, it's toast. Pads don't require much "curing" time, unlike some thermal pastes (some pastes/putties don't need it at all); typically a few hours of gaming would already let them meld properly, unless you have mounting issues..


----------



## jonRock1992

Got a Time Spy session in today. I was able to improve my GPU score quite a bit by increasing voltage!
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## LtMatt

Blimey I see the Timespy scores are now nearing 27K in the Hall of Fame.



jonRock1992 said:


> Got a Time Spy session in today. I was able to improve my GPU score quite a bit by increasing voltage!
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> View attachment 2530155


Excellent work, you have a better graphics score than me now Jon, well done. 

I am going to revisit Timespy soon and have another crack at it.


----------



## kairi_zeroblade

LtMatt said:


> Blimey I see the Timespy scores are now nearing 27K in the Hall of Fame.
> 
> 
> Excellent work, you have a better graphics score than me now Jon, well done.
> 
> I am going to revisit Timespy soon and have another crack at it.


Seems the last few pages (and more) have been dedicated to the podium finish for the Hall of Fame, eh?..

I hope nobody fries their card in the process..and wish you good luck on that score hunting..


----------



## LtMatt

kairi_zeroblade said:


> Seems the last few pages (and more) have been dedicated to the podium finish for the Hall of Fame, eh?..
> 
> I hope nobody fries their card in the process..and wish you good luck on that score hunting..


We all like to wax our carrots over Futuremark benchmark scores periodically.  

AMD cards have historically almost always lost to Nvidia in the Futuremark benchmarks; that is no longer the case with RDNA2.


----------



## kairi_zeroblade

LtMatt said:


> We all like to wax our carrots over Futuremark benchmark scores periodically.
> 
> AMD cards have historically almost always lost to Nvidia in all the Futuremark benchmarks, that is no longer the case with RDNA2.


Well, Futuremark will have to nerf this...lol..

Futuremark was never fair to begin with, favouring the Green Team's software/driver-based GPU scheduler rather than being neutral..


----------



## jfrob75

My best TS since being able to increase GPU voltage: I scored 24,475 in Time Spy, with a graphics score very close to the 26K mark.


----------



## jonRock1992

jfrob75 said:


> My best TS since being able to increase GPU voltage: I scored 24,475 in Time Spy, with a graphics score very close to the 26K mark.


What GPU clock frequency did you set? For some reason my average clock is a lot lower than what I have it set to.


----------



## xR00Tx

Has anyone tested the 21.10.3 driver?!


----------



## jonRock1992

xR00Tx said:


> Has anyone tested the 21.10.3 driver?!


Yeah. It was less stable for my GPU. 21.10.2 WHQL is the best for me.


----------



## jfrob75

jonRock1992 said:


> What GPU clock frequency did you set? For some reason my average clock is a lot lower than what I have it set to.


The MAX GPU clk was set to 2910MHz and the MIN GPU clk was set to 2780 MHz, if my memory is correct. This was achieved using the driver 21.10.3.


----------



## ZealotKi11er

Wow, 2910 is very high. How much was before the extra voltage?


----------



## jonRock1992

jfrob75 said:


> The MAX GPU clk was set to 2910MHz and the MIN GPU clk was set to 2780 MHz, if my memory is correct. This was achieved using the driver 21.10.3.


Nice. The max I can go is 2875MHz @ 1262mV. I tried the voltage @ 1275mV, but it yielded no frequency gains. I might try 1288mV, but I'm a little wary of cranking the voltage up that high. I saw that people on luxx were using 1.3V!


----------



## LtMatt

jonRock1992 said:


> Nice. The max I can go is 2875MHz @ 1262mV. I tried the voltage @ 1275mV, but it yielded no frequency gains. I might try 1288mV, but I'm a little wary of cranking the voltage up that high. I saw that people on luxx were using 1.3V!


Blimey, takes a brave man to go to 1.3v!


----------



## LtMatt

jfrob75 said:


> My best TS since being able to increase GPU voltage: I scored 24,475 in Time Spy, with a graphics score very close to the 26K mark.


Nice one, good score. Not too far off from 26K and no Liquid Cooled BIOS.


----------



## weleh

A friend has a waterforce and when running LC bios can't use FT2... What a shame.


----------



## jonRock1992

weleh said:


> A friend has a waterforce and when running LC bios can't use FT2... What a shame.


At least he can run the bios at all lol. My GPU won't boot with it. There's gotta be some sort of technical reason why it won't boot on the Red Devil Ultimate, because I know it's physically capable of running it.


----------



## weleh

Apparently he had the wrong bios, hmm; waiting to hear from him now.


----------



## LtMatt

jonRock1992 said:


> At least he can run the bios at all lol. My GPU won't boot with it. There's gotta be some sort of technical reason why it won't boot on the Red Devil Ultimate, because I know it's physically capable of running it.


This is the only thing putting me off trying the flash at the moment. However, I will try it soon, as I have a dual BIOS switch so I can always revert if sheet goes south. 

For now I want to max out the Timespy scores using the new tweak weleh shared before I move onto the LC BIOS to see what's left to get out of the Toxic. 

One thing I've always wondered: why is everyone on LUXX and elsewhere so fascinated with Timespy instead of Firestrike Extreme or Ultra? Are there Timespy-specific tweaks that don't work on Firestrike? Or is it just a case of Timespy being newer so most people use it? Answers on a postcard please.


----------



## jonRock1992

LtMatt said:


> This is the only thing putting me off at the moment from trying the flash. However, I will try it soon as I have a dual BIOS switch so can always revert if sheet goes south.
> 
> For now I want to max out the Timespy scores using the new tweak weleh shared before I moved onto the LQ BIOS to see what left I can get out of the Toxic.
> 
> One thing that I always wondered, why is everyone on LUXX and elsewhere so fascinated with Timespy instead of Firestrike Extreme or Ultra? Are there Timespy specific tweaks that don't work on Firestrike? Or is it just a case of Timespy is newer so most people use that? Answers on a postcard please.


If you don't mind me asking, what tweak did he share? I'm assuming people use Timespy because it's newer and the hardest one to pass. I was thinking about revisiting Firestrike Ultra soon.


----------



## LtMatt

jonRock1992 said:


> If you don't mind me asking, what tweak did he share? I'm assuming that people use Timespy because it is newer and it is the hardest one to pass. I was thinking about revisiting Fire strike Ultra soon.


The tweak you used above regarding voltage.


----------



## jonRock1992

LtMatt said:


> The tweak you used above regarding voltage.


Ohh lol. Thought you were referring to a different one. So your previous score was without the voltage tweak!?


----------



## Vesimas

I think I got the worst 6900 ever, or I'm doing something wrong. Using AMD software I'm trying 2300 min, 2400 max, 1100mV, 2150 RAM and +15% power, which are the same settings as my friend's 6900 on air, but I'm under water and I can't pass Timespy. Can't pass even upping to 1125mV :/


----------



## EastCoast

Vesimas said:


> I think i got the worst 6900 ever or i'm doing something wrong. I'm using AMD software trying 2300 min, 2400 max, 1100mv, 2150 ram and 15% power, and that are the same settings of my friend's 6900 on air but i'm under water and i can't pass timespy. Can't pass even upping to 1125mv :/


What is your friends max OC?


----------



## LtMatt

jonRock1992 said:


> Ohh lol. Thought you were referring to a different one. So your previous score was without the voltage tweak!?


Firestrike Extreme? Hell no, I was running 1.260v. 

My Timespy score of 25501 was without any voltage tweaks though.


----------



## jonRock1992

LtMatt said:


> Firestrike Extreme? Hell no, I was running 1.260v.
> 
> 
> 
> My Timespy score of 25501 was without any voltage tweaks though.


Lol hell yeah. I expect you'll get around 25800 with voltage tweaks in Timespy.


----------



## LtMatt

jonRock1992 said:


> Lol hell yeah. I expect you'll get around 25800 with voltage tweaks in Timespy.


That's what I thought too, but the best I got on my attempts with the tweak was 25503. Yep, two points better despite 50MHz higher core clocks. 

However, I think something quirky may have been going on as I saw a nice gain in Firestrike. So I am going to try Timespy again to see what improvement I can get when I have time. 

Need to get sign off from the wife so she can look after our daughter and then time between work to spend a few hours playing around with it.


----------



## 99belle99

Vesimas said:


> I think i got the worst 6900 ever or i'm doing something wrong. I'm using AMD software trying 2300 min, 2400 max, 1100mv, 2150 ram and 15% power, and that are the same settings of my friend's 6900 on air but i'm under water and i can't pass timespy. Can't pass even upping to 1125mv :/


There's deffo something wrong there. Are you using MPT?


----------



## J7SC

LtMatt said:


> (...)
> *One thing that I always wondered*, why is everyone on LUXX and elsewhere *so fascinated with Timespy* instead of Firestrike Extreme or Ultra? Are there Timespy specific tweaks that don't work on Firestrike? Or is it just a case of Timespy is newer so most people use that? Answers on a postcard please.


...If you check the HoF for Timespy vs Timespy Extreme, you find that in the latter, BigNavi is much less competitive with Ampere, compared to Timespy where only the exotically-cooled Amperes still beat BigNavi. TimeSpy is 1440p while Timespy Extreme (and of course Firestrike Ultra) are 4K. I still wonder what 'could have been' with BigNavi on a 384- or even 512-bit memory bus and/or double-the-size InfinityCache...never mind GDDR6_X_


----------



## Vesimas

EastCoast said:


> What is your friends max OC?


EDIT: 2500/2600 max voltage and max power but was too hot

EDIT2: @99belle99 noob question MPT?


----------



## LtMatt

J7SC said:


> ...If you check HoF Timespy vs Timespy Extreme, you find that in the latter, BigNavi is much less competitive with Amperes compared to Timespy, where only the exotically-cooled Amperes still beat BigNavi. TimeSpy is 1440 while Timespy Extreme (and of course Firestrike Ultra) are 4K. I still wonder what 'could have been' with BigNavi on a 384 or even 512 memory bus and/or double-the-size InfinityCache...never mind GDDR6_X_


Yes that's true, in Timespy Extreme Ampere's bandwidth comes into play so it still rules the roost there, and Port Royal of course. But all the other benches seem to favour RDNA2 now overall, and that is something you don't normally see. Shame there are no RDNA2 cards on LN2 in the HOF.


----------



## 99belle99

Vesimas said:


> EDIT: 2500/2600 max voltage and max power but was too hot
> 
> EDIT2: @99belle99 noob question MPT?


MorePowerTool

Back to basics, can your card pass Timespy stock?


----------



## Vesimas

Sorry for doing


99belle99 said:


> MorePowerTool
> Back to basics can your card pass Timespy stock?


Yes, and the score is 18213. I just tried his max settings, 2500/2600, max voltage, max RAM and max power; I can pass, but I'm scoring lower than stock: 17779.

EDIT: btw my actual target is just to do some undervolting while also improving performance vs stock


----------



## J7SC

LtMatt said:


> Yes that's true, Timespy Extreme Ampere's bandwidth comes into play so still rules the roost there, and Port Royal of course. But all the other benches it seems to favour RDNA2 now overall and that is something you don't normally see. Shame there's no RDNA2 cards on LN2 on the HOF.


No doubt RDNA2 is already a superb architecture, bar perhaps the memory bandwidth limitation. Even in PortRoyal (1440p btw), a title with ray tracing etc, my 6900XT is now scoring deep into 3080 territory. If AMD's Zen architecture improvements over time are any indication, RDNA 2+/3/4 will be mind-blowing. Of course Nvidia (and perhaps now Intel) won't be standing still either....either way, good times ahead (other than for our wallets ).


----------



## Vesimas

Just tried 2200/2400, 1150mV, max RAM, +7% power and I can pass TS with a score of 18755, but I don't think it's a good idea considering I'm consuming more than stock :V

EDIT: trying some combinations
2200/2300, 1100mV, RAM 2150, +15%: no pass
2200/2300, 1100mV, RAM stock, +15%: no pass
2000/2200, 1100mV, RAM 2150, +15%: no pass
2000/2300, 1125mV, RAM 2150, +15%: no pass
2000/2400, 1150mV, RAM 2150, +15%: pass, 640 more GPU score but more power consumption vs stock
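Sweeps like the one above are easier to keep track of with a tiny log helper. This is only a sketch; the `Run` fields and names are my own invention (not from any AMD tool), filled in with the combinations listed in this post.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Run:
    """One stability-test attempt (field names are made up for this sketch)."""
    min_mhz: int
    max_mhz: int
    mv: int
    vram_mhz: Optional[int]  # None = stock memory clock
    power_pct: int
    passed: bool

# The combinations tried above, transcribed as data.
runs = [
    Run(2200, 2300, 1100, 2150, 15, False),
    Run(2200, 2300, 1100, None, 15, False),
    Run(2000, 2200, 1100, 2150, 15, False),
    Run(2000, 2300, 1125, 2150, 15, False),
    Run(2000, 2400, 1150, 2150, 15, True),
]

# Pick the passing run with the highest max clock.
best = max((r for r in runs if r.passed), key=lambda r: r.max_mhz)
print(best.max_mhz, best.mv)  # → 2400 1150
```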


----------



## lawson67

jonRock1992 said:


> Nice. The max I can go is 2875MHz @1262mV. I tried the voltage @ 1275mV, but it yielded no frequency gains. I might try 1288mV, but I'm a little weary of cranking the voltage up that high. I saw that people on luxx we're using 1.3V!


I just had a run at a max of 2890 @ 1.262mV, 530W, and it went straight through, no crash, but I only hit 25300. Did you get your water temps right down, Jon?


----------



## jonRock1992

lawson67 said:


> I just had a run at max 2890 @1.262mv 530w and it went straight though no crash but i only hit 25300, did you get your water cooled right down jon?


I opened up the window and had a fan blowing the cool air in. I also cranked my fans up to max speed. So my temps were lower than they were before. I think that helped quite a bit.


----------



## lestatdk

Vesimas said:


> I think i got the worst 6900 ever or i'm doing something wrong. I'm using AMD software trying 2300 min, 2400 max, 1100mv, 2150 ram and 15% power, and that are the same settings of my friend's 6900 on air but i'm under water and i can't pass timespy. Can't pass even upping to 1125mv :/


Try setting the voltage to max. It's not necessarily stable at lowered voltage.

Have you monitored the temperatures?


----------



## CS9K

lestatdk said:


> Try setting the voltage to max. It's not necessarily stable at lowered voltage.


Came to say this.

@Vesimas, below about 2550-2600MHz, the GPU will obey what you set your "voltage" slider to. That slider doesn't directly control voltage, but it _does_ offset the entire voltage<->frequency curve by that much.

With your GPU set at 2400 or 2500MHz, you are taking away too much voltage for the GPU to function. My reference RX 6900 XT can only do "1145mV" on the voltage slider. Perhaps start there at 2400 or 2500MHz, and see if you can pass Time Spy. If so, drop it by 10mV and test again, once you crash, raise the slider 20mV and never set it lower than that.

And to clarify, above about 2600MHz, the GPU ignores the voltage slider, and applies the voltage it needs to achieve the core clock speed that you have set.

Behold, the most crude paint artwork of what the voltage slider does. Ignore the values at the top right, they are for an RX 6800, but the process is the same:
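The two behaviours described above (curve offset below the knee, slider ignored above it, plus the drop-10mV/raise-20mV tuning routine) can be sketched numerically. Everything here is a placeholder: the knee frequency, the default slider value and the linear "stock curve" are illustrative assumptions, not real fused values.

```python
CURVE_KNEE_MHZ = 2600      # assumed point above which the slider is ignored
DEFAULT_SLIDER_MV = 1175   # assumed default slider position on a reference card

def stock_vf_mv(freq_mhz: int) -> int:
    """Placeholder stock voltage/frequency curve (integer mV for clarity)."""
    return 900 + (freq_mhz - 2000) // 8

def effective_voltage_mv(freq_mhz: int, slider_mv: int) -> int:
    """Voltage the card would apply at a given clock.

    Below the knee, the whole V/F curve is shifted down by however far
    the slider sits below its default; above the knee the slider is
    ignored and the GPU takes whatever the clock needs.
    """
    if freq_mhz < CURVE_KNEE_MHZ:
        return stock_vf_mv(freq_mhz) - (DEFAULT_SLIDER_MV - slider_mv)
    return stock_vf_mv(freq_mhz)

def tune_slider(start_mv: int, passes_test) -> int:
    """The tuning routine above: drop 10mV per passing run, then on the
    first failure come back up 20mV and stop there."""
    mv = start_mv
    while passes_test(mv):
        mv -= 10
    return mv + 20
```

For example, a card that is (unknown to you) stable down to 1100mV settles at `tune_slider(1145, lambda mv: mv >= 1100)`, which returns 1115: 20mV above the first crashing value.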


----------



## Skinnered

I also don't understand the much lower frequencies I am getting with my Sapphire Toxic EE LC compared to other users.

In Serious Sam 4 (Vulkan, everything maxed out, 5120x2880, ReShade with RTGI included), I even have to drop my settings to 2755 core and ~2116 mem or my PC shuts down (results in ~2710/2100 in game).
Hotspot temps are in the 95°C range, GPU 65°C with ~60% fan power.
GPU wattage in MPT at 390W, and 1200mV.

The radiator fans are loud and the overall (cooling) performance advantage compared to my former reference model is a bit disappointing.

Is there something wrong with the card, I wonder, or just bad luck?


----------



## Vesimas

Sorry for the late answer but I was eating  After a ton of tests I think I found the sweet spot, at least for Time Spy normal; I'll test all the other benchmarks and games:
stock: total 18231, GPU 20271
2300/2500, 1135mV, RAM max + fast timings, +10% power: total 19042, GPU 21523

Opinions? Thanks all


----------



## Steinar31

XFX RX 6900 XT Speedster MERC319

Maybe one of you guys can help me with this. I have never had a Radeon card before, but my XFX RX 6900 XT has two BIOSes, Rage and Normal, selected via the BIOS switch on the card, and it came with the Rage BIOS enabled. The fans always go to 3000 rpm, though; I have to put the card in quiet mode in the software to keep it from sitting at 3000 rpm. I took a look into the BIOS using MorePowerTool, and under the overdrive limits the acoustic limit / target RPM is set like this:

Rage BIOS (acoustic limit / target rpm)
- quiet: 3000 / 1650
- balanced: 3300 / 3000
- turbo: 2000 / 2000
- rage: 3300 / 3300

Temperature targets: quiet 85°C, balanced 80°C, turbo 95°C, rage 80°C.

Radeon stress test, 10 min, default settings, Rage BIOS: 310W, junction temp 81-83°C, current temp 59-61°C, fan speed 3000 rpm, GPU clock 2461MHz.

When I switch over to the other BIOS on the card I get more of a normal fan curve:

Normal BIOS (acoustic limit / target rpm)
- quiet: 2000 / 1200
- balanced: 2500 / 1200
- turbo: 2250 / 1750
- rage: 2800 / 1750

Temperature targets: quiet 90°C, balanced 89°C, turbo 95°C, rage 95°C.

Radeon stress test, 10 min, default settings, Normal BIOS: 290W, junction temp 90°C, current temp 75°C, fan speed 1260 rpm.

The temps are really good with this card, but what I want to know is whether the Rage BIOS on the XFX RX 6900 XT Speedster MERC319 is similar to other RX 6900 XTs, because I don't get why this card on the Rage BIOS needs the fans ramped up to 3000 rpm.


----------



## lestatdk

Vesimas said:


> Sorry for the late answer but i was eating  After a ton of test i think i found the sweetspot, at least for Time Spy normal, i'll test all othe bench and games:
> stock total 18231, 20271 GPU
> 2300/2500, 1135mV, ram max + fast timing, 10% power total 19042, 21523 GPU
> 
> Opinions? Thankx all


It will go higher if you increase the voltage


----------



## Vesimas

Going to sleep, but for today I can say that with 2300/2500, 1135mV, RAM max + fast timings and +10% power I passed all the 3DMark benches (normal, extreme, ultra), Heaven and Superposition without problems, with a max GPU temp of 50°C, while also increasing the score vs stock


----------



## EastCoast

Vesimas said:


> EDIT: 2500/2600 max voltage and max power but was too hot
> 
> EDIT2: @99belle99 noob question MPT?


If it's too hot at 2500MHz then below that is more stable. Sounds like 2500/2600MHz is a suicide run.


----------



## OC-NightHawk

J7SC said:


> ...If you check HoF Timespy vs Timespy Extreme, you find that in the latter, BigNavi is much less competitive with Amperes compared to Timespy, where only the exotically-cooled Amperes still beat BigNavi. TimeSpy is 1440 while Timespy Extreme (and of course Firestrike Ultra) are 4K. I still wonder what 'could have been' with BigNavi on a 384 or even 512 memory bus and/or double-the-size InfinityCache...never mind GDDR6_X_


Maybe the chips just couldn't handle that kind of bandwidth. Here's hoping the RX 7900 XT keeps up the pace of fill rate improvements and significantly increases the bandwidth in order to give Nvidia a run for its money.


----------



## J7SC

OC-NightHawk said:


> Maybe the chips just couldn't handle that kind of bandwidth. Here's hoping that the RX 7900 XT keeps the pace for fill rate improvements and significantly increases the bandwidth in order to give Nvidia a run for it's money.


...I think AMD will, notwithstanding progress by their competitors. AMD's recent patent filings on 'tile interconnect' for mGPUs are highly interesting.


----------



## CS9K

OC-NightHawk said:


> Maybe the chips just couldn't handle that kind of bandwidth. Here's hoping that the RX 7900 XT keeps the pace for fill rate improvements and significantly increases the bandwidth in order to give Nvidia a run for it's money.


Rumor has it that RDNA3 will have 256MB of Infinity Cache (which will handily be the fabric that glues the two? cores together).

Just thinking out loud for a bit:

The way the Infinity Cache does its thing, a 4k frame is _just_ too big for 128MB to be efficient, unlike how 128MB is _plenty_ at 1080p and 1440p. I had hoped that overclocking the fabric would make a larger difference than it does in practice, but I think memory bandwidth is what is holding the card back at 4k, as others have observed with their "unlocked memory speed" bios flash.

I am experienced with DDR4 memory overclocking, and even got to mess around with GDDR6 overclocking with my RX 5600 XT last year. The RX 5600 XT did not have much headroom left in it, unfortunately (I barely got out of margin of error for performance improvements), I am _READY_ for when we finally get bios modding for RDNA2, as I believe the memory setup has a decent bit of headroom left in it so far as timings and voltage go, especially for those of us with full-coverage water blocks (where GDDR6 temps top out under 60C).
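Rough arithmetic behind the "4K frame is just too big for 128MB" point. The five-surface working set is my assumption for a deferred-style renderer (G-buffer layers, depth, HDR target), not a measured number:

```python
def surface_mib(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    """Size of one full-screen render target in MiB."""
    return width * height * bytes_per_pixel / 2**20

for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "4k":    (3840, 2160)}.items():
    one = surface_mib(w, h)
    # ~5 surfaces per frame is an assumption, not a measurement
    print(f"{name}: one target {one:5.1f} MiB, ~5-surface set {5 * one:6.1f} MiB")
```

That puts 1080p (~40 MiB) and 1440p (~70 MiB) comfortably inside 128MB of Infinity Cache, while 4K lands around 158 MiB and spills out to GDDR6.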


----------



## jonRock1992

CS9K said:


> Rumor has it, that RDNA3 will have 256MB of Infinity Cache (which will handily be the fabric that glues the two? cores together).
> 
> Just thinking out loud for a bit:
> 
> The way the Infinity Fabric does its thing, a 4k frame is _just_ too big for 128MB to be efficient, unlike how 128MB is _plenty_ at 1080p and 1440p. I had hoped that overclocking the fabric would make a larger difference than it does in practice, but I think memory bandwidth is what is holding the card back at 4k, as others have observed with their "unlocked memory speed" bios flash.
> 
> I am experienced with DDR4 memory overclocking, and even got to mess around with GDDR6 overclocking with my RX 5600 XT last year. The RX 5600 XT did not have much headroom left in it, unfortunately (I barely got out of margin of error for performance improvements), I am _READY_ for when we finally get bios modding for RDNA2, as I believe the memory setup has a decent bit of headroom left in it so far as timings and voltage go, especially for those of us with full-coverage water blocks (where GDDR6 temps top out under 60C).


I really hope bios modding becomes a thing with RDNA2.


----------



## J7SC

jonRock1992 said:


> I really hope bios modding becomes a thing with RDNA2.


I hope so, too. But it is a bit of an arms race, with vendors tightening things up - even more so as variable boost algorithms are already exploiting a lot of the headroom left in large-scale production of GPUs (and CPUs).

When I first got into overclocking, the likes of the GTX 670/680 were ridiculously easy to bios-mod: save the .rom as .txt, open it with Notepad and mod away to your heart's content... There were additional utilities, like the ATITool I referenced the other day, with which you could change VRAM timings on the fly...then the likes of UEFI came along .


----------



## CS9K

J7SC said:


> I hope so, too. But it is a bit of an arms race, with vendors tightening things up - even more so as variable boost via algorithms is already exploiting a lot of headroom left in large-scale production of GPUs (and CPUs).
> 
> When I first got into overclocking, the likes of GTX 670/680 were ridiculously easy to bios-mod > save .rom as .txt, open with Notepad and mod away to your heart's content... There were additional utilities, like the ATItool I referenced the other day with which you could change VRAM timing on the fly...then the likes of UEFI came along .


It's a double-edged sword. I have absolutely zero problem with bios modding being tough to do... god knows that 99.9999% of the general population needs to stay the hell out of the vbios because they don't know what they're doing. In the past, it was too easy for people to nuke a GPU, then whine because they either had to fight for their warranty, or were denied.

And to be fair, I would rather it be a while for us to get things like BIOS modding and Vcore control, because not only does it give us time to sort out card behavior _without_ those things, it also prevents a massive wave of "Mt. Stupid" GPUs from being RMA'd by people who had no idea what they were doing.

From the Peak of Mt. Stupid, Vcore VRM's go ⚡*ZORT*⚡


----------



## J7SC

..whether 'Mount Stupid' or just the typical bell curve that seems to appear in most things involving a lot of folks, the fact is that I bought - not rented - an expensive GPU. I'm leaving out my not-insignificant modding experience, and even the fact that I've never RMA'ed any computer part I had modded in case it broke (bios or otherwise).

In general terms, when I add much more cooling capacity and the like, I expect to be able to 'reward' that by going past the bios limits introduced for large-scale production geared towards worst-case air cooling. I know that when I do so, many vendors - rightfully - have the basis to deny warranty if it's related to the mods.

Also, looking at the Nvidia camp, no matter how hard they tied things down, there are always more or less compatible vbios from competing board partners available, whether for the previous-gen 2080 Ti or the current-gen RTX 3090. For both those cards it was a cinch to find multiple XOC 1000W vBios (3 different ones for my 3090). I would have preferred a vbios editing tool with access to most soft parameters (there are some hard ones baked in) and likely would have settled for something less 'out there'.

In any case, with my current 6900XT, all I really miss is the ability to boost VRAM beyond 2150MHz, as even the AMD software 'OC option' tells me 2260 is doable. MPT has pretty much taken care of everything else


----------



## kairi_zeroblade

OC-NightHawk said:


> Maybe the chips just couldn't handle that kind of bandwidth. Here's hoping that the RX 7900 XT keeps the pace for fill rate improvements and significantly increases the bandwidth in order to give Nvidia a run for it's money.


The MI200-series parts (MI250 and MI250X) are using 128GB of HBM2e and an MCM design; I hope AMD uses that approach again for RDNA3 (or future GPU designs)..for sure they'll give everyone (the competition) a run for the money..


----------



## WR-HW95

Hello.
I came to ask people who have had thermal paste pump-out: have you noticed any effect on memory OC once the hotspot starts to rise?
Thing is, my watercooled reference card was running fine at 2150 memory when I tested TS and got a score of 23.8K, but 2 weeks later performance was terrible above 2140, and even at 2140 [email protected] did computing errors now and then.
I have had pump-out problems with 1080 Tis and MX-4 before, so I bought Phobya HeGrease for the 6900XT and changed the same stuff on the 1080 Ti too. Now that leaked off the 1080 Ti as well, and when I was removing the block I noticed it didn't even seem to be screwed down tight. So when I repasted and screwed it back on, I tried the screws on the 6900XT and all the screws tightened up about half a turn there too.
Temps on the card have been OK all this time. On a ~200W load the delta is 9°C and at the max of 413W it's 21°C, but is it still possible that the memory degradation is related to the earlier loose block?
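One quick sanity check on the mount, using the two deltas quoted above: if the °C-per-watt figure stays roughly constant across loads, the block contact is probably fine now. A rule of thumb, not a guarantee:

```python
# (board power in W, reported temperature delta in °C) from the post above
readings = [(200, 9), (413, 21)]

for watts, delta_c in readings:
    # thermal resistance in °C per watt; constant-ish values across
    # loads point away from a currently loose or pumped-out mount
    print(f"{watts:3d} W -> {delta_c / watts:.4f} °C/W")
```

Both loads land in the 0.045-0.051 °C/W range, so the cooling path itself looks consistent; whether the memory degraded earlier, while the block was loose, is a separate question.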


----------



## lawson67

LtMatt said:


> Blimey I see the Timespy scores are now nearing 27K in the Hall of Fame.
> 
> 
> Excellent work, you have a better graphics score than me now Jon, well done.
> 
> I am going to revisit Timespy soon and have another crack at it.


I tried with Jon's settings and just about beat your score, but I know it will go faster; it's all about getting it as cool as you can. I just did that run first thing this morning while it was still cool. My VRAM doesn't help either, as it can't go much above 2116MHz without taking a hit


----------



## LtMatt

lawson67 said:


> I tried with jon's settings and just about beat your score but i know it will go faster but its all about getting it as cool as you can , i just did that run first thing this morning while it was still cool, my Vram don't help also as it cant go much above 2116mhz without taking a hit


Nice one. 

I achieved that score with only 2825Mhz core clock somehow, all prior scores were 100-200 points lower as well so I think mine was a bit of a fluke run. 

Your memory will be holding you back a little for sure.


----------



## lawson67

LtMatt said:


> Nice one.
> 
> I achieved that score with only 2825Mhz core clock somehow, all prior scores were 100-200 points lower as well so I think mine was a bit of a fluke run.
> 
> Your memory will be holding you back a little for sure.


Its getting higher


----------



## LtMatt

lawson67 said:


> Its getting higher
> 


Going in the right direction.

What clocks are you running, 2875?


----------



## lawson67

LtMatt said:


> Going in the right direction.
> 
> What clocks are you running, 2875?


The last run I did was a max of 2885MHz; my average clock was lower than on my 25500 run, yet it scored more??
Edit: it's cos I left Freesync on on the first run lol

25641

25507


----------



## LtMatt

lawson67 said:


> The last run i did was max 2885mhz my average clock was lower than my 25500 yet scored more??
> Edit its cos i left Freesync on on the first run lol
> 
> 25641
> 
> 25507


I'm not sure if the reported values are always accurate tbh.


----------



## lawson67

LtMatt said:


> Going in the right direction.
> 
> What clocks are you running, 2875?


Slowly heading in the right direction 

I scored 22 173 in Time Spy


----------



## lawson67

I scored 22 228 in Time Spy


----------



## LtMatt

Can you make it to 26K? That seems about the absolute best anyone on a non-LC BIOS can hope for, I think.


----------



## lawson67

LtMatt said:


> Can you make it to 26K? Seems about the absolute best anyone on non LQ BIOS can hope for I think.


I'm slowly heading that way but the VRAM is holding me back

I scored 22 188 in Time Spy


----------



## LtMatt

lawson67 said:


> I slowly heading that way but Vram holding me back
> 


Nice, how much voltage and what clock speed you at?


----------



## jonRock1992

My underdog of an XTXH GPU seems to do much better with more voltage. I've got some ideas for my next bench session. Gotta try to beat @lawson67 now 😜


----------



## LtMatt

jonRock1992 said:


> My underdog of an XTXH GPU seems to do much better with more voltage. I've got some ideas for my next bench session. Gotta try to beat @lawson67 now 😜


Feed it 1.3v.  🔥🔥


----------



## lawson67

LtMatt said:


> Nice, how much voltage and what clock speed you at?


Right now I am using these settings, but it keeps letting me go higher, and I'm not complaining lol

I scored 22 287 in Time Spy


----------



## jonRock1992

lawson67 said:


> Right now i am using these but it keeps letting me go higher but i am not complaining lol
> 
> I scored 22 287 in Time Spy
> 


You have such a good bin on that GPU. Too bad the LC bios doesn't work correctly with the Red Devil Ultimate. Wish we could figure that one out lol.


----------



## lestatdk

LtMatt said:


> Feed it 1.3v.  🔥🔥


1.4V . I mean, why not ??


----------



## lawson67

jonRock1992 said:


> You have such a good bin on that GPU. Too bad the LC bios doesn't work correctly with the Red Devil Ultimate. Wish we could figure that one out lol.


I was talking to @xR00Tx; he said he had the same issue with his PowerColor when he flashed the LC bios like us, but he changed his motherboard and his hard drive and then it was fine. I doubt I'll be changing my motherboard, but I might change my hard drive and give that a go. If I could run 2150MHz VRAM I believe I would already be at 26K.


----------



## jonRock1992

I had a feeling it had something to do with the motherboard. I don't really want to change from my Dark Hero though. If swapping the hard drive is what actually fixed it, then it might have something to do with the windows installation. Maybe we could load up the LC bios in MPT and write the LC powerplay tables before booting into Linux and flashing?


----------



## lawson67

jonRock1992 said:


> I had a feeling it had something to do with the motherboard. I don't really want to change from my Dark Hero though. If swapping the hard drive is what actually fixed it, then it might have something to do with the windows installation. Maybe we could load up the LC bios in MPT and write the LC powerplay tables before booting into Linux and flashing?


That might work, it's worth a shot, but I really think you need the programmer to get it on there correctly. Most of the guys over at LUXX used the 341 programmer, as Linux wasn't successful for everyone. I bought one and it would boot, but only until a restart.


----------



## LtMatt

jonRock1992 said:


> You have such a good bin on that GPU. Too bad the LC bios doesn't work correctly with the Red Devil Ultimate. Wish we could figure that one out lol.


He really does, great bin. 2925Mhz in Timespy, mother of christ.


----------



## LtMatt

lawson67 said:


> I was talking to @xR00Tx he said he had the same with his powercolor when he flashed the LC bios as us but he changed his M/B and his hard drive then it was fine, doubt ill be changing my M/B but i might change my Hard drive and give that a go, if i could run 2150mhz Vram i would already be at 26k i belive


Which board are you using?


----------



## lawson67

LtMatt said:


> Which board are you using?


Asus b550-E and the scores are still slowly going up, gonna have one last run then that's me done for the day


----------



## J7SC

lestatdk said:


> 1.4V . I mean, why not ??


...add one of these for 1.5V ! 🎃


----------



## LtMatt

J7SC said:


> ...add one of these for 1.5V ! 🎃
> 


 🔥 🔥 🔥 🔥 🔥 🔥 🔥

Is that how you got 35K+ in Firestrike Extreme?


----------



## J7SC

LtMatt said:


> 🔥 🔥 🔥 🔥 🔥 🔥 🔥
> 
> Is that how you got 35K+ in Firestrike Extreme?


...I'm not commenting on that w/o my lawyer present !


----------



## ZealotKi11er

Waiting for 12900K before I crush the score again.


----------



## LtMatt

J7SC said:


> ...I'm not commenting on that w/o my lawyer present !


 My man!



ZealotKi11er said:


> Waiting for 12900K before I crush the score again.


Traitor! 🤮


----------



## gtz

ZealotKi11er said:


> Waiting for 12900K before I crush the score again.


We will have to see how Timespy treats the E-cores. Also, Timespy is very latency sensitive and does not care about bandwidth.

I do want to try it and will prob sell off my secondary Ryzen system (Ryzen has been great to me, just want to experiment) to try the i7 12700K. The P-core performance looks great, and I'd prob disable the E-cores and just run 8 cores 16 threads. I am very excited for both Alder Lake and Zen3D. I am just waiting to replace my main rig for the HEDT versions.


----------



## snakeeyes111

Woop woop 12900k highscores on new lvl. 22k cpu score inc


----------



## gtz

snakeeyes111 said:


> Woop woop 12900k highscores on new lvl. 22k cpu score inc


Did these leak already? 

Very nice if true, best I could do with my 9980XE was 19600 (normal water-cooling).


----------



## OC-NightHawk

gtz said:


> Did these leak already?
> 
> Very nice if true, best I could do with my 9980XE was 19600 (normal water-cooling).


They were on preorder at Newegg yesterday and were gone fast.


----------



## kairi_zeroblade

snakeeyes111 said:


> Woop woop 12900k highscores on new lvl. 22k cpu score inc


and somebody complained about the E-Cores being slow..lol..


----------



## CS9K

kairi_zeroblade said:


> and somebody complained about the E-Cores being slow..lol..


"Ah ****, here we go again..."


----------



## J7SC

CS9K said:


> "Ah ****, here we go again..."


..yes, same old soap opera to power the sales...stay tuned for the next episode: Ryzen 5K VCache !

And for those into conspiracy theories, timely Win 11 Ryzen / L3 cache issues before fixes, after Intel inhouse comps 🎃


----------



## ZealotKi11er

gtz said:


> We will have to see how timespy treats the e cores. Also timespy is very latency sensitive and does not care about bandwidth.
> 
> I do want to try it and will prob sell off my secondary Ryzen system (Ryzen has been great to me, just want to experiment) to try the i7 12700K. The p core performance looks great and would prob disable the e cores and just run 8 cores 16 threads. I am very excited for both alder lake and Zen3D. I am just waiting to replace my main rig for the HEDT versions.


Probably 8+8 with HT Disabled might give highest score. Also if E cores can OC to 5GHz it can probably be a TS champ.


----------



## Neoki

Got some new thermal upgrades for the Zero just in time to go along with my hard tubing work.


----------



## xR00Tx

lawson67 said:


> I was talking to @xR00Tx he said he had the same with his powercolor when he flashed the LC bios as us but he changed his M/B and his hard drive then it was fine, doubt ill be changing my M/B but i might change my Hard drive and give that a go, if i could run 2150mhz Vram i would already be at 26k i belive


Hey dude, I think you are confusing me with someone else. I do not have a PC 6900xt and I haven't flashed any bios to my gpu!


----------



## lawson67

xR00Tx said:


> Hey dude, I think you are confusing me with someone else. I do not have a PC 6900xt and I haven't flashed any bios to my gpu!


Yeah, I was getting confused, sorry mate. It was @L!ME who I was talking to, as he also has a Powercolor and had the same problem as me and Jon when flashing the LC bios; however, he sorted it by changing M/B and hard drive.


----------



## CS9K

Welp, I found the voltage control, gave it 25 extra mV, and all went well: broke 24k in Time Spy.

I scored 21 431 in Time Spy

I reverted to my previous daily-driver settings. I'm not happy with the kludge that is temp-dependent Vmin in the vein of a daily-driver config.

There IS another kludge one can use to temper the "over-volt" people were seeing (full voltage at partial loads before the card's LLC kicks in), and that's using the temperature TVmin+Hysteresis as pictured below. This works easily on water, but may be more tricky to pull off on air. 










Not something I'm happy running daily, but it is what it is. I eagerly await full bios control.
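The trigger-plus-hysteresis behaviour described above can be sketched as a tiny state machine: the raised Vmin engages above a trigger temperature and stays engaged until the card cools back below trigger minus hysteresis. The threshold numbers below are made up for illustration, not MPT defaults:

```python
def tvmin_engaged(temp_c: float, was_engaged: bool,
                  trigger_c: float = 60.0, hysteresis_c: float = 5.0) -> bool:
    """Decide whether the temp-dependent Vmin should currently be active.

    Engages once the GPU passes trigger_c, then stays engaged until it
    cools below trigger_c - hysteresis_c, so it doesn't flap on and off
    while the temperature hovers around the trigger point.
    """
    if was_engaged:
        return temp_c > trigger_c - hysteresis_c
    return temp_c > trigger_c
```

On water the temperature swing is small, so you can sit just past the trigger reliably; on air the bigger swings make the window harder to hit, which matches the "trickier on air" observation.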


----------



## kairi_zeroblade

CS9K said:


> Welp, I found the voltage control, gave it 25 extra mV, and all went well: broke 24k in Time Spy I scored 21 431 in Time Spy
> 
> I reverted back to my previous daily-driver settings. I'm not happy with the kludge that is temp dependent vmin in the vain of a daily-driver config.
> 
> There IS another kludge one can use to temper the "over-volt" people were seeing (full voltage at partial loads before the card's LLC kicks in), and that's using the temperature TVmin+Hysteresis as pictured below. This works easily on water, but may be more tricky to pull off on air.
> 
> View attachment 2530410
> 
> 
> Not something I'm happy running daily, but it is what it is. I eagerly await full bios control.


Actually I was already using that before, on air (though I have a 6800XT). Not too tricky to set up, it just needs more time for your testing, and those reboots in between tests are pretty much the annoying part..


----------



## lawson67

kairi_zeroblade said:


> Actually I am already using that before on air..not to tricky to setup (though I have a 6800XT) just needs more time for your test and those reboots in between test are pretty much the annoying part..


You don't have to restart your PC after putting settings into MPT if you use ToastyX's Restart64.exe, which comes packaged with CRU. I used to use it a lot overclocking monitors a few years ago; it will restart your graphics driver while you are still in Windows and apply your newly loaded MPT settings without the need for a reboot. You can download CRU in the link below.

CRU


----------



## kairi_zeroblade

lawson67 said:


> You don't have to restart your PC after putting settings into MPT if you use ToastyX Restart64.exe which comes packaged with CRU, i used to use it a lot overclocking monitors a few years ago but it will restart your graphic driver while you are still in window's and apply your newly loaded MPT settings without the need for a reboot, you can download CRU in the link below
> 
> CRU


oh that's neat..never knew that..


----------



## CS9K

lawson67 said:


> You don't have to restart your PC after putting settings into MPT if you use ToastyX Restart64.exe which comes packaged with CRU, i used to use it a lot overclocking monitors a few years ago but it will restart your graphic driver while you are still in window's and apply your newly loaded MPT settings without the need for a reboot, you can download CRU in the link below
> 
> CRU


That is indeed neat for a quick-turnaround test. I personally prefer a fresh boot when it comes to testing settings that may or may not achieve a performance difference. I don't want driver wonkiness doing anything to performance outside of the settings I punch in.


----------



## 99belle99

I don't know how nobody knew that. I have been using that method since I was on my old 5700 XT what must be 2 years ago now.


----------



## CS9K

99belle99 said:


> I don't know how nobody knew that. I have been using that method since I was on my old 5700 XT what must be 2 years ago now.


People did, it's just not a popular method it seems.


----------



## 99belle99

CS9K said:


> People did, it's just not a popular method it seems.


The only downside is Radeon Settings takes a minute to return to normal. It seems longer as you're impatiently waiting for it.


----------



## Thanh Nguyen

2700mhz vs 2900mhz in game, how many more fps do I get at 1440p guys?


----------



## ZealotKi11er

Thanh Nguyen said:


> 2700mhz vs 2900mhz in game, how many more fps do I get at 1440p guys?


4-5% more fps.


----------



## Thanh Nguyen

ZealotKi11er said:


> 4-5% more fps.


Do u have a video comparing those 2 clocks? Thanks.


----------



## Nizzen

Thanh Nguyen said:


> Do u have a video compare those 2 clocks? Thanks.


7.41% is the difference in clock.
4-5% gain in fps sounds about right, if the computer doesn't have any other bottlenecks.
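For anyone who wants to check the arithmetic, a quick sketch (clocks taken from the question above):

```python
def clock_gain_pct(old_mhz: float, new_mhz: float) -> float:
    """Percentage increase going from old_mhz to new_mhz."""
    return (new_mhz - old_mhz) / old_mhz * 100.0

print(round(clock_gain_pct(2700, 2900), 2))  # 7.41
```

Fps rarely scales 1:1 with core clock on these cards, since memory bandwidth and the rest of the pipeline don't speed up with it, hence the 4-5% real-world gain from a 7.4% clock bump.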


----------



## LtMatt

ZealotKi11er said:


> 4-5% more fps.


Yep at best I’d say.


----------



## Thanh Nguyen

How many people actually can run every game at 2900mhz ?


----------



## LtMatt

Thanh Nguyen said:


> How many people actually can run every game at 2900mhz ?


Not many unless they add voltage above 1.2v. This would drop even further as you increase resolution.


----------



## Neoki

Thanh Nguyen said:


> How many people actually can run every game at 2900mhz ?


Most of us running high clocks only do it for benchmarking. Daily driving/gaming is not that high.


----------



## kairi_zeroblade

Neoki said:


> Most of us running high clocks only do it for benchmarking. Daily driving/gaming is not that high.


we're on OCN..wut da hek..2.9ghz on GPU is easy peasy daily.. 🤣


----------



## ZealotKi11er

Thanh Nguyen said:


> How many people actually can run every game at 2900mhz ?


3GHz 24/7


----------



## LtMatt

I'm having a go at Timespy Extreme men.








I scored 12 398 in Time Spy Extreme


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





Fastest 6900 XT graphics score atm.


----------



## amigafan2003

LtMatt said:


> I'm having a go at Timespy Extreme men.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 12 398 in Time Spy Extreme
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Fastest 6900 XT graphics score atm.


Nice!


----------



## Neoki

LtMatt said:


> I'm having a go at Timespy Extreme men.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 12 398 in Time Spy Extreme
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Fastest 6900 XT graphics score atm.


Nice run dude!


----------



## weleh

How much power Matt?


----------



## jonRock1992

Thanh Nguyen said:


> How many people actually can run every game at 2900mhz ?


I do lol. I'm a risk-taker and I need max performance for my 270Hz 1440p monitor.


----------



## LtMatt

weleh said:


> How much power Matt?


Too much Lol. Benching in progress, more details to follow.

Meanwhile, here's a Firestrike Ultra. No LC BIOS.








I scored 17 353 in Fire Strike Ultra


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





Here's the world's current fastest Firestrike Extreme.








I scored 32 574 in Fire Strike Extreme


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





People will beat these before long, my silicon is not the best.


----------



## kairi_zeroblade

LtMatt said:


> I'm having a go at Timespy Extreme men.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 12 398 in Time Spy Extreme
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Fastest 6900 XT graphics score atm.


you can disable SMT for a little bit more push on scores..


----------



## jonRock1992

LtMatt said:


> Too much Lol. Benching in progress Jon, more details to follow.
> 
> Meanwhile, here's a Firestrike Ultra. No LC BIOS.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 17 353 in Fire Strike Ultra
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Here's the worlds current fastest Firestrike Extreme.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 32 574 in Fire Strike Extreme
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> People will beat these before long, my silicon is not the best.


Wow that clock speed is nice! Looking forward to what you can achieve.


----------



## LtMatt

jonRock1992 said:


> Wow that clock speed is nice! Looking forward to what you can achieve.


Not much more now wrapping up.


----------



## LtMatt

kairi_zeroblade said:


> you can disable SMT for a little bit more push on scores..


That applies to Timespy standard only. In Extreme it scales better with SMT.


----------



## gtz

kairi_zeroblade said:


> you can disable SMT for a little bit more push on scores..


Timespy Extreme scales to 32 threads, so disabling SMT will hurt him. Regular Timespy is the one that handles 32 threads or more poorly.


----------



## LtMatt

32619 Firestrike Extreme 🔥🔥🔥








I scored 32 619 in Fire Strike Extreme


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## LtMatt

Final results from today's run. cc @jonRock1992 @xR00Tx @weleh 

1.3v+ 🔥 🔥 🔥
525W PL
425A TDC
70A SOC

2162Mhz FCLK
1266Mhz SOC
Quiet BIOS for Toxic Extreme

Firestrike Extreme (No1 Worldwide)
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Firestrike Ultra (No3 Worldwide)
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Timespy Extreme (No 1 6900 XT graphics core worldwide - No18 overall)
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

I ran out of time to improve Timespy normal score, that will have to be for another day.


----------



## lawson67

LtMatt said:


> Final results from todays run. cc @jonRock1992 @xR00Tx
> 
> 1.3v+ 🔥 🔥 🔥
> 525W PL
> 425A TDC
> 70A SOC
> 
> 2162Mhz FCLK
> 1266Mhz SOC
> Quiet BIOS for Toxic Extreme
> 
> Firestrike Extreme (No1 Worldwide)
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Firestrike Ultra (No3 Worldwide)
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Timespy Extreme (No 1 6900 XT graphics core worldwide - No18 overall)
> AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> I ran out of time to improve Timespy normal score, that will have to be for another day.


You need to beat my normal Time Spy score, Matt, I am nearly at 26k now. So near yet so far, just 33 points away lol
I scored 22 449 in Time Spy


----------



## Ajdaho pl

I see that you guys already know my method of increasing the voltage on Big Navi, which I posted on hardwareluxx (Sylwester.). Share your results and MPT settings, it will make it easier for you to beat new records. I do not know what you have already set in MPT, but if you need I can throw in my settings, maybe that helps. Admittedly they're from a 6800XT Liquid Devil, but most of it is universal for all cards.


----------



## LtMatt

Ajdaho pl said:


> I see that you guys already know my method of increasing the voltage on the big navi, which I posted on hardwareluxx (Sylwester.), share with the results of MPT settings, it will make it easier for You to beat new records, I do not know what you already set in MPT but if you need I can throw my settings maybe that help, admittedly with 6800XT liquid devil, but most is universal for all cards


Great discovery!


----------



## Henry46277

Umm...
I also want to enable TEMP_DEPENDENT_VMIN in MPT.
But it's not working.
What should I do?

MPT version 1.3.5.


----------



## LtMatt

Henry46277 said:


> Umm...
> I also want enable TEMP_DEPENDENT_VMIN in MPT.
> But not wroking.
> How shouid I do?
> 
> MPT Version 1.3.5.


Update to the latest Beta version.


----------



## LtMatt

lawson67 said:


> You need to beat my Normal Time Spy score Matt i am nearly at 26k now, so near yet so far just 33 points away lol
> I scored 22 449 in Time Spy
> 
> View attachment 2530557


I'm less motivated by Timespy since it seems you need an LC BIOS to compete with the best scores. 

However, I will try to improve my Timespy score next time I have a few hours to spare for benching.


----------



## Henry46277

LtMatt said:


> Update to the latest Beta version.


Thank you very much.
Now I can overclock stable at 2800 MHz / 2150 MHz @ 1220 mV on my OC Formula.
But it's too hot to pass the test at 2850 MHz @ 1250 mV.
Maybe that's my card's limit.


----------



## LtMatt

Henry46277 said:


> Thank you Very Much .
> Now i can overclock stable with 2800 MHz/ 2150 MHz @1220 mV on my OC forumla.
> But too hot to pass the test with 2850 MHz @1250 mV.
> Maybe it's my card limit.


Nice. 

Yes these cards really need water once you go past 400W.


----------



## Henry46277

LtMatt said:


> Nice.
> 
> Yes these cards really need water once you go past 400W.


In water now.
140+280+360 thick radiators and two VPP755 v.3 pumps.
Using a Bykski GPU waterblock for the OC Formula.

OC at 2850 MHz @ 1250 mV can only pass TSE GT1.
GT2 will driver time out.


----------



## Ajdaho pl

Henry46277 said:


> Thank you Very Much .
> Now i can overclock stable with 2800 MHz/ 2150 MHz @1220 mV on my OC forumla.
> But too hot to pass the test with 2850 MHz @1250 mV.
> Maybe it's my card limit.


Set the minimum value similar to the maximum value; it can be a little lower or equal. Otherwise the card can be unstable (the voltage can drop to the entered minimum and cause the benchmark to crash).
Without watercooling I would not even OC a 6800XT with raised volts. These cards, after a voltage boost, can pull 500W+ in TS; in BICUBIC tests my 6800XT pulls 573W, so keep an eye on temperatures.

Those are my daily settings, I've used them to play games for the last week









and this is for my today record








The red mark I'm not sure about; sometimes it makes the card unstable at these settings.
And the cherry on the cake is how much the card pulls under real load.


----------



## LtMatt

Henry46277 said:


> In water now.
> 140+280+360 thick radiator and two VPP755 v.3 pumps.
> Use Bykski GPU waterblock for OC forumla.
> 
> OC 2850 MHz @1250 mV can only pass TSE GT1.
> GT2 will Driver time out.


Try 1275.  🔥


----------



## lawson67

LtMatt said:


> I'm less motivated by Timespy since it seems you need a LQ BIOS to compete with the best scores.
> 
> However, I will try to improve my Timespy score next time I have a few hours to spare for benching.


I am more motivated by normal Time Spy, as it's the most challenging benchmark to get a high graphics score in, which is why most people concentrate on benchmarking it. I wanna get 26k without an LC bios, and I am nearly there even with crap Vram lol


----------



## LtMatt

Ajdaho pl said:


> set the minimum value similar to the maximum value, it can be a little lower or equal, because the card can be unstable (the voltage can drop to the minimum value entered and cause the benchmark to crash)
> without watercooling I would not even oc 6800XT with raise volts, those cards after voltage boost can pull 500W+ in TS, in BICUBIC tests my 6800XT card pulls 573W, keep an eye on temperatures
> 
> that is my daily settings , i use this to play games for last week
> 
> View attachment 2530588
> 
> and this is for my today record
> View attachment 2530589
> 
> red mark im not shure , sometimes make card unstable at this settings
> and the cherry on the cake is how much the card takes in under real load
> View attachment 2530592


God damn 579W on a 6800 XT.

I ran my recent results with the power limit capped at 525W as I didn't want to pull any higher.



lawson67 said:


> I am more motivated by normal Time spy as its the most challenging benchmark to get a high graphic score in which is why most people concentrate on benchmarking that, i wanna get 26k without an LC bios and i am nearly there even with crap Vram lol


I am sure you will get there, not far off now. Timespy is my next target in a week or so.


----------



## Henry46277

I want to know the cause of the driver time out in TSE: is the voltage not enough, or is it the GPU silicon limit?


----------



## LtMatt

Henry46277 said:


> I want to know the cause of drver time out in TSE is Voltage not enough or GPU silicon Limit ?


Both most likely. More voltage might allow for higher clocks, but you may also kill the GPU.


----------



## ZealotKi11er

I feel like the 6800 might be more fun to OC since it's limited to 1.025v at stock.


----------



## Ajdaho pl

ZealotKi11er said:


> I feel like 6800 might be more fun to OC since it limited to 1.025 at stock.





http://imgur.com/klKEI8L


----------



## 99belle99

Well, had my first run of upping the voltage to 1250mV. I have bad silicon, it keeps crashing at the higher frequencies, which it always did even before you could up the voltage.
Reference card with stock cooler.


----------



## ZealotKi11er

99belle99 said:


> Well had my first run of upping the voltage to 1250mV. I have bad silicon keeps crashing at the higher frequency's. Which it always done even before you could up the voltage.
> Reference card with stock cooler.


This is why we did not want this trick to be used. On the reference cooler it's not feasible even with stock voltage.


----------



## JRHudson

Ok, lol, I'm a fool but I gotta ask: 
for the dual bios switch on the Red Devil 6900xt which direction does it need to be switched to for OC mode?🤦‍♂️


----------



## lawson67

JRHudson said:


> Ok, lol, I'm a fool but I gotta ask:
> for the dual bios switch on the Red Devil 6900xt which direction does it need to be switched to for OC mode?🤦‍♂️


The switch must be pushed towards the DP and HDMI ports for the OC BIOS


----------



## JRHudson

lawson67 said:


> The switch must be pushed Towards the DP and Hmdi ports for the OC bios


Awesome. Thank you sir!


----------



## FatFingerGamer

I just figured I'd share this info in case there are some that still don't know it or where to look for it. Just some data I've found and learned..


----------



## jonRock1992

Does increasing the SOC clock do anything? And what is the purpose of it?


----------



## Ajdaho pl

It gives a little, but much more important is the FCLK at 2200MHz; versus the standard clock it can give +200 GPU score in TS.


----------



## FatFingerGamer

jonRock1992 said:


> Does increasing the SOC clock do anything? And what is the purpose of it?


You need to realize and remember that GPU VRAM follows the same rules as normal RAM, so SoC, FCLK and all the adjustables play a part in how your card will respond.


----------



## LtMatt

FatFingerGamer said:


> i just figured i'd share this info in case there are some that still dont know the info or where to look at the info. just some data ive found, learned..
> View attachment 2530676
> 
> 
> View attachment 2530677


Appreciate the share. Is there more to this? I only see some of those numbers explained.


----------



## lDevilDriverl

Ajdaho pl said:


> I see that you guys already know my method of increasing the voltage on the big navi, which I posted on hardwareluxx (Sylwester.), share with the results of MPT settings, it will make it easier for You to beat new records, I do not know what you already set in MPT but if you need I can throw my settings maybe that help, admittedly with 6800XT liquid devil, but most is universal for all cards


can you share this method here or link on your post on hardwareluxx?
Thanks in advance!


----------



## FatFingerGamer

LtMatt said:


> Appreciate the share, is there more to this as I see only some of those numbers explained?


Left out either because unsure or because it has not been traced yet. Technically, the way to find what each thing does is to go into your GPU BIOS, view it as hex, then manually find and search each pairing and look up the information on AMD's end. Then you need to run benchmarks that include possible shunt placements so that you can get viable readouts as you cycle through. There is a lot of information that has been given by various people; forgive me for not knowing their names, but I assure you they are appreciated. 
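The "view it as hex and search each pairing" step is easy to script; here's a generic byte-pattern search over a dump (the filename and pattern are placeholders for illustration, not real Navi 21 offsets):

```python
def find_pattern(rom: bytes, pattern: bytes) -> list[int]:
    """Return every offset at which `pattern` occurs in the dump,
    including overlapping matches."""
    offsets, start = [], 0
    while (idx := rom.find(pattern, start)) != -1:
        offsets.append(idx)
        start = idx + 1
    return offsets

# e.g. rom = open("dump.rom", "rb").read(); find_pattern(rom, bytes.fromhex("aa55"))
```

Once you have the offsets, you can diff two BIOS versions (say stock vs LC) at those locations to narrow down which bytes correspond to which setting.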



lDevilDriverl said:


> can you share this method here or link on your post on hardwareluxx?
> Thanks in advance!


All the info I have found up to this point, from various sites: 










[Guide] - Navi 21 Max Overclocking Tutorial [6800 XT / 69X0 XT]


Wer wissen will, was die eigene Navi 21 Karte wirklich kann, aber nicht weiß, wie man das anstellt, der ist hier richtig. Ein Typischer Fall ist dieser: Karte gekauft und jetzt läuft die viel langsamer als bei den großen Jungs im Luxx Forum. Was tun? Inhaltsverzeichnis 1. Time Spy: das (fast)...




www.hardwareluxx.de













Sammelthread - Big Navi RX 6700XT/6800(XT)/6900XT Overclocking/Undervolting


Dann hast du nen schlechten Chip erwischt. Wenn du UV betreiben willst ist es ja blöd wenn du einerseits den Core weniger Power gibst aber auf der anderen Seite der Gesamtkarte mehr geben möchtest. Mir ist bisher noch keine 6900 untergekommen die den Core nicht auf 1080 laufen lasse konnte...




www.computerbase.de





And there's a lot in The complete Big Navi UV Guide: Undervolting and power saving with the MorePowerTool simply explained | Practice | Page 3 | igor´sLAB, mainly in the forums that follow the articles. Start from page 1 and use the search-within-thread feature to look for specific things you may need; sometimes it leads you to other posts and such. 

Now I must ask: do you want the red pill or the green pill...


----------



## ZealotKi11er

jonRock1992 said:


> Does increasing the SOC clock do anything? And what is the purpose of it?


With 6800/6900 not much. Purpose? Run the SOC tasks and related functions.


----------



## jonRock1992

ZealotKi11er said:


> With 6800/6900 not much. Purpose? Run the SOC tasks and related functions.


I see. And what exactly does the SOC control? Would running it at say 1500MHz be a waste of power?


----------



## Blameless

FatFingerGamer said:


> i just figured i'd share this info in case there are some that still dont know the info or where to look at the info. just some data ive found, learned..
> View attachment 2530676


Judging from how the Fabric/SoC power states are described in AMD CPUs and console SoCs, I'm pretty sure the DF_CSTATE is data fabric c-states, not "down force". Should control intermediate power states between sleep and full power, and disabling them should force the P0 (highest) state at all loads rather than allowing P1 or P2 (reduced power), or P3 (connected standby). Not sure if there is a functional difference between P3 and deep sleep.



jonRock1992 said:


> I see. And what exactly does the SOC control? Would running it at say 1500MHz be a waste of power?


PCI-E interface, video encoder, possibly memory controllers (separate from PHYs), some other I/O.

Lowering it too much starts to impact memory performance and PCI-E bandwidth. Raising it can help a little, but I can't imagine it being worth the power trade-off in most scenarios.


----------



## Ajdaho pl

lDevilDriverl said:


> can you share this method here or link on your post on hardwareluxx?
> Thanks in advance!











[Sammelthread] - Offizieller AMD [[ RX6700 // RX 6700XT // X6800 // 6800XT // 6900XT ]] Overclocking und Modding Thread [[ Wakü - Lukü - LN2]]


Die Mods haben sich offensichtlich bezahlt gemacht.




www.hardwareluxx.de


----------



## Ajdaho pl

FatFingerGamer said:


> left out either because unsure or has not been traced. cuz technically the way to find what each thing does is to go into your gpu bios. view it as a hex then manually find and search each pairing then looking up the information on amd's end. then you need to run benchmarks that include possible shunt placements so that you can get viable readouts as you cycle through. there is a lot of information that has been given from various people forgive me for not knowing there names but i assure you that they are appreciated.
> 
> 
> 
> all the info i have found up to this point on various sites
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Guide] - Navi 21 Max Overclocking Tutorial [6800 XT / 69X0 XT]
> 
> 
> Wer wissen will, was die eigene Navi 21 Karte wirklich kann, aber nicht weiß, wie man das anstellt, der ist hier richtig. Ein Typischer Fall ist dieser: Karte gekauft und jetzt läuft die viel langsamer als bei den großen Jungs im Luxx Forum. Was tun? Inhaltsverzeichnis 1. Time Spy: das (fast)...
> 
> 
> 
> 
> www.hardwareluxx.de
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sammelthread - Big Navi RX 6700XT/6800(XT)/6900XT Overclocking/Undervolting
> 
> 
> Dann hast du nen schlechten Chip erwischt. Wenn du UV betreiben willst ist es ja blöd wenn du einerseits den Core weniger Power gibst aber auf der anderen Seite der Gesamtkarte mehr geben möchtest. Mir ist bisher noch keine 6900 untergekommen die den Core nicht auf 1080 laufen lasse konnte...
> 
> 
> 
> 
> www.computerbase.de
> 
> 
> 
> 
> 
> and theres a lot on The complete Big Navi UV Guide: Undervolting and power saving with the MorePowerTool simply explained | Practice | Page 3 | igor´sLAB mainly in the forums that follow the articles. start from page 1 and use the search within the post feature and look for specific things you may need. sometimes it leads you to other posts and such.
> 
> Now i must ask you do you want the Red pill or the Green Pill...


On none of these pages will you find information on how to raise the voltage beyond the limit; no one had tried it before, and the method turned out to be trivial. Even Hellm and Gurdi were surprised, and they wrote the MPT program.


----------



## ZealotKi11er

Ajdaho pl said:


> On none of these pages you will find information on how to raise the voltage beyond the limit, no one has tried it before, and the method turned out to be trivial, even Hellm and Gurdi were surprised, and they wrote the MPT program


They did not write anything from scratch. They took AMD's internal tool, which they probably got from internal sources or vendors, and just built a GUI.


----------



## amigafan2003

ZealotKi11er said:


> They did not write anything. They copied AMDs internal tool which they probably got from internal sources or vendors. They just build a gui.


Meow!


----------



## jonRock1992

Was able to get my Time Spy graphics score up a little bit this morning. Went back up to 18th for Time Spy single-gpu graphics score. I think I'm reaching the limits of my silicon unfortunately.


----------



## ZealotKi11er

jonRock1992 said:


> Was able to get my Time Spy graphics score up a little bit this morning. Went back up to 18th for Time Spy single-gpu graphics score. I think I'm reaching the limits of my silicon unfortunately.
> View attachment 2530776


What was ur clk before the extra voltage?


----------



## jonRock1992

ZealotKi11er said:


> What was ur clk before the extra voltage?


Before using TEMP_DEPENDENT_VMIN, I could get 2695/2795 with the occasional pass at 2700/2800.


----------



## Ajdaho pl

ZealotKi11er said:


> They did not write anything. They copied AMDs internal tool which they probably got from internal sources or vendors. They just build a gui.


I see that some people do nothing but sit around, yet it is easy for them to criticize others. If you are into the subject, you have access to the AMD driver documentation because it is under an open-source license, but so what? You get bare documentation without any commentary from the creator, and everything they did you have to work out by reverse engineering, watching how things react to changes you make in software someone else prepared beforehand. They did a good job and do not deserve to be disrespected, especially as they did it voluntarily and do not want anything from anyone for it.


----------



## Ajdaho pl

jonRock1992 said:


> Before using TEMP_DEPENDENT_VMIN, I could get 2695/2795 with the occasional pass at 2700/2800.


Can you show your MPT settings, including the features?


----------



## ZealotKi11er

Ajdaho pl said:


> I see that some people sit around doing nothing themselves, yet find it easy to criticize others. If you are into the subject, you have access to AMD's driver documentation because it is under an open-source license, but so what? You get bare documentation without any commentary from its creator, and everything he did you have to work out by reverse engineering, watching how things react to the changes you make in software someone else prepared. They did a good job and do not deserve to be disrespected, especially since they did it voluntarily and do not want anything from anyone for it.


Sorry my words did not portray what I really meant. They took a tool made by AMD and reverse engineered it.


----------



## kazukun

*Micron Delivers High-Performance GDDR6 Memory for AMD Radeon RX 6000 Series Graphics Cards*
Micron Delivers High-Performance GDDR6 Memory for AMD Radeon RX 6000 Series Graphics Cards | TechPowerUp


----------



## kairi_zeroblade

kazukun said:


> *Micron Delivers High-Performance GDDR6 Memory for AMD Radeon RX 6000 Series Graphics Cards*
> Micron Delivers High-Performance GDDR6 Memory for AMD Radeon RX 6000 Series Graphics Cards | TechPowerUp


Bad quality IMO. My mobile 3060 came equipped with these Micron GDDR6 chips and they could barely OC beyond +250 MHz on memory; I don't know if they would yield the same on desktop GPUs.


----------



## geriatricpollywog

kairi_zeroblade said:


> Bad quality IMO..I had my 3060 mobile equipped with these Micron GDDR6 and they can barely OC beyond 250mhz, don't know if they would yield the same on Desktop GPU's..


Micron makes GDDR6X


----------



## kairi_zeroblade

0451 said:


> Micron makes GDDR6X


and??


----------



## geriatricpollywog

kairi_zeroblade said:


> and??


You’re right, it will probably suck.


----------



## kairi_zeroblade

0451 said:


> You’re right, it will probably suck.


I did say so in my post



kairi_zeroblade said:


> don't know if they would yield the same on Desktop GPU's..


Micron != Samsung. Even if it's the same frequency and bandwidth, they will still differ in some itsy tiny bitsy things (probably done to cut production cost, or whatever they have planned). I do remember that on some of my old GDDR5 GPUs (with Micron) I didn't get a spectacular memory overclock despite almost everything I tried (raising voltages, editing the BIOS and more).


----------



## geriatricpollywog

kairi_zeroblade said:


> I did say so in my post
> 
> Micron != Samsung. Even if it's the same frequency and bandwidth, they will still differ in some itsy tiny bitsy things (probably done to cut production cost, or whatever they have planned). I do remember that on some of my old GDDR5 GPUs (with Micron) I didn't get a spectacular memory overclock despite almost everything I tried (raising voltages, editing the BIOS and more).


Samsung G6 does +1000 on most 2080ti and Micron G6X does +1000 on most 3090. So both overachieve to the same level across G6 and G6X. That’s not to say Micron will make an effort on this new G6 like they did on G6X.


----------



## kairi_zeroblade

0451 said:


> Micron G6X does +1000 on most 3090


when cooled properly (anyway you want to cool it) <======== you missed this part

while the GDDR6 on the RTX 2000 series doesn't have any issues whether passively cooled or not. On my RTX 2070m and RTX 2060m (both mobile) I went overboard to +700 MHz with negligible difference; it was effective up to +600 MHz, but beyond that I never saw much difference at all. On my 5700 XT I was all the way up to 2050 MHz if I remember right (the max allowed by Wattman).


----------



## geriatricpollywog

kairi_zeroblade said:


> when cooled properly (anyway you want to cool it) <======== you missed this part
> 
> while the GDDR6 on the RTX 2000 series doesn't have any issues whether passively cooled or not. On my RTX 2070m and RTX 2060m (both mobile) I went overboard to +700 MHz with negligible difference; it was effective up to +600 MHz, but beyond that I never saw much difference at all. On my 5700 XT I was all the way up to 2050 MHz if I remember right (the max allowed by Wattman).


I did not miss that part. G6X does not need to be cooled properly to perform. I can run the memory at 70C on the AIO or 15C with a waterblock running 0C water and a slab of dry ice on the backplate and it will perform the same.


----------



## kairi_zeroblade

0451 said:


> I did not miss that part. G6X does not need to be cooled properly to perform. I can run the memory at 70C on the AIO or 15C with a waterblock running 0C water and a slab of dry ice on the backplate and it will perform the same.


And my friend complains his 3070 Ti is a scorching POS with 92C on the memory. I never owned one; I only owned the 3070 with Samsung GDDR6, and it was a splendid overclocker. But I needed moar, lol.


----------



## TaunyTiger

Just installed a Powercolor 6900XT Liquid Devil Ultimate a couple of days ago.
But I'm a bit disappointed in the clocks I get in Time Spy: 2510 MHz min and 2610 MHz max. Tried 2525-2625 MHz, but it's crashing!
This GPU is supposed to be binned! I was hoping for 2700-2800 MHz+. Power target @ +15% and 1200 mV. Any suggestions?


Spoiler


----------



## CS9K

TaunyTiger said:


> Just installed a Powercolor 6900XT Liquid Devil Ultimate a couple of days ago.
> But I'm a bit disappointed in the clocks I get in Time Spy: 2510 MHz min and 2610 MHz max. Tried 2525-2625 MHz, but it's crashing!
> This GPU is supposed to be binned! I was hoping for 2700-2800 MHz+. Power target @ +15% and 1200 mV. Any suggestions?


What model power supply are you using? Also what cable extensions or replacement cables do you have?


----------



## TaunyTiger

CS9K said:


> What model power supply are you using? Also what cable extensions or replacement cables do you have?


Also saw it is a XTXHC board in gpu-z.
Corsair RM850X with Cablemods white extensions.
With a PCI-E 4.0 riser from Lian Li.


----------



## kairi_zeroblade

TaunyTiger said:


> Also saw it is a XTXHC board in gpu-z.
> Corsair RM850X with Cablemods white extensions.
> With a PCI-E 4.0 riser from Lian Li.


just a few pages back you can skim on how to get more out of your GPU..it has been a hot topic lately..


----------



## lestatdk

My guess is that it's the riser causing problems. Have seen this many times before


----------



## ZealotKi11er

0451 said:


> I did not miss that part. G6X does not need to be cooled properly to perform. I can run the memory at 70C on the AIO or 15C with a waterblock running 0C water and a slab of dry ice on the backplate and it will perform the same.


My G6X fails with anything over +500 because it runs close to 100C.


----------



## CS9K

TaunyTiger said:


> Also saw it is a XTXHC board in gpu-z.
> Corsair RM850X with Cablemods white extensions.
> With a PCI-E 4.0 riser from Lian Li.


Your power supply should be good for it, and Cablemods' extensions are of good quality.

The riser card _may_ be a/the issue, but brand-name PCIe 4.0 risers are not usually faulty (as some cheaper and/or off-brand risers are). Perhaps re-seat the riser connections on both the mainboard and GPU sides (if possible).


----------



## Ajdaho pl

TaunyTiger said:


> Just installed a Powercolor 6900XT Liquid Devil Ultimate a couple of days ago.
> But I'm a bit disappointed in the clocks I get in Time Spy: 2510 MHz min and 2610 MHz max. Tried 2525-2625 MHz, but it's crashing!
> This GPU is supposed to be binned! I was hoping for 2700-2800 MHz+. Power target @ +15% and 1200 mV. Any suggestions?
> 
> 
> Spoiler


The factory power limit is too low for the 6900 XT LDU. My card (6800 XT Liquid Devil) with the factory 1150 mV on the GPU pulls 390 W in Time Spy after increasing the limit in MPT; the factory limit was choking it at 350 W (300 W +15%).
And when I increase the voltage to 1250 mV it pulls up to 500 W in Time Spy (480-490 W), but in games it doesn't pull much more than at 1200 mV, and it now runs all games non-stop at a 2700-2800 MHz setting in Wattman.
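Those figures track the usual first-order dynamic-power rule (draw scales with frequency and with the square of voltage). A rough sketch of that scaling using the numbers from this post; the helper function is mine, and the 2650 -> 2750 MHz clock bump is an assumed round number, not a reported one:

```python
def scaled_power(p0_w: float, v0: float, v1: float, f0: float, f1: float) -> float:
    """First-order CMOS dynamic-power estimate: P is proportional to f * V^2.

    Ignores static leakage, which itself grows with voltage and temperature,
    so real draw at the higher voltage tends to sit a bit above this.
    """
    return p0_w * (f1 / f0) * (v1 / v0) ** 2

# 390 W @ 1150 mV from the post, scaled to 1250 mV and an assumed
# 2650 -> 2750 MHz clock bump:
estimate = scaled_power(390, 1.150, 1.250, 2650, 2750)
print(round(estimate))  # ~478 W, inside the 480-490 W range the post reports
```

The square term is why the last 100 mV costs so much more power than the clock increase it buys.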


----------



## FatFingerGamer

OK, so I'll just say this here and address it publicly; I guess I need to after that malding PM I got. This is just my personal feeling and opinion on the info shared in this thread. Whatever is posted here or on other sites that is meant to help educate, inform or clarify is for that purpose alone. No one is exploiting anyone's personal settings for Time Spy.

The way I see it, 99% of the people posting in this thread right now are what you could consider privateers: people who, out of their own pocket, take on the risk, the headache and the frustration entirely by their own choosing. If they fry their card, they don't have a company or sponsor backing them up to replace it; replacement would mean eating ramen or crackers for the next few months, or maybe even selling a kidney (joke). With that said, the way I see it we are all on one mission or quest: to attain the highest score possible for ourselves, no fame sought, just a personal achievement. What may be nothing to one guy may mean the world to another. And when we face the scores that just keep rising on 3DMark and other benchmark sites, the only competition I see for us privateers ARE those guys with ten GPUs ready in a box in their corporate garage in case they blow one up.

So, about that accusation that I'm reposting information that's already out there somewhere, and that I should make it hard for new people to find like it was hard for you or me: well man, that's exactly why I posted the info. I don't want to make it harder for the community; that's what the community is for. I couldn't care less whether I get positive or negative feedback for what I post. I'll take it all in as educational material. It is what it is.


----------



## Ajdaho pl

You put your post under mine, but it's probably not directed at me, because I don't know what you mean.


----------



## J7SC

0451 said:


> Samsung G6 does +1000 on most 2080ti and Micron G6X does +1000 on most 3090. So both overachieve to the same level across G6 and G6X. That’s not to say Micron will make an effort on this new G6 like they did on G6X.


The 6900XT here happens to have Samsung, which runs at the max allowed 2150 with fast timings all day long as its most efficient setting, with AMD's built-in software OC suggesting 2260 would be ideal (...if only I could get there via vbios / MPT). That said, two 2080 Tis here bought in late 2018 have Micron GDDR6 with an efficient max at +1300, and the Micron GDDR6X on the 3090 Strix is at 1160+. Weirdly, most 3090s have the GDDR6X set at 1219 MHz even though the ICs are rated at 1313 MHz; given that it is not only 24GB of the hotter GDDR6X but also double-sided, maybe it's a concern by Nvidia and/or board partners about heat / power consumption.

All said and done, it is good to know that Micron is bumping more of the faster GDDR6 into production.


----------



## jonRock1992

I wonder if the bios for these Micron ram chip gpu's can be flashed onto current Samsung ram chip XTX-H GPU's? Would be interesting to find out. I'm definitely gonna try whenever one is available lol.


----------



## EastCoast

jonRock1992 said:


> I wonder if the bios for these Micron ram chip gpu's can be flashed onto current Samsung ram chip XTX-H GPU's? Would be interesting to find out. I'm definitely gonna try whenever one is available lol.


----------



## jonRock1992

Dual bios. Unless it physically damages something on the GPU, it'll be alright lol.


----------



## CS9K

EastCoast said:


>


I recall from bios modding the RX 5600 XT, that bioses from the two memory types were not compatible. I don't know if it would be as easy as copying memory straps from one bios to the other, or if there are other differences. We'll have to wait and find out.


----------



## J7SC

CS9K said:


> I recall from bios modding the RX 5600 XT, that bioses from the two memory types were not compatible. I don't know if it would be as easy as copying memory straps from one bios to the other, or if there are other differences. We'll have to wait and find out.


AFAIK, there are tags in the vbios linked to some sort of signal from the VRAM IC; the GPU-Z readout of a 980 Classified below shows that all potential VRAM types are included. Note that this was a custom vbios flashed from another card (Strix XOC) with another VRAM vendor, but it still 'chose' the correct VRAM model.

From what I recall about GDDR6 VRAM (ie. 2080 Ti), Micron had slightly tighter timings than Samsung but also slightly lower MHz headroom (depending of course on the individual 'lottery'). What may be possible with the right kind of software utility (like the old ATItool) is to copy timings over from one type to another.


----------



## cfranko

I applied liquid metal today, I have a Bykski block. Are these temps good?


----------



## LtMatt

cfranko said:


> View attachment 2530855
> 
> 
> I applied liquid metal today, I have a Bykski block. Are these temps good?


Looks decent for 280W. As long as you peak at around 20c under worst case scenario, which I am sure you will be below, then all is fine IMO.


----------



## cfranko

LtMatt said:


> Looks decent for 280W. As long as you peak at around 20c under worst case scenario, which I am sure you will be below, then all is fine IMO.


I am going to keep the GPU at stock even though I have LM. Tweaking with MPT isn’t as fun as it was before for me idk why


----------



## cfranko

@lawson67 Will the nickel-plated copper on my waterblock absorb the LM over time like regular (non-nickel-plated) copper does? Will I have to reapply LM in 6 months or so?


----------



## jonRock1992

I know I'm not Lawson67, but from personal experience, it will leave a gray stain. It won't really affect performance. I have a Bykski block on my 6900 XT with liquid metal, and I haven't noticed any performance degradation over the 4 or so months it's been on there.


----------



## cfranko

jonRock1992 said:


> I know I'm not Lawson67, but from personal experience, it will leave a gray stain. It won't really affect performance. I have a Bykski block on my 6900 XT with liquid metal, and I haven't noticed any performance degradation over the last 4 or so months that it's been on there for.


I don't really care about the stain, I just want to maintain good temperatures long term, thanks for the info. I tagged lawson because I know he has experience with LM, I didn't know you also had LM


----------



## CS9K

J7SC said:


> AFAIK, there are tags in the vbios linked to some sort of signal from the VRAM IC; the CPUz of an 980 Classified below shows that all potential VRAM types are included, noting that this was a custom vbios flashed from another card (Strix XOC) with another VRAM vendor, but still 'chose' the correct VRAM model.
> 
> From what I recall about GDDR6 VRAM (ie. 2080 Ti), Micron had slightly tighter timings than Samsung but also slightly lower MHz headroom (depending of course on the individual 'lottery'). What may be possible with the right kind of software utility (like the old ATItool) is to copy timings over from one type to another.


OH right! I remember reading a little about that when I was researching a bios flash from my 2070 Super Hybrid -> FTW3 bios. 

It would make sense for AMD bioses to be written the same way. Let us hope a door opens for memory timing/speed modification soon!


----------



## jonRock1992

CS9K said:


> OH right! I remember reading a little about that when I was researching a bios flash from my 2070 Super Hybrid -> FTW3 bios.
> 
> It would make sense for AMD bioses to be written the same way. Let us hope a door opens for memory timing/speed modification soon!


I want a working bios editor so bad. I remember making an Nvidia Boost disable guide for Nvidia Maxwell gpu's on an abandoned OCN account of mine way back then. Loved messing around with that bios editor. Oh yeah here it is: Disable Boost and "Bake-In" Max Game Stable...
That thread brings back memories lol.

At least we have MPT though. It's the next best thing.


----------



## Neoki

jonRock1992 said:


> I want a working bios editor so bad. I remember making an Nvidia Boost disable guide for Nvidia Maxwell gpu's on an abandoned OCN account of mine way back then. Loved messing around with that bios editor. Oh yeah here it is: Disable Boost and "Bake-In" Max Game Stable...
> That thread brings back memories lol.
> 
> At least we have MPT though. It's the next best thing.


I used this similar disable boost logic on my current daily driver GTX 980 Ti (Still tinkering and hard tube building my 5950x/6900xt). I loved using that bios editor to tinker with stuff and run constant stable clock. My EVGA GTX 980 Ti FTW has been running 1480mhz core @ 975mv for about 6 years now.


----------



## geriatricpollywog

kairi_zeroblade said:


> and my friend complains the 3070ti is scorching POS with 92c on the memory..I never owned one..I only owned the 3070 with samsung GDDR6 and it was a splendid overclocker..but I needed moar..lol..





ZealotKi11er said:


> My G6X fails with anything over +500 because its runs close to 100C.


Have you taken apart your card and re-applied the pads? I don't see how G6X chips can get that hot on the front of a card unless there is poor thermal contact. From 0 to 70C I don't see any difference in how far I can push the memory slider. Admittedly I haven't tried overclocking at 90C, since even my back-side memory never got that hot on the stock AIO.



jonRock1992 said:


> I wonder if the bios for these Micron ram chip gpu's can be flashed onto current Samsung ram chip XTX-H GPU's? Would be interesting to find out. I'm definitely gonna try whenever one is available lol.


You can flash a Vega64 bios to a Vega56 if the Vega56 has Samsung memory. You can't flash a Vega64 bios to a Vega56 if the 56 has Hynix memory. The Hynix memory will not boot with the Samsung speeds and timings, and the card will soft-brick. You can try cross-flashing on a 6900XT, but make sure it has a dual-bios switch just in case.


----------



## J7SC

CS9K said:


> OH right! I remember reading a little about that when I was researching a bios flash from my 2070 Super Hybrid -> FTW3 bios.
> 
> It would make sense for AMD bioses to be written the same way. Let us hope a door opens for memory timing/speed modification soon!


...not sure if AMD vbioses are written the same way, but yes, open the door for VRAM speed and timing control pleeaaze!

I do recall from way back that the driver load on the CPU varied depending on whether an AMD or Nvidia card was used with otherwise identical settings, presumably also a result of different vbios treatment by those two protagonists... mind you, that was years ago; things may have changed since then.


----------



## kairi_zeroblade

0451 said:


> Have you taken apart your card and re applied the pads? I don’t see how G6X chips can get that hot on the front of a card unless there is poor thermal contact. From 0 to 70C I don’t see any difference in where I can adjust the memory slider. Admittedly I haven’t tried overclocking at 90C since even my back side memory never got that hot on the stock AIO.


The 3070 never had overheating VRAM issues, as far as I remember and as far as my googling now shows; the 3070 (non-Ti) uses only GDDR6.

I suggested that to my friend, but the after-sales warranty is voided if he does that.


----------



## lawson67

cfranko said:


> @lawson67 Will the Nickel Plated Copper on my waterblock absorb the LM over time like regular copper does (non-nikcel plated copper)? Will I have to reapply lm in 6 months or so?


No, you won't need to repaste in six months; with nickel-plated copper you'll be fine for a year or more. As jon said, it will stain the nickel, but that doesn't make any difference to cooling, and with a rub of a bit of Scotch-Brite you can remove the stain from the nickel if you want when you come to repaste.


----------



## Thanh Nguyen

Finally hit 25K graphics. What value can I change to get more? And by the way, why can't my GPU usage hold 99% all the time in the BF5 Operation Underground map? Anyone know?


----------



## Neoki

For those curious, I'm repasting/repadding my XFX Zero. So I took some pics of the PCB and stuff. Also I was surprised to see that XFX replaced the stock EK pads (Which I believe are usually blue?), with what I believe are thermalrights (Grey). However, they did seem to use the same size pads for everything, so not sure there was great contact. I'll be using Gelid as my paste, and 3 various sizes (1/1.5/2mm) of thermalright pads.


----------



## robiatti

Neoki said:


> For those curious, I'm repasting/repadding my XFX Zero. So I took some pics of the PCB and stuff. Also I was surprised to see that XFX replaced the stock EK pads (Which I believe are usually blue?), with what I believe are thermalrights (Grey). However, they did seem to use the same size pads for everything, so not sure there was great contact. I'll be using Gelid as my paste, and 3 various sizes (1/1.5/2mm) of thermalright pads.
> 
> View attachment 2530913
> 
> View attachment 2530914



Looking forward to your results. I kind of want to do the same; however, I'm concerned that coil whine will be an issue once repasted. Right now the only audible whine is at over 500 fps.


----------



## Papa Emeritus

I've managed to get hold of both the 6900 XT Strix LC TOP and the RTX 3080 Ti TUF for more or less the same price, which one would you choose to keep?


----------



## cfranko

Papa Emeritus said:


> I've managed to get hold of both the 6900 XT Strix LC TOP and the RTX 3080 Ti TUF for more or less the same price, which one would you choose to keep?


I would take the 6900 XT, it has more VRAM and liquid cooling


----------



## robiatti

Papa Emeritus said:


> I've managed to get hold of both the 6900 XT Strix LC TOP and the RTX 3080 Ti TUF for more or less the same price, which one would you choose to keep?


I had both for a little while, 6900xtxh and the 3080ti FTW3.

I ended up selling the 3080 Ti as I preferred the performance of the 6900. In the games I like to play, like COD CW, I had to use DLSS to match the performance of the 6900, but to me DLSS makes the image way too soft, so I wouldn't use it. This is all at [email protected]


----------



## jonRock1992

Papa Emeritus said:


> I've managed to get hold of both the 6900 XT Strix LC TOP and the RTX 3080 Ti TUF for more or less the same price, which one would you choose to keep?


3080 Ti for ray tracing. 6900 XT for raw performance and better thermals. Whichever you prefer.


----------



## ZealotKi11er

Papa Emeritus said:


> I've managed to get hold of both the 6900 XT Strix LC TOP and the RTX 3080 Ti TUF for more or less the same price, which one would you choose to keep?


I have 3080 TUF and dont like memory temps.


----------



## Ajdaho pl

komputer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000 ('komputer' is from the Polish version of Windows; in the English version it will probably be 'Computer')
Could one of the 6900XT owners please export this key from the Windows registry and send it to me? Apart from the SPPT, it contains other parameters of the card installed in the computer. I want to try swapping the values one by one; maybe I can trick the system into thinking a 6900XT is installed instead of a 6800XT, and it will allow setting GPU and memory clocks as on a 6900XT.
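For anyone scripting that export: `reg export "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000" 0000.reg /y` dumps the key on Windows, and the binary SPPT can then be pulled out of the resulting .reg file. A minimal sketch, assuming the usual `PP_PhmSoftPowerPlayTable` value name that MPT writes (the helper name is mine):

```python
import re

def extract_reg_binary(reg_text: str,
                       value_name: str = "PP_PhmSoftPowerPlayTable") -> bytes:
    """Pull a REG_BINARY value out of a .reg export as raw bytes.

    .reg files wrap long hex dumps across lines with a trailing backslash,
    so continuation lines are joined before splitting the byte list.
    """
    joined = re.sub(r",\\\s*\n\s*", ",", reg_text)
    m = re.search(r'"%s"=hex:([0-9a-fA-F,\s]+)' % re.escape(value_name), joined)
    if m is None:
        raise KeyError(value_name)
    return bytes(int(b, 16) for b in m.group(1).split(",") if b.strip())

# A tiny synthetic excerpt in .reg continuation format:
sample = '"PP_PhmSoftPowerPlayTable"=hex:b1,05,\\\n  07,00'
print(extract_reg_binary(sample))  # b'\xb1\x05\x07\x00'
```

The extracted bytes can then be diffed between a 6800 XT and a 6900 XT export before trying any value swaps.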


----------



## Papa Emeritus

cfranko said:


> I would take the 6900 XT, it has more VRAM and liquid cooling





robiatti said:


> I had both for a little while, 6900xtxh and the 3080ti FTW3.
> 
> I ended up selling the 3080 Ti as I preferred the performance of the 6900. In the games I like to play, like COD CW, I had to use DLSS to match the performance of the 6900, but to me DLSS makes the image way too soft, so I wouldn't use it. This is all at [email protected]





jonRock1992 said:


> 3080 Ti for ray tracing. 6900 XT for raw performance and better thermals. Whichever you prefer.





ZealotKi11er said:


> I have 3080 TUF and dont like memory temps.


Thank you all. I'm gonna consider which features I like the most and then make a decision. The 6900 XT is cooler and quieter (fan-speed-wise); however, it has a lot more coil whine than the 3080 Ti.


----------



## D-EJ915

Newegg has the toxic air cooled card now if anybody was interested in that one.


----------



## geriatricpollywog

Which 6900XT is best 6900XT?


----------



## robiatti

Papa Emeritus said:


> Thank you all. I'm gonna consider which features i like the most and then make a decision. The 6900 XT is cooler and quieter (fan speed wise) however it has a lot more coilwhine than the 3080 Ti.


Coil whine is an issue I have observed with 6900s. I have had reference Sapphire and Gigabyte 6900s, a Merc 319, a Red Devil Ultimate, and now the XFX Zero. By far the worst for coil whine was the Red Devil. The XFX cards have been the best, with no coil whine below 500 fps.

The 3080 Ti had no coil whine, but the 3090 Kingpin and 3090 Ventus sang like Pavarotti.

All cards can have coil whine, and I believe it's down to poor mounting or the wrong thermal pads.


----------



## ogmadvlad

The reference 6900 XT with a waterblock is kinda nice. PPT settings: 350 W PPT / 375 TDC. Pretty much locked at 2670 MHz frequency-wise.


----------



## Luggage

Hmm, the ASRock OC Formula is the cheapest 6900 XT you can get in Sweden now; tempting.
Anything wrong with it? Can you get a waterblock for it?


----------



## kairi_zeroblade

0451 said:


> Which 6900XT is best 6900XT?


Probably the best one your money can buy..

for me Best is very subjective (due to personal preference)..


----------



## lDevilDriverl

Wow... I scored 25 102 in Time Spy. How do I get 2400 on the memory?


----------



## kairi_zeroblade

lDevilDriverl said:


> Wow... I scored 25 102 in Time Spy how to get 2400 on mem?


You need the LC model BIOS (requires flashing); it has the 18 Gbps GDDR6.


----------



## lDevilDriverl

kairi_zeroblade said:


> you need the LC model bios (requires flashing)..it has the 18gbps GDDR6..


I thought that flashing the BIOS on the 6000 series wouldn't work. Also, it will be interesting to compare memory timings. Will RED BIOS EDITOR and MorePowerTool for Polaris, Navi and Big Navi - MPT 1.3.7 Final (New) | igor´sLAB work correctly with Win10? I had a lot of problems flashing the BIOS on my previous 5700 XT.


----------



## kairi_zeroblade

lDevilDriverl said:


> I thought that flashing bios on 6000 won't work. Also, it will be interesting to compare mem timings. Will RED BIOS EDITOR and MorePowerTool for Polaris, Navi and Big Navi - MPT 1.3.7 Final (New) | igor´sLAB work correctly with win10? I had a lot of problems with flashing bios on my previous 5700xt.


Linux. Read a few pages back; somebody (weleh, I think, was the one who made a guide) posted the files and instructions on how to do so in detail. But you are on your own with the risk (warning: not for the faint of heart).


----------



## lDevilDriverl

kairi_zeroblade said:


> not for the faint of heart


I agree; since the 5700 XT I have turned a little gray


----------



## kairi_zeroblade

anybody tried the 21.11.1 beta drivers?


----------



## danphillips26

Hi Guys,

I've got a Toxic Extreme and I am really enjoying it so far. I've managed to hit 25,690 with the standard BIOS.









I scored 24 267 in Time Spy
AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

I've flashed the LC BIOS, but at the moment I can't get any stability at all. Could anyone point me in the direction of info for this BIOS? Any help would be greatly appreciated.

Thanks


----------



## ZealotKi11er

You can try to lower the memory clk with MPT.


----------



## danphillips26

ZealotKi11er said:


> You can try to lower the memory clk with MPT.



Thanks for the reply, say take it down to 1200 mhz from 1500?


----------



## gtz

Here is my 6900XT

The only changes made are 360 W in MPT, plus 2150 MHz fast-timings memory and 2600 core in Wattman. These are also my 24/7 settings; the hotspot never goes over 65 on water.










Once GPUs become readily available I will slam 1.25 volts and 500 W into it and see if I can break 25K. For right now I will keep it conservative; I don't want to kill a card that cost me 1600 bucks.


----------



## lawson67

So the RX 6900 XTs are into the 27K TS scores now, beating some of the 3090s on LN2. snakeeyes' score from LUXX:

I scored 24 940 in Time Spy


----------



## jonRock1992

lawson67 said:


> So the RX 6900 XTs are into the 27K TS scores now, beating some of the 3090s on LN2
> 
> I scored 24 940 in Time Spy


That's crazy.


----------



## weleh

I don't understand those clocks at all vs the scores.


----------



## jonRock1992

weleh said:


> I don't understand those clocks at all vs the scores.


I know right? What's the secret!? Lol.


----------



## J7SC

jonRock1992 said:


> I know right? What's the secret!? Lol.


At 1440p, CPU type and speed still make a bigger difference than at higher resolutions; that score has a 10900KF at over 5.5 GHz. Either way, a superb score to be congratulated on!


----------



## weleh

J7SC said:


> At 1440p, CPU type and speed still make a bigger difference than at higher resolution - that score has a 10900KF at over 5.5GHz...either way, superb score to be congratulated on !


Ehh, huge doubt.


----------



## J7SC

...that's your prerogative...

Elsewhere, I just started up the left half (w/ 6900XT) of the Raven_A dual-mobo project for the first time after bleeding that loop... superb temps even w/o many of the fans. Superposition 4K run at a temp delta to ambient of 19 C on the CPU, 18 C on the VRAM, and 37 C on the hotspot.

...sorry for the pic quality; better ones once it is all complete and cleaned up...


----------



## ZealotKi11er

weleh said:


> I don't understand those clocks at all vs the scores.


I had to have a much faster card to beat his score. We will see once people get ADL whether it helps at all; it should easily beat 5.5 GHz Skylake.


----------



## weleh

lol









Result not found
www.3dmark.com





He's #1 HOF at 27.6K

"it's the CPU guys"


----------



## ZealotKi11er

weleh said:


> lol
> 
> Result not found
> www.3dmark.com
> 
> He's #1 HOF at 27.6K
> 
> "it's the CPU guys"


It's not, lol. Maybe the CPU can get you 100-200 points, but not 1000 points. With those clock speeds and all known tricks he should be at like 26.4-26.5K max.


----------



## hellm

At these high clock rates, memory bandwidth is very important. If you can't have more of that, cache would be the next best thing, as for any bandwidth problem so far. And if you also can't have more cache, you want a faster one.

Funny: with a lower GPU clock and higher clock rates on memory and cache, you can reach a higher score. Science first, as the Americans say.

There is more to overclocking an RDNA2 Radeon than cold temperatures and high voltages. The score is legit; give snakeeyes the credit he deserves. He is a benchmarker, not a hacker. That would be me.
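As a back-of-the-envelope illustration of why the memory clock matters here: GDDR6 moves 8 bits per pin per MHz of the clock Wattman displays, so bandwidth across the 6900 XT's 256-bit bus scales linearly with that slider. A quick sketch (the helper is mine; the 8x relation between shown clock and per-pin data rate is the usual GDDR6 convention):

```python
def gddr6_bandwidth_gbs(wattman_mhz: float, bus_bits: int = 256) -> float:
    """Peak GDDR6 bandwidth in GB/s from the clock Wattman reports.

    Wattman shows the memory command clock; GDDR6 transfers 8 bits per
    pin per cycle of that clock, so per-pin Gbps is 8x the displayed MHz.
    """
    data_rate_gbps = wattman_mhz * 8 / 1000   # per-pin data rate in Gbps
    return data_rate_gbps * bus_bits / 8      # bits -> bytes across the bus

print(gddr6_bandwidth_gbs(2000))  # 512.0 GB/s at the stock 16 Gbps setting
print(gddr6_bandwidth_gbs(2150))  # ~550 GB/s at the 2150 MHz Wattman ceiling
```

The roughly 7.5% bandwidth gain from 2000 to 2150 MHz is why trading a little core clock for memory clock can still raise the graphics score, as discussed above.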


----------



## ZealotKi11er

hellm said:


> At these high clock rates, memory bandwidth is very important. If you can't have more of that, cache would be the next best thing, as for any bandwidth problem so far. And if you also can't have more cache, you want a faster one.
> 
> Funny: with a lower GPU clock and higher clock rates on memory and cache, you can reach a higher score. Science first, as the Americans say.
> 
> There is more to overclocking an RDNA2 Radeon than cold temperatures and high voltages. The score is legit; give snakeeyes the credit he deserves. He is a benchmarker, not a hacker. That would be me.


Trick /= Hacks.


----------



## weleh

hellm said:


> At these high clockrates memory bandwidth is very important. If you can't have more of that, cache would be the next best thing. As it was for any bandwidth problem so far. And if you also can't have more cache, you want a faster one.
> 
> Funny, with lower GPU clock and higher clock rates on memory and cache you can reach a higher score. Science first, as the americans say.
> 
> There is more to overclocking a RDNA2 Radeon than cold temperatures and high voltages. The score is legit, give snakeeyes the credit he deserves. He is a benchmarker, not a hacker. That would be me.


No idea why you think I'm calling him a hacker.

Way before snakeeyes was on the leaderboard I was already #1 or #2 6900XT in the world with a XTX card.
Unfortunately had to sell my system so I'm out for a while but still keep an eye here and on Luxx.

It's just that people here and on Luxx keep repeating the same lies over and over again, and then they kind of become truth even though they aren't.

CPU is important, but it's not like you're getting bottlenecked in TS by any CPU from the last 3 years. Hell, my best graphics-score run was done with a bugged CPU clock which resulted in a trash overall (11K CPU points instead of the usual 14K).

The fact that you need a lower core to run higher memory was known for a long time; between me and Faxtor, we had realized this back in April or whatever it was.

Another problem is how people hide some of their tricks (nothing against that), but then keep saying random stuff that isn't true to dissimulate the real thing. Remember the drama between Faxtor and Shirk on your own forum?

Anyway, impressive stuff, keep it coming. Enjoying the battle, and when I rebuild I'll probably get an XTXH 6900XT to apply some of the newer concepts that snake has been sharing.


----------



## J7SC

hellm said:


> At these high clockrates memory bandwidth is very important. If you can't have more of that, cache would be the next best thing. As it was for any bandwidth problem so far. And if you also can't have more cache, you want a faster one.
> 
> Funny, with lower GPU clock and higher clock rates on memory and cache you can reach a higher score. Science first, as the americans say.
> 
> There is more to overclocking a RDNA2 Radeon than cold temperatures and high voltages. The score is legit, give snakeeyes the credit he deserves. He is a benchmarker, not a hacker. That would be me.


...speaking of memory bandwidth, he indeed is hitting DDR4-4638 or so, while at HWBot he also indicated chilled water for some of his components and a 10900KF even beyond 5600 MHz... he's also 1st in Port Royal and Superposition 8K at HWBot for the 6900XT, btw.

Be that as it may, he obviously knows a heck of a lot about the 6900XT and 10900K/F... if folks suspect something untoward, they should simply flag it for the respective benchmark publishers; this, on the other hand, is the official OCN 6900XT thread.


----------



## hellm

ZealotKi11er said:


> Trick /= Hacks.


I know what it is. And I wouldn't have called myself a hacker before GreenPowerTool. But I was in some kind of writing flow..


weleh said:


> [..]Anyway, impressive stuff, keep it comming. Enjoying the battle and when I rebuild I'll probably get a XTXH 6900XT to apply some of the newer concepts that snake has been sharing.





J7SC said:


> ^^


I think intel came up with something faster now..


----------



## J7SC

hellm said:


> (...)
> 
> 
> I think intel came up with something faster now..


...I can hardly wait for the arguments when Intel releases 'Titicaca Lake' and AMD 'Zen 77'...


----------



## ryouiki

Seems like I _might_ have finally solved my 6900XT issues.

Without any other leads to go on, I ended up ditching 4x8 SR memory kits, and purchased a 4x16 DR memory kit (only installing 2 modules)…. but still unstable. I then spent days tweaking memory/validating with Karhu/testing against games until it seems I found some strange interaction with secondary memory timings that would pass memory testing without an issue, but would result in either a TDR or full system reboot when the GPU was loaded.

After loosening those timings slightly everything appeared to be stable, so finally I did yet another fresh Windows reinstall + 21.11.1 optional drivers, and haven't had an issue since.

I don't know if this is some issue specific to how this motherboard/BIOS sets non-primary timings, maybe aggravated further by PCIe 4.0, but it is a repeatable issue unless I manually set some of the secondary timings. Interestingly it doesn't seem to impact my older 1080Ti; whether that is a function of it only using PCIe 3.0 or some type of error handling by Nvidia's drivers etc. I can't say for sure.

Either way, I'm beyond relieved that the card seems to be working now... especially since finding a GPU right now is an absolute nightmare.

I did a quick Time Spy run which seems to be fairly decent score wise?... but returned the card to defaults since it is already clocked higher than anything I need it for from the factory.

I appreciate the various suggestions given here by various folks while I was troubleshooting this 😁


----------



## kairi_zeroblade

J7SC said:


> ...that's your prerogative...
> 
> Elsewhere, I just started up the left half (w/ 6900XT) for the first time of the Raven_A dual mobo project after bleeding that loop...superb temps even w/o many of the fans. Superposition 4K run at temp delta to ambient of CPU at 19 C, VRAM at 18 C, Hotspot at 37 C.
> 
> ...sorry for the pic quality; better ones once it is all complete and cleaned up...
> 
> View attachment 2531171


Is that a dual system in a Phanteks 719 case? Gorgeous..



ryouiki said:


> fresh Windows reinstall + 21.11.1 optional drivers


Is this on Windows 10? How are the drivers? (I mean, just a follow-up.)



ryouiki said:


> I did a quick Time Spy run which seems to be fairly decent score wise?... but returned the card to defaults since it is already clocked higher than anything I need it for from the factory.


Seems about right for a 6900XT..nothing wrong down there.. (IMHO)


----------



## J7SC

kairi_zeroblade said:


> Is that a Dual system in a Phanteks 719 case? gorgeous..


Tx  ...Dual system (6900XT, 3090) in a 'modded' TT Core P8...


----------



## kairi_zeroblade

J7SC said:


> Tx  ...Dual system (6900XT, 3090) in a 'modded' TT Core P8...


Ohh, thought it was a Phanteks.. damn huge.. never knew the TT Core P8 was also that big..


----------



## J7SC

kairi_zeroblade said:


> Ohh, thought it was a Phanteks.. damn huge.. never knew the TT Core P8 was also that big..


The TT Core P8 is big and HEAVY! Fortunately, I mounted it on a cart w/ wheels, as is the separate 'cooling table' (2520x60+ rads total, 5x D5s total) 

...long story, but it is actually a dual case / four mobo work-play build project to streamline my home office which, pandemic et al, had started to sprawl all over the place across several rooms (I'm in the computer-related field).


----------



## geriatricpollywog

J7SC said:


> TT Core P8 is big and HEAVY ! Fortunately, I mounted it on a cart w/wheels, as is the separate 'cooling table' (2520x60+ rads total, 5x D5s total)
> 
> ...long story, but it is actually a dual case / four mobo work-play build project to streamline my home office which, pandemic et al, had started to sprawl all over the place across several rooms (I'm in the computer-related field).


I’m lucky I only use a computer for work about 20 minutes per day. When I worked on the computer 100%, I wanted nothing to do with computers for play.


----------



## J7SC

0451 said:


> I’m lucky I only use a computer for work about 20 minutes per day. When I worked on the computer 100%, I wanted nothing to do with computers for play.


...I enjoy the best of both worlds


----------



## snakeeyes111




----------



## lawson67

snakeeyes111 said:


>


If you carry on like this there will be a 28k club soon, great score BTW


----------



## bloot

snakeeyes111 said:


>


----------



## CS9K

ryouiki said:


> Seems like I _might_ have finally solved my 6900XT issues.
> 
> Without any other leads to go on, I ended up ditching 4x8 SR memory kits, and purchased a 4x16 DR memory kit (only installing 2 modules)…. but still unstable. I then spent days tweaking memory/validating with Karhu/testing against games until it seems I found some strange interaction with secondary memory timings that would pass memory testing without an issue, but would result in either a TDR or full system reboot when the GPU was loaded.
> 
> After loosening those timings slightly everything appeared to be stable, so finally I did yet another fresh Windows reinstall + 21.11.1 optional drivers, and haven't had an issue since.
> 
> I don't know if this is some issue specific to how this motherboard/BIOS sets non-primary timings, maybe aggravated further by PCIe 4.0, but it is a repeatable issue unless I manually set some of the secondary timings. Interestingly it doesn't seem to impact my older 1080Ti; whether that is a function of it only using PCIe 3.0 or some type of error handling by Nvidia's drivers etc. I can't say for sure.
> 
> Either way, I'm beyond relieved that the card seems to be working now... especially since finding a GPU right now is an absolute nightmare.
> 
> I did a quick Time Spy run which seems to be fairly decent score wise?... but returned the card to defaults since it is already clocked higher than anything I need it for from the factory.
> 
> I appreciate the various suggestions given here by various folks while I was troubleshooting this 😁


I'm glad you seem to have found the solution to your issues. I have a good deal of experience overclocking memory by hand, with midrange and B-die kits alike on Intel _and_ AMD platforms. My go-to tests are Memtest86 (no errors on a full run) and Prime95 Large FFT, no-AVX, all cores for 1 hour.

Hear me out: a lot of folks nowadays **** on Memtest86, but I've found it to be consistent.

I use Memtest86 + P95 LFFT instead of TM5 for a few reasons:

- Memtest86 takes Windows and its house-of-cards driver stack out of the picture, so ALL of the memory can be tested without an OS getting in the way.
- TM5 (and Memtest86) will not load down the memory like actual games and other applications will (you found this out with a TM5 pass, but a crash in games).
- Prime95, Large FFT, *no-AVX*, all threads, for one hour loads down the IMC and memory harder than _*any*_ other application your PC will ever run. I've seen P95 LFFT called "The IMC Beatdown" for this reason. Be advised, your memory may need active cooling if your case airflow is poor; keep the modules under 50C during P95 LFFT.

Don't get me wrong, TM5 is just fine to use (use all the tools in your toolbox, eh?), but every time I've tested stability my way and then run TM5, it has come back clean, so I don't usually use it except as a sanity check at the end of the memory tuning process.

I've spent tens of hours scouring the 'net for insights into memory overclocking, as well as many hours with the JEDEC DDR4 standards publication, to try and wrap my head around memory tuning. If you need further help, or find instability again and suspect your system memory, send me a DM and I will see what I can do to help.


----------



## TaunyTiger

Ajdaho pl said:


> The factory card limit is too low for the 6900XT LDU; my card (6800XT Liquid Devil) with the factory 1150mV on GPU pulls 390W in Time Spy after increasing the limit in MPT - the factory limit was choking it at 350W (300W +15%).
> And when I increase the voltage to 1250mV it pulls up to 500W in Time Spy (480-490), but in games it pulls not much more (1200mV), and now it runs all games non-stop at 2700-2800MHz set in Wattman


Have not been messing with MPT. The card isn't stable @ 2625MHz in Time Spy, but I've been playing lots of RDR2 with 2730-2830MHz set in Adrenalin, which gives me around 2780MHz in-game. So I've been happy with that so far. 

Looking forward to BF2042 and testing in that game.
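
For reference, the "300W +15%" choke described in the quote above is just the stock limit times the Wattman power slider, and it lands well below what the card pulls once MPT lifts the cap. A minimal sketch; the 300 W base is the figure from that post, not a verified board spec:

```python
def effective_power_limit(base_watts: float, slider_percent: float) -> float:
    """Board power limit after the Wattman power-limit slider is applied.

    Computed as base * (100 + slider) / 100 to keep the result exact.
    """
    return base_watts * (100 + slider_percent) / 100

print(effective_power_limit(300, 15))  # 345.0 W, vs the ~390 W the card
                                       # reportedly pulls once MPT raises the cap
```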


----------



## LtMatt

ryouiki said:


> Seems like I _might_ have finally solved my 6900XT issues.
> 
> Without any other leads to go on, I ended up ditching 4x8 SR memory kits, and purchased a 4x16 DR memory kit (only installing 2 modules)…. but still unstable. I then spent days tweaking memory/validating with Karhu/testing against games until it seems I found some strange interaction with secondary memory timings that would pass memory testing without an issue, but would result in either a TDR or full system reboot when the GPU was loaded.
> 
> After loosening those timings slightly everything appeared to be stable, so finally I did yet another fresh Windows reinstall + 21.11.1 optional drivers, and haven't had an issue since.
> 
> I don't know if this is some issue specific to how this motherboard/BIOS sets non-primary timings, maybe aggravated further by PCIe 4.0, but it is a repeatable issue unless I manually set some of the secondary timings. Interestingly it doesn't seem to impact my older 1080Ti; whether that is a function of it only using PCIe 3.0 or some type of error handling by Nvidia's drivers etc. I can't say for sure.
> 
> Either way, I'm beyond relieved that the card seems to be working now... especially since finding a GPU right now is an absolute nightmare.
> 
> I did a quick Time Spy run which seems to be fairly decent score wise?... but returned the card to defaults since it is already clocked higher than anything I need it for from the factory.
> 
> I appreciate the various suggestions given here by various folks while I was troubleshooting this 😁


So it was not an AMD driver issue after all.
Glad you got to the bottom of it.


----------



## LtMatt

jonRock1992 said:


> 3080 Ti for ray tracing. 6900 XT for raw performance and better thermals. Whichever you prefer.


Spot on tbf.


----------



## Skinnered

Ajdaho pl said:


> The factory card limit is too low for the 6900XT LDU; my card (6800XT Liquid Devil) with the factory 1150mV on GPU pulls 390W in Time Spy after increasing the limit in MPT - the factory limit was choking it at 350W (300W +15%).
> And when I increase the voltage to 1250mV it pulls up to 500W in Time Spy (480-490), but in games it pulls not much more (1200mV), and now it runs all games non-stop at 2700-2800MHz set in Wattman


Hmm, could this be the problem with my Sapphire EE LC?

It constantly shuts the system off when going much above 2600MHz.
It does it at random, but often in certain cases: switching from in-game to the menu, or reloading a level.
I will try with a higher power limit and voltages.
I use ReShade and 5K, and I think that also puts a lot more strain/load on the card.
I also noted max hotspot temps reaching near 100 degrees. Bad thermal paste coverage?

I thought this card should run every game stable at somewhere between 2700 and 2800MHz?


----------



## lawson67

Well, I am in the 26k club despite not having the LC BIOS and with my crap VRAM 

I scored 22 426 in Time Spy


----------



## ZealotKi11er

lawson67 said:


> Well i am in the 26k club despite not having the LC bios and with my crap Vram
> 
> I scored 22 426 in Time Spy
> 
> View attachment 2531447


Add +400 points if you had LC vBIOS. Still far away from 27.8K.


----------



## lawson67

ZealotKi11er said:


> Add +400 points if you had LC vBIOS. Still far away from 27.8K.


Everyone with an RX 6900 XT, bar one person, is 1000 points or more away from 27.8k lol


----------



## ZealotKi11er

lawson67 said:


> Everyone with a RX 6900 XT bar one person is a 1000 points or more away from 27.8k lol


I got to 26.2K with no extra voltage. I don't see anything over 26.7-26.9K with my currently known methods. I would like everyone to match snake and dominate 27-28K with the 6900 XT lol. Get those 3090s out of the chart. The 12900K will also play a big role in the combined score.


----------



## chispy

@lawson67 , awesome score. Guys, how do you get such a high graphics score in TS? Is there a trick or tweak I'm missing on my Windows 10 OS? I already maxed out my 6900XT last night, and at a 2900MHz real core clock and 2074MHz real RAM clock, with FCLK, SOC, etc. completely maxed out in MPT, I cannot get anything near you guys  ; my best graphics score is 24700.

Anyone willing to help me get 25k? Any special tweaks and tips for Windows 10 and the TS benchmark (changing resolution, in-driver tweaks, game mode on or off, etc.)? I will really appreciate any guidance and help. Thanks in advance, as I'm completely lost on this and have no idea why I'm getting such low GPU scores when a lot of people with 25k+ scores run much, much lower clocks.


----------



## lawson67

chispy said:


> @lawson67 , awesome score. Guys, how do you get such a high graphics score in TS? Is there a trick or tweak I'm missing on my Windows 10 OS? I already maxed out my 6900XT last night, and at a 2900MHz real core clock and 2074MHz real RAM clock, with FCLK, SOC, etc. completely maxed out in MPT, I cannot get anything near you guys  ; my best graphics score is 24700.
> 
> Anyone willing to help me get 25k? Any special tweaks and tips for Windows 10 and the TS benchmark (changing resolution, in-driver tweaks, game mode on or off, etc.)? I will really appreciate any guidance and help. Thanks in advance, as I'm completely lost on this and have no idea why I'm getting such low GPU scores when a lot of people with 25k+ scores run much, much lower clocks.


I could have gone faster; I left FreeSync on lol, forgot to turn it off, and I had my VRAM overvolted when I'd already proved that makes no difference. But I did that run late last night, so I was tired; I loaded a saved MPT file and adjusted it slightly. The gains are coming from FCLK and SOC: I ran 1500 SOC and 2300 FCLK @ 2923MHz. Also, anyone that has upped vclk and dclk should leave them at stock, as raising them actually makes your scores lower.


----------



## lawson67

chispy said:


> @lawson67 , awesome score. Guys, how do you get such a high graphics score in TS? Is there a trick or tweak I'm missing on my Windows 10 OS? I already maxed out my 6900XT last night, and at a 2900MHz real core clock and 2074MHz real RAM clock, with FCLK, SOC, etc. completely maxed out in MPT, I cannot get anything near you guys  ; my best graphics score is 24700.
> 
> Anyone willing to help me get 25k? Any special tweaks and tips for Windows 10 and the TS benchmark (changing resolution, in-driver tweaks, game mode on or off, etc.)? I will really appreciate any guidance and help. Thanks in advance, as I'm completely lost on this and have no idea why I'm getting such low GPU scores when a lot of people with 25k+ scores run much, much lower clocks.


I forgot to say: you do know that you can gain voltage control over the GPU core via MPT, right? You need to set these feature settings in MPT to allow it. This is all thanks to the time, effort and research put into finding optimal MPT settings by @Ajdaho pl, AKA Sylwester from Hardware Luxx


----------



## chispy

Thank you for the info @lawson67 , appreciate it.


----------



## lawson67

ZealotKi11er said:


> I got to 26.2K with no extra voltage. I dont see anything over 26.7K-26.9K with my current known methods. I would like everyone to mat snake and dominate 27-28K with 6900 XT lol. Get those 3090 out of the chart. 12900K will also play a big role on combined score.


It now transpires that snakeeyes' 27k scores are invalid, according to what he has written on Hardware Luxx; the problem came from a tessellation bug in the driver:

("In my opinion, the score is invalid and was probably caused by a bug in the driver; the problem is tessellation. It was activated as normal in the driver, but apparently still inactive")

He found this out after reinstalling Windows and finding he could not get the same scores back again using the same MPT settings and driver. He then did some research trying to find out what was actually happening and realised that even though tessellation appeared to be active in the driver, and therefore kept 3DMark happy and allowed a valid score, tessellation was in fact not active due to the driver bug. He goes on to say that he will be deleting his TS and Port Royal scores, and carries on to say:

("I just want to emphasize again that this bug was not used deliberately by me! If I hadn't restarted Windows, I probably wouldn't have noticed either. Because even a restart or switching off the PC has not changed anything. The driver still allowed the high fps")

I believe it was a genuine mistake with a bugged driver and think he has done the right thing by admitting his findings. It's a shame that the RX 6900 XT cannot legitimately hit 27k and higher at this moment, but who knows in the future. I also genuinely feel sorry for the guy, and you have to respect his honesty; it must be a great feeling to hit that score, only to then realise you only got it from a driver bug.


----------



## jonRock1992

lawson67 said:


> It now transpires that snakeeyes' 27k scores are invalid, according to what he has written on Hardware Luxx; the problem came from a tessellation bug in the driver:
> 
> ("In my opinion, the score is invalid and was probably caused by a bug in the driver; the problem is tessellation. It was activated as normal in the driver, but apparently still inactive")
> 
> He found this out after reinstalling Windows and finding he could not get the same scores back again using the same MPT settings and driver. He then did some research trying to find out what was actually happening and realised that even though tessellation appeared to be active in the driver, and therefore kept 3DMark happy and allowed a valid score, tessellation was in fact not active due to the driver bug. He goes on to say that he will be deleting his TS and Port Royal scores, and carries on to say:
> 
> ("I just want to emphasize again that this bug was not used deliberately by me! If I hadn't restarted Windows, I probably wouldn't have noticed either. Because even a restart or switching off the PC has not changed anything. The driver still allowed the high fps")
> 
> I believe it was a genuine mistake with a bugged driver and think he has done the right thing by admitting his findings. It's a shame that the RX 6900 XT cannot legitimately hit 27k and higher at this moment, but who knows in the future. I also genuinely feel sorry for the guy, and you have to respect his honesty; it must be a great feeling to hit that score, only to then realise you only got it from a driver bug.


Wow. That's crazy. I would be so disappointed to find out that it wasn't legit. I respect the guy for sharing his findings and removing the scores though!


----------



## J7SC

lawson67 said:


> It now transpires that snakeeyes 27k scores are invalid according to what he has written on Hardware Luxx, the problem came from a tesslation bug in the driver
> 
> ("In my opinion, the score is invalid and was probably caused by a bug in the driver, the problem is tesslation. It was activated as normal in the driver, but apparently still inactive")
> 
> He found out this from reinstalling windows and finding the could not get the same scores back again using the same MPT settings and driver, he then did some research trying to find out what was actually happening and realised that even though tessellation appeared to be active in the driver and therefore kept 3D Mark happy and allowing a valid score that in fact tesslation was not active due to the driver bug, he goes on to say that he will be deleting his TS and Port Royal scores and carries on to say -
> 
> ("I just want to emphasize again that this bug was not used deliberately by me! If I hadn't restarted Windows, I probably wouldn't have noticed either. Because even a restart or switching off the PC has not changed anything. The driver still allowed the high fps")
> 
> I belive it was a genuine mistake with a bugged driver and think it he has done the right thing by admitting his findings, its a shame that the RX 6900 XT can not legitimately at this moment hit 27k and higher, but who knows in the future, i also genuinely feel sorry for the guy and you have to respect his honesty , it must be a great feeling to hit that score only then to realise you only got it from a driver bug


Do you have a link to the Hardwareluxx segment you described (I'm multilingual)? ...haven't been able to find it, and haven't seen any score removal so far... also, Luxx 'members' seem to have the top 7 out of 11 in TS Graphics, and I'd like to check out their threads a bit more. Actually, a lot more 

Elsewhere, Rauf (XOCer, Sweden) has shown what the new 12900K architecture brings to the table


----------



## lawson67

J7SC said:


> Do you have a link to the Hardwareluxx segment you described (I'm multilingual)... haven't been able to find it and haven't seen score removal so far...also Luxx 'members' seem to have the top 7 out of 11 in TS Graphics and I like to check out their threads a bit more, actually, a lot more
> 
> Elsewhere, Rauf (XOCer, Sweden) has shown what new 12900K architecture brings to the table
> 
> View attachment 2531534


Look for snakeeyes' post in this link. As for the score being removed, I think that is a conversation that's going to happen between snakeeyes and UL.


----------



## J7SC

lawson67 said:


> Look for Snakeeyes post in this Link, as for the score to be removed i think that is a conversation that's going to happen between snakeeyes and UL


Thanks, I found it  

I also noticed that there's a separate discussion about the latest MPT beta not playing nice with the latest driver?! Still have more reading to do, but have you folks noticed anything re. MPT (beta 11?) and the latest AMD driver?


----------



## lawson67

J7SC said:


> Thanks, I found it
> 
> also noticed that there's a separate discussion about the latest MPT beta not playing nice with the latest driver ?! Still have more reading to do but have you folks noticed anything re. MPT (beta 11?) and latest AMD driver ?


I am using the final version of MPT and have not noticed any difference or problems. The problem they are talking about is a certain feature selection in MPT that Sylwester from HardwareLuxx found, which does not allow the GPU voltage to drop or the GPU to downclock even on the desktop with no load on it. With this particular feature setting selected in MPT, the driver crashes if you attempt to open Wattman to change frequencies etc. However, some believe this feature selection has helped their scores.


----------



## J7SC

lawson67 said:


> I am using the final version of MPT and have not noticed any difference or problems. The problem they are talking about is a certain feature selection in MPT that Sylwester from HardwareLuxx found, which does not allow the GPU voltage to drop or the GPU to downclock even on the desktop with no load on it. With this particular feature setting selected in MPT, the driver crashes if you attempt to open Wattman to change frequencies etc.
> 
> View attachment 2531549


Thanks ! I'm currently bleeding the 2nd loop of the dual mobo build and will have to update MPT and AMD driver from a few versions back...


----------



## jonRock1992

lawson67 said:


> I am using the final version of MPT and have not noticed any difference or problems. The problem they are talking about is a certain feature selection in MPT that Sylwester from HardwareLuxx found, which does not allow the GPU voltage to drop or the GPU to downclock even on the desktop with no load on it. With this particular feature setting selected in MPT, the driver crashes if you attempt to open Wattman to change frequencies etc. However, some believe this feature selection has helped their scores.
> 
> View attachment 2531549


I have no idea how, but I triggered a bug in one of the drivers that locked the frequency and voltage to max with no fluctuations even on the desktop. I haven't been able to reproduce it though.


----------



## lawson67

jonRock1992 said:


> I have no idea how, but I triggered a bug in one of the drivers that locked the frequency and voltage to max with no fluctuations even in the desktop. I haven't been able to reproduce it though.


If you use those settings from the image i posted above of the Feature setting you will be able to reproduce it


----------



## weleh

Surprised?

No, but you can't say anything without people throwing shade at you for not believing some of the results given all other variables.

If you had benched these cards you would know that the score didn't line up with the clock speeds he was showing.

No need to repeat the same stuff, like "Intel CPUs score better" or "faster CPUs help graphics scores", when the load on the CPU during GT1 and GT2 is nonexistent. 

Holzman's score is more in line with what I would expect from a maxed-out, volt-modded card: 2400 memory clock and almost 2900 effective clock, versus what snakeeyes scored at 2780 effective clock...

Either way, props to him for admitting it, but I haven't seen any scores removed. I also contacted 3DMark today about another person's score, and they seem to be aware of many scores that aren't correct and will be fixed later.


----------



## weleh

And apparently, other Luxx scores were also done with the same "bug".


----------



## lawson67

weleh said:


> And apparently, other Luxx scores were also done with the same "bug".


snakeeyes has had a reply from 3DMark: a bunch of scores that used the same driver as he did (21.9.2) could be removed, so don't use that driver for your runs. I tried it last night but found it slower, so I used 21.8.2


----------



## jonRock1992

I used 21.8.1 to get my best GPU score. It's the best driver for me. I get lower clock speeds with 21.8.2.


----------



## snakeeyes111

Bro, you are really mad. You can reach 27k with good settings and without the bug.

I'm really sorry, but I won't delete my score; some people go to work and earn money, and the small amount of free time I have, I spent on the conversation with 3DMark.

Deal with it or leave it if you are not able to reach scores around 27k. It's not a problem caused by other people, only your own fault.


----------



## nilssohn

Hi,
I am ShirKhan from Luxx. We are currently trying to sort out under which circumstances the tessellation bug can be used to increase scores. Unfortunately, it seems to be not too hard.

@snakeeyes111 has voluntarily reported this bug to our community and to UL/3DMark. Give us and them a reasonable amount of time to figure out what is happening.

There is no good reason for suspicions about any other [LUXX] score. This is new to us, too. Let's keep calm about this.

Regards


----------



## ZealotKi11er

snakeeyes111 said:


> Bro, you are really mad. You can reach 27k with good settings and without the bug.
> 
> I'm really sorry, but I won't delete my score; some people go to work and earn money, and the small amount of free time I have, I spent on the conversation with 3DMark.
> 
> Deal with it or leave it if you are not able to reach scores around 27k. It's not a problem caused by other people, only your own fault.


I am back in the game. The 27.7K score you had before demotivated me; now it's game on again. I can smell 27K, but I still have to wait for the Canadian winter.


----------



## LtMatt

lawson67 said:


> It now transpires that snakeeyes' 27k scores are invalid, according to what he has written on Hardware Luxx; the problem came from a tessellation bug in the driver:
> 
> ("In my opinion, the score is invalid and was probably caused by a bug in the driver; the problem is tessellation. It was activated as normal in the driver, but apparently still inactive")
> 
> He found this out after reinstalling Windows and finding he could not get the same scores back again using the same MPT settings and driver. He then did some research trying to find out what was actually happening and realised that even though tessellation appeared to be active in the driver, and therefore kept 3DMark happy and allowed a valid score, tessellation was in fact not active due to the driver bug. He goes on to say that he will be deleting his TS and Port Royal scores, and carries on to say:
> 
> ("I just want to emphasize again that this bug was not used deliberately by me! If I hadn't restarted Windows, I probably wouldn't have noticed either. Because even a restart or switching off the PC has not changed anything. The driver still allowed the high fps")
> 
> I believe it was a genuine mistake with a bugged driver and think he has done the right thing by admitting his findings. It's a shame that the RX 6900 XT cannot legitimately hit 27k and higher at this moment, but who knows in the future. I also genuinely feel sorry for the guy, and you have to respect his honesty; it must be a great feeling to hit that score, only to then realise you only got it from a driver bug.


I had a feeling something was wrong as I know we have some people here capable of high scores in Timespy through knowledge (not including myself in that) and they have decent silicon and they cannot get near that.

That said, look at this, wow!











ZealotKi11er said:


> I am back in the game. The 27.7K score you had before demotivated me; now it's game on again. I can smell 27K, but I still have to wait for the Canadian winter.


You really have a competitive streak in you don't you Lol.



nilssohn said:


> Hi,
> I am ShirKhan from Luxx. We are currently trying to sort out under which circumstances the tessellation bug can be used to increase scores. Unfortunately, it seems it is not too hard.
> 
> @snakeeyes111 voluntarily reported this bug to our community and to UL/3DMark. Give us and them a reasonable amount of time to figure out what is happening.
> 
> There is no good reason for suspicion about any other [LUXX] score. This is new to us, too. Let's keep calm about this.
> 
> Regards


Fair play and respect to you and LUXX. 

Maybe this is why ol LtMatt is still No1 in Firestrike Extreme (for now) as you can't cheat that one. 



weleh said:


> And apparently, other Luxx scores were also done with the same "bug".


Big oof!

Big shoutout to @jonRock1992 for breaking 25K graphics! 🥳


----------



## lawson67

LtMatt said:


> I had a feeling something was wrong as I know we have some people here capable of high scores in Timespy through knowledge (not including myself in that) and they have decent silicon and they cannot get near that.
> 
> That said, look at this, wow!
> View attachment 2531572
> 
> 
> 
> You really have a competitive streak in you don't you Lol.
> 
> 
> Fair play and respect to you and LUXX.
> 
> Maybe this is why ol LtMatt is still No1 in Firestrike Extreme (for now) as you can't cheat that one.
> 
> 
> Big oof!
> 
> Big shoutout to @jonRock1992 for breaking 25K graphics! 🥳


To be honest, I think some people like ShirKhan from Luxx are not happy that Devcom has done this; it's all going on right now over at Hardwareluxx.


----------



## weleh

snakeeyes111 said:


> Bro, you are really mad. You can reach 27k with good settings and without the bug.
> 
> I'm really sorry, but I won't delete my score. Some people go to work and earn money; the small amount of free time I have, I spend on the 3DMark conversation.
> 
> Deal with it, or leave it if you are not able to reach scores around 27k. It's not other people's problem, only your own fault.


Mad?

Brother, I don't even have a system anymore.
You weren't even a thing on the leaderboards; I was already breaking records on a 6800 XT and then a 6900 XT, on XTX cards, my guy.

Besides, I was the first to share my approach around here: MPT settings, driver settings, Windows settings, for benchmarks, way before most of you guys decided to go public about all the stuff you do.

Nobody is accusing you of anything, and most are even praising you for coming out about this bug, but it doesn't change the fact that I always thought some of the scores were a bit suspicious. Whether you did it on purpose or not, that's on your own conscience.

Scores are public; you are the one that has to deal with people criticising or commenting on them. You don't need to get all mad about it.


----------



## nilssohn

What I wrote above about legit [LUXX] scores has become obsolete with the graphics score of 27,394. I am sorry for that.

Edit: It has been deleted just this second, thx @Devcom.


----------



## LtMatt

lawson67 said:


> To be honest i think some like ShirKhan from Luxx are not happy that Devcom has done this, its all going on right now over there at Hardwareluxx


I know what forum I’ll be reading when I’m at the desktop tomorrow.


----------



## hellm

Sh.., didn't see that coming. Although the gap was quite large, I just didn't expect the cache frequency to have such an impact. Anyway, if someone masters cold temperatures on RDNA2, we might see even higher scores. Here is how you can completely disable thermal monitoring (used to prevent the zero bug on previous Radeon generations).


Code:


SMU_11_0_7_PP_THERMALCONTROLLER_NONE                0
SMU_11_0_7_PP_THERMALCONTROLLER_SIENNA_CICHLID      0x1C

It is not far from the beginning of the table, 69XT Reference:
Start of PowerPlayTable -> A6 09 0F 00 02 22 03 AF 09 00 00 77 40 00 00 80 00 18 00 00 00 *1C* 00 00 00 00 00 00 76 00 00 00
..or in any 69XT mpt-file at offset 0x115.
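For anyone who would rather script the byte flip than hex-edit by hand, here is a minimal Python sketch of the edit described above. Only the 0x115 offset and the 0x1C/0x00 values come from the post (69XT reference table); the function name, sanity checks and backup step are my own additions, so treat this as an illustration at your own risk, not a tested tool.

```python
# Illustrative sketch: flip the thermal-controller byte in a saved
# MorePowerTool .mpt file. Offset 0x115 and the 0x1C -> 0x00 change are
# taken from the post above and only apply to 69XT reference tables.
from pathlib import Path

THERMAL_CTRL_OFFSET = 0x115                  # per the post, 69XT reference
PP_THERMALCONTROLLER_SIENNA_CICHLID = 0x1C   # normal value
PP_THERMALCONTROLLER_NONE = 0x00             # thermal monitoring disabled

def disable_thermal_controller(mpt_path: str) -> None:
    p = Path(mpt_path)
    data = bytearray(p.read_bytes())
    if len(data) <= THERMAL_CTRL_OFFSET:
        raise ValueError("file too short to be a 69XT mpt file")
    if data[THERMAL_CTRL_OFFSET] != PP_THERMALCONTROLLER_SIENNA_CICHLID:
        raise ValueError("unexpected byte at 0x115; wrong file or layout?")
    p.with_suffix(".bak").write_bytes(bytes(data))   # keep an untouched backup
    data[THERMAL_CTRL_OFFSET] = PP_THERMALCONTROLLER_NONE
    p.write_bytes(bytes(data))
```

The check on the existing byte is there so the script refuses to touch a file that doesn't look like a 69XT reference table.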


----------



## Skinnered

Skinnered said:


> Hmm, could this be the problem with my Sapphire EE LC?
> 
> It constantly shuts the system off when going much above 2600MHz.
> It does it randomly, but often in certain cases: switching from in-game to the menu, or reloading a level.
> I will try with a higher power limit and voltages.
> I use ReShade and 5K, and I think that also puts a lot more strain/load on the card.
> I also noted max hotspot temps reaching near 100 degrees. Bad thermal paste coverage?
> 
> I thought I should be able to run every game at least between 2700 and 2800MHz stable with this card?


I tried upping the power limit, GPU TDC and SoC TDC.
I tried 420, 415 and 65 respectively. (Raising the voltage crashes immediately in-game.)
No improvement; the GPU often shuts the system down above 2600MHz.
Do I have a faulty card, or just bad luck?

It almost looks like a power issue (on the mobo?).
The strange thing is, I can run 2800+/2160 for some time.


----------



## ZealotKi11er

If the system shuts down, there are two likely causes: the PSU, or you hit a temperature limit that causes the system to shut down to protect itself.


----------



## weleh

What PSU do you have?

These cards can peak at over 600W even at stock settings, let alone overclocked and with the limits removed...
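To put rough numbers on why transient peaks matter when sizing a PSU, here is a toy calculation. Every wattage and the 1.6x transient factor below are made-up illustrative assumptions, not measurements of any particular card or system.

```python
# Toy PSU headroom estimate. The 1.6x transient factor and all wattages
# here are illustrative assumptions only; real spike behaviour varies
# per card, PSU and platform.
def psu_headroom_watts(psu_rating: float, rest_of_system: float,
                       gpu_sustained: float, transient_factor: float = 1.6) -> float:
    """Margin left in watts; a negative result means spikes may trip OCP."""
    gpu_peak = gpu_sustained * transient_factor
    return psu_rating - (rest_of_system + gpu_peak)

# e.g. an 850W unit, ~150W for CPU/board/drives, 400W sustained GPU load:
# 850 - (150 + 400 * 1.6) = 60W of margin, which a hard spike can still eat.
```

Which is why a card that "only" averages 400W can still take down a nominally adequate PSU on transients.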


----------



## CS9K

hellm said:


> Sh.., didn't see that coming. Although the gap was quite large, I just didn't expect the cache frequency to have such an impact. Anyway, if someone masters cold temperatures on RDNA2, we might see even higher scores. Here is how you can completely disable thermal monitoring (used to prevent the zero bug on previous Radeon generations).
> 
> 
> Code:
> 
> 
> SMU_11_0_7_PP_THERMALCONTROLLER_NONE                0
> SMU_11_0_7_PP_THERMALCONTROLLER_SIENNA_CICHLID      0x1C
> 
> It is not far from the beginning of the table, 69XT Reference:
> Start of PowerPlayTable -> A6 09 0F 00 02 22 03 AF 09 00 00 77 40 00 00 80 00 18 00 00 00 *1C* 00 00 00 00 00 00 76 00 00 00
> ..or in any 69XT mpt-file at offset 0x115.


Hmm, time to give one of my radiators an ice bath, you say? 😈


----------



## Skinnered

I've replaced the PSU with a be quiet! Dark Power Pro 12 1500W and connected the PCIe power cables dedicated/not shared.
As for temperatures, I hit 65/97 for core and hotspot respectively, according to MSI AB, but sometimes it happens at lower temps.
I will test with the fans on a higher setting to see if it continues.


----------



## Skinnered

I'm almost ashamed of myself; it was just a matter of raising the fan speeds.  Steady 2800+ without issues so far. Temps are 55/70. 
I went this route because I want it to stay quiet, and thought 65/97 temps couldn't hurt, but apparently they do.

Funny how I feel now about this monster


----------



## hellm

CS9K said:


> Hmm, time to give one of my radiators an ice bath, you say? 😈


This might be a solution for any strange behaviour from the driver at sub-zero temperatures. I can't promise anything, but as I mentioned, this was the solution on previous generations. Best case, the driver no longer sees any temperatures. Needs to be tested.


----------



## J7SC

CS9K said:


> Hmm, time to give one of my radiators an ice bath, you say? 😈


Why stop there? After the water-ice, add some DICE on top of the water/ice in the bucket/basin and watch it dance, steam and make a big racket (it can even scratch up your rad fins). I used to do that back in my 4-GPU HWBot benching days... the whole thing lasted less than a minute before the fun was over and temps went back up


----------



## ZealotKi11er

-30C air is as low as I will go. We're getting a hot November here in Canada.


----------



## J7SC

ZealotKi11er said:


> -30C air is as low as I will go. We're getting a hot November here in Canada.


...Out here on the West Coast it's colder (for now), with snow on the local mountains for a few days now... it might swing a bit, but a La Niña year is confirmed, so the Pacific Northwest should be below average this winter. Good benching weather!


----------



## CS9K

I live in Texas. Even during the autumn and winter, it is still humid at times. I wish I could go cold, but I am not prepared to deal with the condensation that would follow.

Ambient water cooling for me, only... Though, I may still try the SPPT mod and see what happens.


----------



## Bobbydo

Hi guys, this is my best score so far. Is there a way to reach a graphics score of 25k or more, using MorePowerTool or anything like that? I have the Asus Strix 6900 XT LC GAMING Top T16G.
Settings: core 2650-2750, memory 2150 with fast timings, power +15, voltage 1170mV (I also tried 1125, 1180 and 1150).


----------



## ZealotKi11er

Do you have ReBAR enabled? And yes, with MPT you will definitely gain some score.


----------



## Bobbydo

Yes, I have ReBAR enabled. When I go above 2760MHz the GPU crashes to the desktop; it times out, and I'm not sure why. I have a 1000W Gold power supply, and the processor is an 11700F. So you get the idea.


----------



## Bobbydo




----------



## zGunBLADEz

If I can do almost 23k without even really trying yet on a STOCK reference 6800 XT... on my 6800 XT I'm tapped out by the drivers at 2.8GHz with so much room left, my voltage a mere 1.175V, and these people are crying bloody murder around here like :rolleyes:
And all I did was up the voltage and power in MPT, like lolz... not even an unlocked BIOS, lol.

BTW, I don't post on Luxx, I'm just a lurker there, and I saw the whole "OCN, cry me a river" / "they're cheating over there at Luxx, bloody murder!!!" thing while reading up on MPT (where I got the news about the volt unlocker) and watching some scores and the UL reporting. It doesn't surprise me at all for the OCN fellas... idk, splav comes to mind in such instances, lol.


----------



## CS9K

hellm said:


> This might be a solution for any strange behaviour from the driver at minus temperatures. I can't promise anything, but this was the solution on previous generations, as i mentioned. Best case, the driver doesn't know any temperatures anymore. Needs to be tested.


_edit_ To all: When benchmarking, I do not set a minimum core clock speed. It never made a difference for me, so I just leave it alone and only set a maximum core clock speed.

I tried changing this setting, and the only difference that I noticed, is that instead of my actual clock vs set clock being 50MHz apart (2700MHz set, actual 2650MHz on average, range from 2600-2650MHz depending on load), instead the actual clock speed sat firmly at 100MHz behind set clock (2700MHz set, 2600MHz actual). It never boosted up to 2650MHz. This behavior followed the set-clock. 
I could tell no other difference in performance, so I changed it back.


----------



## zGunBLADEz

CS9K said:


> _edit_ To all: When benchmarking, I do not set a minimum core clock speed. It never made a difference for me, so I just leave it alone and only set a maximum core clock speed.
> 
> I tried changing this setting, and the only difference that I noticed, is that instead of my actual clock vs set clock being 50MHz apart (2700MHz set, actual 2650MHz on average, range from 2600-2650MHz depending on load), instead the actual clock speed sat firmly at 100MHz behind set clock (2700MHz set, 2600MHz actual). It never boosted up to 2650MHz. This behavior followed the set-clock.
> I could tell no other difference in performance, so I changed it back.


Did you change the power envelope? To, say, 800W for example? And the setting that forces the set voltage?

BTW, I did see my 6800 XT hitting/kissing 500W at 1.25V the first time I did that, lol, so be careful.


----------



## CS9K

zGunBLADEz said:


> Did you change the power envelope? To, say, 800W for example? And the setting that forces the set voltage?
> 
> BTW, I did see my 6800 XT hitting/kissing 500W at 1.25V the first time I did that, lol.


I did not. While I _do_ have 16ga wires in my two 8-pin PCIe power connectors, I don't trust the reference PCB nor power delivery for more than about 400W for benching and daily-driving. While I don't mind pushing the limits a little, I _am_ here to enjoy my $1000 GPU in games, not worry about benchmark scores. Also, I refuse to run the "temp dependent vmin" outside of the one time I tried the setting, as _any_ load ramps the voltage up to the user-set voltage, regardless of clock speed and load. No thanks.


----------



## Bobbydo

CS9K said:


> _edit_ To all: When benchmarking, I do not set a minimum core clock speed. It never made a difference for me, so I just leave it alone and only set a maximum core clock speed.
> 
> I tried changing this setting, and the only difference that I noticed, is that instead of my actual clock vs set clock being 50MHz apart (2700MHz set, actual 2650MHz on average, range from 2600-2650MHz depending on load), instead the actual clock speed sat firmly at 100MHz behind set clock (2700MHz set, 2600MHz actual). It never boosted up to 2650MHz. This behavior followed the set-clock.
> I could tell no other difference in performance, so I changed it back.


When I don't set a min clock, I get 21k in Time Spy no matter what, 22k if I undervolt. GPU power usage goes as high as 450W, at least from what I saw during the benchmark.


----------



## zGunBLADEz

CS9K said:


> I did not. While I _do_ have 16ga wires in my two 8-pin PCIe power connectors, I don't trust the reference PCB nor power delivery for more than about 400W for benching and daily-driving. While I don't mind pushing the limits a little, I _am_ here to enjoy my $1000 GPU in games, not worry about benchmark scores. Also, I refuse to run the "temp dependent vmin" outside of the one time I tried the setting, as _any_ load ramps the voltage up to the user-set voltage, regardless of clock speed and load. No thanks.


That's one of the tricks, lol.
I did try it out; like I said, the GPU went full ballistic on that one at 1.25V, so if you don't have hefty cooling and a hefty PSU, well, you know what's up with that... then I started notching back down, lol.
On Luxx these peeps were hitting 1kW loads, so the scores don't surprise me... if I hit peaks of 500W, I don't even want to see the transients on those peaks, lol. Lucky it didn't trigger OCP over here.

Yeah, these GPUs go ballistic; they do have power, and they are VERY, VERY, AND VERY power starved for a reason. They are monsters, that's why. When I saw I almost hit a 23k Time Spy graphics score at just 1.175V, I knew I was being held in place by the BIOS/drivers and whatnot... mostly power starved and locked behind that 2.8GHz wall. The card wasn't even sweating on my cooling.


----------



## CS9K

zGunBLADEz said:


> That's one of the tricks, lol.
> I did try it out; like I said, the GPU went full ballistic on that one at 1.25V, so if you don't have hefty cooling and a hefty PSU, well, you know what's up with that... then I started notching back down, lol.
> On Luxx these peeps were hitting 1kW loads, so the scores don't surprise me... if I hit peaks of 500W, I don't even want to see the transients on those peaks, lol. Lucky it didn't trigger OCP over here.
> 
> Yeah, these GPUs go ballistic; they do have power, and they are VERY, VERY, AND VERY power starved for a reason. They are monsters, that's why. When I saw I almost hit a 23k Time Spy graphics score at just 1.175V, I knew I was being held in place by the BIOS/drivers and whatnot... mostly power starved and locked behind that 2.8GHz wall. The card wasn't even sweating on my cooling.


I myself only tried 1200mV on my reference RX 6900 XT. I got another 90MHz out of my overclock, and it was neat, but until I can set that voltage and still have the card treat the rest of the MHz/Voltage curve properly, I'll pass.

But I agree, it is _bonkers_ that we went from 2100MHz tops one generation, to RDNA2 regularly doing 2500MHz, and the top end pushing nearly 3GHz the next generation.


----------



## J7SC

CS9K said:


> I did not. While I _do_ have 16ga wires in my two 8-pin PCIe power connectors, I don't trust the reference PCB nor power delivery for more than about 400W for benching and daily-driving. While I don't mind pushing the limits a little, I _am_ here to enjoy my $1000 GPU in games, not worry about benchmark scores. Also, I refuse to run the "temp dependent vmin" outside of the one time I tried the setting, as _any_ load ramps the voltage up to the user-set voltage, regardless of clock speed and load. No thanks.


I keep my card's PL limit at 450W all in, and that's for a decent 3x8 pin custom PCB and extensive custom w-cooling. I might push that a bit more during benching in the winter, but for daily use, the odd bench to test a new driver, and gaming, that's my limit.

Benching can be great fun, unless it takes over most of one's free time. I've done a fair bit of it before, including XOC. But one can get caught up 'in the moment' and push the power and voltage limits too far, even if the effect is not 'immediate' ... It is easy to forget that this GPU gen is a complex 7nm (focused heat) piece of electronic equipment with almost 27 billion transistors. Then there are the temporary 'spikes' which probably don't fully show up in HWInfo every time, so a bit of a buffer via a lower limit helps.

I use MPT mostly to extend the PL to said level and leave the rest alone, such as 'temp dep. vmin'...I really only miss VRAM MHz limit extensions on my non-H model which surprised me with the clocks it can do right out of the box - and it is whisper-quiet in this new w-cooling config. The 6900 XT setup replaced an older but trusty 5960X HEDT Asus X99 workstation board w/ 2x 980s as I shifted everything to 4K now on the work-and-play front, and it makes up half of the primary-use daily systems - I tend to keep these things for a while...hopefully this 6900XT as well.


----------



## CS9K

J7SC said:


> I keep my card's PL limit at 450W all in, and that's for a decent 3x8 pin custom PCB and extensive custom w-cooling. I might push that a bit more during benching in the winter, but for daily use, the odd bench to test a new driver, and gaming, that's my limit.
> 
> Benching can be great fun, unless it takes over most of one's free time. I've done a fair bit of it before, including XOC. But one can get caught up 'in the moment' and push the power and voltage limits too far, even if the effect is not 'immediate' ... It is easy to forget that this GPU gen is a complex 7nm (focused heat) piece of electronic equipment with almost 27 billion transistors. Then there are the temporary 'spikes' which probably don't fully show up in HWInfo every time, so a bit of a buffer via a lower limit helps.
> 
> I use MPT mostly to extend the PL to said level and leave the rest alone, such as 'temp dep. vmin'...I really only miss VRAM MHz limit extensions on my non-H model which surprised me with the clocks it can do right out of the box - and it is whisper-quiet in this new w-cooling config. The 6900 XT setup replaced an older but trusty 5960X HEDT Asus X99 workstation board w/ 2x 980s as I shifted everything to 4K now on the work-and-play front, and it makes up half of the primary-use daily systems - I tend to keep these things for a while...hopefully this 6900XT as well.


Aye, of all the things to give us via SPPT, we have VDDIO (memory) control past 1350mV, but are still stuck with a 2150MHz lock on non-LC GPU's. Makes no damn sense, since there's a decent bit more performance just sitting on the PCB waiting to be tapped in to.

Have patience, CS9K... we'll get there...


----------



## J7SC

CS9K said:


> Aye, of all the things to give us via SPPT, we have VDDIO (memory) control past 1350mV, but are still stuck with a 2150MHz lock on non-LC GPU's. Makes no damn sense, since there's a decent bit more performance just sitting on the PCB waiting to be tapped in to.
> 
> Have patience, CS9K... we'll get there...


...yeah, and to add 'insult to injury', AMD's own software tells me that my VRAM oc can be 2260 MHz after that little test it runs - and I'm sure that's somewhat conservative.


----------



## WR-HW95

I finally managed to break 24k in TS with my reference card.
Graphics Score: 24 198
I had to use a 450W PL and 1.218V to get there, because my card is a bad clocker.
But what bothers me is that the memory behaviour is even worse than last time.
A month ago I got 23.8k with the memory set at 2150; two weeks later I had to drop it to 2140 to score even 23.7k.
Now I had to drop down to 2110 for my best score.
Funny how it works like that, because in the run where I scored 22 020 in Time Spy the average memory clock was about the same as now, but the best performance was with 2150 set.
Also... someone said the memory OC test tells you how much it can run. For me it throws an error and resets the settings.


----------



## weleh

zGunBLADEz said:


> If I can do almost 23k without even really trying yet on a STOCK reference 6800 XT... on my 6800 XT I'm tapped out by the drivers at 2.8GHz with so much room left, my voltage a mere 1.175V, and these people are crying bloody murder around here like :rolleyes:
> And all I did was up the voltage and power in MPT, like lolz... not even an unlocked BIOS, lol.
> 
> BTW, I don't post on Luxx, I'm just a lurker there, and I saw the whole "OCN, cry me a river" / "they're cheating over there at Luxx, bloody murder!!!" thing while reading up on MPT (where I got the news about the volt unlocker) and watching some scores and the UL reporting. It doesn't surprise me at all for the OCN fellas... idk, splav comes to mind in such instances, lol.


So you only used new drivers, upped the voltage, and applied all the other tips and tricks that have already been available for the past 6 months, to get a mediocre score on a 6800 XT? This is why parachuting randomly into a conversation doesn't do any good.

Nice, buddy. Next time try harder; your trolling is about 2/10.

Nobody reported any score from LUXX, so I have no idea how you or anyone else over at LUXX came to that conclusion.

Here's the score that I personally reported.









I scored 47 802 in Fire Strike: AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 11 (www.3dmark.com)





Clearly tessellation was off, but you wouldn't know. In fact, the score has been invalidated, so I guess I was right.


----------



## CS9K

Can we keep it classy in here, please? Regardless of who thinks they were right, both of yall are being skillets. We're better than this.

Else yall invoke XKCD: "Duty Calls" (xkcd.com)


----------



## zGunBLADEz

CS9K said:


> I myself only tried 1200mV on my reference RX 6900 XT. I got another 90MHz out of my overclock, and it was neat, but until I can set that voltage and still have the card treat the rest of the MHz/Voltage curve properly, I'll pass.
> 
> But I agree, it is _bonkers_ that we went from 2100MHz tops one generation, to RDNA2 regularly doing 2500MHz, and the top end pushing nearly 3GHz the next generation.


Yeah, it gets to a point though. In my case, I prefer to keep it under 200W TDP. The difference in performance versus the power needed is negligible at best, only measurable in benchmarks. The power and heat generated aren't worth it for regular usage; it looks nice when you benchmark, but for 2x the TDP? Lol, no.





weleh said:


> So you only used new drivers, upped the voltage and all the other tips and tricks already available for the past 6 months to do a mediocre score on a 6800XT? This is why dropping in a parachute randomly into a conversation doesn't do any good.
> 
> Nice buddy. next time try harder, your trolling is about 2/10.
> 
> Nobody reported any score from LUXX, so I have no idea where you or anyone else over at LUXX came to that conclusion.
> 
> Here's the score that I personally reported.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 47 802 in Fire Strike: AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 11 (www.3dmark.com)
> 
> 
> 
> 
> 
> Clearly tessellation was off, but you wouldn't know. In fact, the score has been invalidated, so I guess I was right.
> 
> View attachment 2531659


Let me see one instance of one user having a driver bug... but let me "blindly quote": "possibly more users from Luxx are using these exploits."

Mediocre score? For a 6800 XT? Yeah, right, lol... funny though, I'm not even trying on her, lol.


----------



## weleh

zGunBLADEz said:


> Yeah, it gets to a point though. In my case, I prefer to keep it under 200W TDP. The difference in performance versus the power needed is negligible at best, only measurable in benchmarks. The power and heat generated aren't worth it for regular usage; it looks nice when you benchmark, but for 2x the TDP? Lol, no.
> 
> Let me see one instance of one user having a driver bug... but let me "blindly quote": "possibly more users from Luxx are using these exploits."
> 
> Mediocre score? For a 6800 XT? Yeah, right, lol... funny though, I'm not even trying on her, lol.


Not only was I right, but the scores have since been removed; manually removed by said users, or there would have been no reason for them to disappear at all.

Besides, nobody here said anything other than expressing doubt about how certain scores were attained. The fact that so many people got so defensive about this speaks louder than anything I could ever write. 

At the end of the day, competing is only fun if it's done legitimately, and therefore I can only praise the people who noticed the bug and reported it, to keep the competition fair and square.


----------



## J7SC

CS9K said:


> *Can we keep it classy in here, please? *Regardless of who thinks they were right, both of yall are being skillets. We're better than this.
> 
> Else yall invoke XKCD: "Duty Calls" (xkcd.com)


...ok, I'll keep my Classies here then instead of selling them 


Spoiler

...btw, they worked great with EVBot... you could switch GPU core, VRAM and other voltages and parameters during a run. That was great for 3DM GT1 vs GT2, making changes on the fly mid-bench. I suppose Elmor's EVC2SX would do similar things for a 6900 XT, but EVBot was a simple plug-in/plug-out, even on brand-new unopened cards.


----------



## LtMatt

Why can’t we just all get along?


----------



## ZealotKi11er

Tomorrow the 12900K is in the house. I am expecting 1,000 extra points in GPU score thanks to this CPU.


----------



## Ajdaho pl

I can see that many people here are butthurt: you are accusing others of cheating without any evidence. Weleh or Zealot, if you claim that there was cheating with tessellation or anything else, then please prove it; do a test with this alleged edge and beat the results. Because so far, to me, this is the crying of people who cannot accept a loss.


----------



## ZealotKi11er

Ajdaho pl said:


> I can see that many people here are butthurt: you are accusing others of cheating without any evidence. Weleh or Zealot, if you claim that there was cheating with tessellation or anything else, then please prove it; do a test with this alleged edge and beat the results. Because so far, to me, this is the crying of people who cannot accept a loss.


We did not accuse anyone of cheating. Both Weleh and I have tested these GPUs a lot and know what is possible with the current "tricks". We knew that with those clock speeds most people would be even further behind. It was either a new trick that someone found, or some driver issue. Looking at snake's scores, he is pretty good at this. At some point I believed we needed an Intel CPU, because he was crushing everyone.


----------



## Ajdaho pl

I don't believe it. I'm pretty sure an Intel platform gives a better 3DMark GPU score, especially since the introduction of Resizable BAR. It's not the processor, it's the memory subsystem, which on Intel achieves low latencies not available on AMD processors. You only need to look at the HOF scores to see that in most cases an Intel-based GPU needs lower clock speeds to achieve the same GPU score as an AMD platform.


----------



## ZealotKi11er

Ajdaho pl said:


> I don't believe it. I'm pretty sure an Intel platform gives a better 3DMark GPU score, especially since the introduction of Resizable BAR. It's not the processor, it's the memory subsystem, which on Intel achieves low latencies not available on AMD processors. You only need to look at the HOF scores to see that in most cases an Intel-based GPU needs lower clock speeds to achieve the same GPU score as an AMD platform.


The point was that the Time Spy graphics score can only be influenced so much by the CPU.


----------



## The EX1

Grabbed a Sapphire TOXIC Air cooled as soon as I saw them on Newegg, and I was assured that it would be XTXH.

Nope. The card reports as Navi XTX and runs out of steam at 2600MHz before crashing. It also lists 1.175V as the maximum voltage, which = XTX.

Extremely disappointed with Sapphire.


----------



## ZealotKi11er

The EX1 said:


> Grabbed a Sapphire TOXIC Air cooled as soon as I saw them on Newegg and I was assured that it would be XTXH.
> 
> Nope. Card reports Navi XTX and runs out of steam at 2600mhz before crashing. It also lists 1.175 as maximum voltage which = XTX.
> 
> Extremely disappointed with Sapphire.


How much was it?


----------



## Ajdaho pl

ZealotKi11er said:


> The point was that the Time Spy graphics score can only be influenced so much by the CPU.


I don't know if it's the processor or the memory subsystem, but as I was just looking through Google for memory test results on Alder Lake... you might be disappointed: latency of 90ns?


----------



## The EX1

ZealotKi11er said:


> How much was it?


$1,799


----------



## weleh

Ajdaho pl said:


> I can see that many people here are pinching the bottom part of their backs, you are accusing others of cheating, without any evidence, Weleh or Zealot if you claim that there was cheating with tessellation or anything then please prove it, do a test with this alleged edge and beat the results, because so far for me it is a cry of people who can not accept the loss


Just finished watching the movie Malignant, so I'm going off to bed now, but before I go I want to ask: where do you see me saying someone was cheating?

You guys, over on your own forum, were the ones crying at each other, not me. You guys are the ones finding bugs and exploits, even if by mistake, not me. Now you come here to cry about it and write on your own forum that we are mad at you for having higher scores? Bro, please... don't make me laugh.

I've merely drawn conclusions based on what I read over there.

You had this drama some months ago, and now this again. The same tessellation bug was reported here too, but contrary to LUXX, here there has never been any drama about scores, hacks or cheats, because we all share our findings with each other in a positive way, in the hope that everyone can get the best out of their cards.

Maybe it's some sort of language barrier from translations, or maybe you guys are just taking all of this too seriously, but nobody here wants anyone going to prison over some 3DMark scores. Hell, I don't even have a system anymore; I sold it and moved countries.

Sleep well everyone, good night.


----------



## J7SC

Over at the 'HWBot farm', Alder Lake is starting to make its presence felt a bit... As is usually the case when new CPUs or GPUs come out, XOC folk will first fill up their USB sticks with backup results, then test the waters with subs, e.g. a new top 3DM FSEx score today: 6700 MHz on the CPU, DDR5 at 6400 / CL24, and the 6900XT at 3200 MHz or thereabouts, probably not on an AIO 

I look forward to seeing what happens between Alder Lake and the upcoming AMD Zen 3 with stacked V-Cache.


----------



## jonRock1992

The EX1 said:


> Grabbed a Sapphire TOXIC Air cooled as soon as I saw them on Newegg and I was assured that it would be XTXH.
> 
> Nope. Card reports Navi XTX and runs out of steam at 2600mhz before crashing. It also lists 1.175 as maximum voltage which = XTX.
> 
> Extremely disappointed with Sapphire.


Dude I'd be pissed. Return it.


----------



## LtMatt

The EX1 said:


> Grabbed a Sapphire TOXIC Air cooled as soon as I saw them on Newegg and I was assured that it would be XTXH.
> 
> Nope. Card reports Navi XTX and runs out of steam at 2600mhz before crashing. It also lists 1.175 as maximum voltage which = XTX.
> 
> Extremely disappointed with Sapphire.


There are two versions. 1) Toxic Extreme Edition (XTXH). 2) Toxic Limited Edition (XTX).

Can you share a link to the retailer and I will check to see if they are labelled incorrectly.

EDIT - Sorry misunderstood, you are referring to the Toxic Air. Not sure about that one.


----------



## Ajdaho pl

weleh
I read your posts and I see that you badly need some ointment for the pain... Does it hurt that much that your scores fell so far? Now that the soft volt mod has been discovered, will anyone following the LUXX guidelines destroy those scores? Pathetic.
Look again at what you wrote. You do not accuse anyone, you "only" insinuate: "not possible at this clock", "probably others from LUXX do it too", "I was right".
The same goes for the guy you reported. And did you look at the next guy after him, the current first place? In one graphics test his result is 7 fps better and in the second it is worse, but never mind; the important thing is that he got all of this at even lower clocks. You are an informer, and at your request someone's probably honest result was erased. I do not know where you are from, but in my country informers have nothing to be proud of.

Yesterday, after reading your envious posts, my blood boiled. At 2 AM I ran a test following the guidelines Snake found: lower SOC allows a higher fclk, and there you go. With the same GPU clocks and the fclk raised by 50 MHz, I raised my graphics score by 76 points. You do not even know how to tune these cards; you have no idea. Now you can report me to UL "because he has too many fps". Go ahead.
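As a quick sanity check of the numbers above, here is a naive linear extrapolation of that fclk scaling (a sketch only; real scaling is unlikely to stay linear, and the points-per-MHz figure comes straight from the single run reported in this post):

```python
# Rough sanity check of the claimed fclk scaling:
# +50 MHz fclk -> +76 graphics points at unchanged GPU clocks.
def projected_gain(fclk_delta_mhz, points_per_mhz=76 / 50):
    """Naively extrapolate graphics-score gain from an fclk bump."""
    return fclk_delta_mhz * points_per_mhz

print(projected_gain(50))   # 76.0, the reported run
print(projected_gain(100))  # 152.0, optimistic: assumes scaling stays linear
```

At ~1.5 points per MHz, a 76-point gap is well within what an fclk change alone could explain, which is the point being argued here.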


----------



## geriatricpollywog

Ajdaho pl said:


> weleh
> I read your posts and I see that you badly need some ointment for the pain... Does it hurt that much that your scores fell so far? Now that the soft volt mod has been discovered, will anyone following the LUXX guidelines destroy those scores? Pathetic.
> Look again at what you wrote. You do not accuse anyone, you "only" insinuate: "not possible at this clock", "probably others from LUXX do it too", "I was right".
> The same goes for the guy you reported. And did you look at the next guy after him, the current first place? In one graphics test his result is 7 fps better and in the second it is worse, but never mind; the important thing is that he got all of this at even lower clocks. You are an informer, and at your request someone's probably honest result was erased. I do not know where you are from, but in my country informers have nothing to be proud of.
> 
> Yesterday, after reading your envious posts, my blood boiled. At 2 AM I ran a test following the guidelines Snake found: lower SOC allows a higher fclk, and there you go. With the same GPU clocks and the fclk raised by 50 MHz, I raised my graphics score by 76 points. You do not even know how to tune these cards; you have no idea. Now you can report me to UL "because he has too many fps". Go ahead.
> View attachment 2531762
> 
> 
> View attachment 2531763


Wow 24.5k graphics score on a 6800XT. Is that a Ford Fusion bumper?

You are like Iceman from the Hollywood blockbuster Top Gun.


----------



## LtMatt

Move along please folks, nothing to see here.


----------



## Ajdaho pl

0451 said:


> Wow 24.5k graphics score on a 6800XT. Is that a Ford Fusion bumper?
> 
> You are like the Ice Man from Hollywood blockbuster, Top Gun.


No, it was the door handles from a Fiat 125p that gave it that kick, plus the purple neon underneath.
This is the result of collaboration: many people sharing their settings and insights on Hardwareluxx. Without them I would not have done anything.


----------



## The EX1

jonRock1992 said:


> Dude I'd be pissed. Return it.


Newegg makes it hard to return opened GPUs. Too many people returning cards because they are bad overclockers, I guess.
I am pretty pissed, yeah. The original Toxic AIO card was out before XTXH was really on the market, so I understand that SKU getting an XTX. What I don't get is Sapphire putting the regular XTX in their Nitro+, Nitro+ SE, and air-cooled Toxic offerings. The Toxic should have been XTXH to make it special, especially since the Nitro SE has the Toxic PCB too. At least the card is nice to look at?


LtMatt said:


> There are two versions. 1) Toxic Extreme Edition (XTXH). 2) Toxic Limited Edition (XTX).
> 
> Can you share a link to the retailer and I will check to see if they are labelled incorrectly.
> 
> EDIT - Sorry misunderstood, you are referring to the Toxic Air. Not sure about that one.


XTX, evidently. I was shocked that they skimped on it. I played with the Toxic Boost button just for fun and it consistently makes the card downclock and undervolt to around 1130mv. When Boost is turned on, it sets a target clock of 2500mhz on my card. The issue is that the card already boosts to 2480ish in stock form, so it essentially downclocks 50mhz. Even the Toxic Boost gimmick is a fail on the air version.

Gave the card all the power it needed with MPT, but it could only sustain an actual 2650mhz core clock along with 2110mhz on the memory. The card does have a much better cooler mount than my XTX Red Devil: hotspot stays in the low 90s, compared to 98-102C on the Devil and the 108C I get on my reference card.


----------



## weleh

J7SC said:


> Over at the 'HWBot farm', Alder Lake is starting to make its presence felt a bit...as is usually the case when new CPUs or GPUs come out, XOC folk will first fill up their USB sticks w/backup results, then test the waters with subs, ie. a new top 3DM FSEx score today...6700 MHz on the CPU, DDR5 at 6400 / CL24, and the 6900XT at 3200 MHz or thereabouts - probably not on an AIO
> 
> I look forward to see what will happen between Alder Lake and the upcoming AMD Zen3 w/ stacked V-Cache.


Whoever doesn't have a 12th Gen CPU and wants to be competitive on HWBOT is royally screwed. Overclocking was always pay-to-win.


----------



## weleh

Ajdaho pl said:


> blablabla spam spam blablabla cry cry


I don't know if this is a language barrier or what, but I'll repeat myself for the 10th time, though this will be my last.

Not a single person in this thread called anyone a cheater. You guys, on the other hand, are fuming at each other on your own forum over who is using the bug and who isn't, and it's not the first time, as you might remember yourself.

Not a single soul here went to harass you on your own forum. You, however, keep sending your brigade here to talk smack while doing the same at your own forum. Looks like some of you have a hurt ego or something. 

Here, we merely expressed doubt and amazement at some of the scores, and it turns out we were right, since some of the scores were removed. Correct me if I'm wrong, but that looks like an admission of guilt.

I made a comment speculating that some other people over at your forum had used the bug, intentionally or not. It turns out I was also right, because more than one person removed their scores. 

Nobody reported anyone from your forum to UL. I already showed you the email with the score that I personally reported; UL replied saying they are aware, and that more scores look out of place.

You also think coming here and attacking me or my scores makes you look like a big guy when, in fact, it's just hella cringe. 

May I remind you again that I was using a 6900 XTX card? May I remind you I wasn't on a custom loop with MORAs sitting outside, or using DICE to lower temps? May I remind you my card doesn't have cap mods or liquid metal? May I remind you I was benching almost half a year ago? May I remind you I don't remember seeing any of you on the leaderboard back when me, L1ME and Faxtor2 were the top cards over there for months? May I remind you I don't have a system anymore, so I'm not benching anymore?

May I also tell you that nobody here represents OCN, nor has any affiliation with it, nor do we care about defending some invisible forum honor like a cult?

So please, next time you come here to talk smack about me or any other member of this thread, show some goddamn respect or don't write anything at all, because you're going to be promptly ignored.


----------



## lawson67

Ajdaho pl said:


> weleh
> I read your posts and I see that you badly need some ointment for the pain... Does it hurt that much that your scores fell so far? Now that the soft volt mod has been discovered, will anyone following the LUXX guidelines destroy those scores? Pathetic.
> Look again at what you wrote. You do not accuse anyone, you "only" insinuate: "not possible at this clock", "probably others from LUXX do it too", "I was right".
> The same goes for the guy you reported. And did you look at the next guy after him, the current first place? In one graphics test his result is 7 fps better and in the second it is worse, but never mind; the important thing is that he got all of this at even lower clocks. You are an informer, and at your request someone's probably honest result was erased. I do not know where you are from, but in my country informers have nothing to be proud of.
> 
> Yesterday, after reading your envious posts, my blood boiled. At 2 AM I ran a test following the guidelines Snake found: lower SOC allows a higher fclk, and there you go. With the same GPU clocks and the fclk raised by 50 MHz, I raised my graphics score by 76 points. You do not even know how to tune these cards; you have no idea. Now you can report me to UL "because he has too many fps". Go ahead.
> View attachment 2531762
> 
> 
> View attachment 2531763


Let's not try to make this "us vs them" (OCN vs LUXX); that's not the case. I am now a member of both forums, and we are all striving for the same thing, high GPU scores, as it's clearly something we all enjoy. We are a global community on both LUXX and OCN; the internet brings everyone together from all over the world, and whichever forum you post on most you are more likely to use that tag, while some people use no tag at all. I only found out about Hardwareluxx a few months ago and joined recently, so I have only made about four posts there, but I have learned a lot; there are clearly a lot of knowledgeable people on LUXX. Here on OCN I have personally thanked you for all the hard work and effort you have clearly put into MPT, and I don't see you as an "us" or a "them"; that kind of thinking has started wars in the past.

As for Snakeeyes' 27k score, we were all surprised when he got it. I was in fact excited that a 27k-plus score was possible from an RX 6900 XT; it was great to see, and great to see him above the 3090s. We all of course wondered how he had achieved such a huge lead over the rest of us. Then, a few days later, Snakeeyes himself honourably announced that he had doubts about his own score being legitimate, saying he believed it was a tessellation driver bug. Within hours of that announcement, Devcom suddenly threw up a 27k score almost 600 points higher than his previous best, almost as if he had worked something out from what Snakeeyes had been saying about the tessellation driver bug. He then deleted the score, which obviously looks suspicious. After that, Shirkhan came to OCN to say that all the LUXX scores are legit other than Devcom's 27k score!

From what I have seen, it appears that a very small number of people on LUXX, maybe two, know something the rest of the community does not. Whether it is legitimate or not I will leave to their consciences and to the apparent investigation by 3DMark/UL. I have been reading the Hardwareluxx forum a lot over the last couple of days, and I must say I have the utmost respect for Shirkhan; he has worked tirelessly for a couple of days trying to work out what is going on, so big props to him. As for the 74-point difference in your score, I respectfully believe such a small difference proves nothing, as 74 points is within the margin of error. Anyhow, let's all please try to keep this civil; there is no us and them, we all belong to a global community, and we all want the same thing: legitimate high GPU scores!


----------



## J7SC

lawson67 said:


> Let's not try to make this "us vs them" (OCN vs LUXX); that's not the case. I am now a member of both forums, and we are all striving for the same thing, high GPU scores, as it's clearly something we all enjoy. We are a global community on both LUXX and OCN; the internet brings everyone together from all over the world, and whichever forum you post on most you are more likely to use that tag, while some people use no tag at all. I only found out about Hardwareluxx a few months ago and joined recently, so I have only made about four posts there, but I have learned a lot; there are clearly a lot of knowledgeable people on LUXX. Here on OCN I have personally thanked you for all the hard work and effort you have clearly put into MPT, and I don't see you as an "us" or a "them"; that kind of thinking has started wars in the past.
> 
> As for Snakeeyes' 27k score, we were all surprised when he got it. I was in fact excited that a 27k-plus score was possible from an RX 6900 XT; it was great to see, and great to see him above the 3090s. We all of course wondered how he had achieved such a huge lead over the rest of us. Then, a few days later, Snakeeyes himself honourably announced that he had doubts about his own score being legitimate, saying he believed it was a tessellation driver bug. Within hours of that announcement, Devcom suddenly threw up a 27k score almost 600 points higher than his previous best, almost as if he had worked something out from what Snakeeyes had been saying about the tessellation driver bug. He then deleted the score, which obviously looks suspicious. After that, Shirkhan came to OCN to say that all the LUXX scores are legit other than Devcom's 27k score!
> 
> From what I have seen, it appears that a very small number of people on LUXX, maybe two, know something the rest of the community does not. Whether it is legitimate or not I will leave to their consciences and to the apparent investigation by 3DMark/UL. I have been reading the Hardwareluxx forum a lot over the last couple of days, and I must say I have the utmost respect for Shirkhan; he has worked tirelessly for a couple of days trying to work out what is going on, so big props to him. As for the 74-point difference in your score, I respectfully believe such a small difference proves nothing, as 74 points is within the margin of error. Anyhow, let's all please try to keep this civil; there is no us and them, we all belong to a global community, and we all want the same thing: legitimate high GPU scores!


That's a very thoughtful and thorough post . Various performance-related threads at OCN (and elsewhere) have had problems with 'trial by innuendo'. Benchmarking by definition can be very competitive, and that very competition leads to discoveries many of us benefit from, be it MPT here or the custom bios over at, for example, the 3090 thread, even for those of us who don't get into top-level benchmarking comps.

If a score stands out by a large margin, it is either a) a superb chip sample, b) other 'legal' discoveries by really dedicated folk, c) an overall great 'chain of sub-systems' as well as cooling, d) some naughty trickery, or e) some combination of these factors. But as I posted before, it is best left to the specific benchmark publishers to police the results posted on their pages. Clearly, when someone new comes along with a top score that beats others, the 'local beachmasters' (I watch a lot of nature documentaries ) will question the results. That is 'normal', but it should be a respectful and factual process rather than one driven solely by emotion; best to step back for a while before posting a response. Factual in this context means querying it with the benchmark publisher.

With the latest gens of CPUs and GPUs becoming ever more complex and algorithm-driven to extract max performance within a reasonable (stock) power budget, I suspect benchmarking discussions will get more heated (pardon the pun) down the line, as there are a lot more (even hidden) parameters that can affect results... Now back to our regular programming...


----------



## Enzarch

Well, I was able to get my hands on the reference liquid-cooled 6900 and got my water block and EVC swapped over to it from my original reference card.

Obvious, but yeah: the memory speed is a huge boon to benchmark scores, and not just at 4K, before even touching memory settings.
'Fast timings' don't seem to be as aggressive as on the 16Gbps chips, but I have barely begun testing.

The PCB is identical, and the only component change I noticed beyond the memory chips was different chokes and an additional header for the cooler (though in my rush I forgot to check the mosfets).

I won't be overvolting until I can solve my hotspot/edge delta being notably worse than on my original card. At the same power/voltage I never saw more than Δ15°C, but this one regularly exceeds Δ20°C.
Could be a bad paste job, but I doubt it. It seems the block did not sit quite as nicely on this card; maybe a slightly different Z-height on the core. I may try re-using the stock back-side retention bracket to get more even, central pressure.

As an aside, the stock LC cooler is made by CoolerMaster and appears to be very well made and designed.
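For anyone comparing mounts the same way, the edge-vs-hotspot delta check described above boils down to taking the worst-case spread across paired sensor samples. A minimal sketch (the temperature values are made-up illustrations, not readings from this card):

```python
# Compare block mounts by the worst-case hotspot-minus-edge temperature delta.
def max_delta(edge_temps, hotspot_temps):
    """Return the largest hotspot-minus-edge delta across paired samples (degrees C)."""
    return max(h - e for e, h in zip(edge_temps, hotspot_temps))

# Hypothetical logged samples under a steady load:
edge = [55, 58, 60, 61]
hotspot = [68, 75, 79, 82]
print(max_delta(edge, hotspot))  # 21 -> worse than the ~15 C delta on the old mount
```

A steadily growing delta under constant power is the usual sign of poor block contact rather than insufficient radiator capacity.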


----------



## ZealotKi11er

Enzarch said:


> Well, I was able to get my hands on the reference liquid cooled 6900, and got my water block and EVC swapped over to it from my original reference card.
> 
> Obvious, but yeah, the memory speed is a huge boon to benchmark scores, and not just for 4K, before even touching memory settings.
> 'Fast timing' doesn't seem to be as aggressive as it is on the 16Gbps chips. But I have barely begun testing.
> 
> PCB is identical, and the only component change I noticed beyond the memory chips, was different chokes, and an additional header for the cooler (though in my rush I forgot to check the mosfets)
> 
> I wont be overvolting yet until I can solve my hotspot/edge delta being notably worse than on my original card; At the same power/voltage I never saw more than Δ15°C, but this one regularly exceeds Δ20°C.
> Could be a bad paste job, but doubtful. Seems the block did not sit quite as nicely on this card, maybe slightly different Z height on the core. I may attempt to re-use the stock back-side retention bracket to get more even/central pressure.
> 
> As an aside, the stock LC cooler is made by CoolerMaster and appears to be very well made/designed.


What is the default clk for that card?


----------



## CS9K

Enzarch said:


> Well, I was able to get my hands on the reference liquid cooled 6900, and got my water block and EVC swapped over to it from my original reference card.
> 
> Obvious, but yeah, the memory speed is a huge boon to benchmark scores, and not just for 4K, before even touching memory settings.
> 'Fast timing' doesn't seem to be as aggressive as it is on the 16Gbps chips. But I have barely begun testing.
> 
> PCB is identical, and the only component change I noticed beyond the memory chips, was different chokes, and an additional header for the cooler (though in my rush I forgot to check the mosfets)
> 
> I wont be overvolting yet until I can solve my hotspot/edge delta being notably worse than on my original card; At the same power/voltage I never saw more than Δ15°C, but this one regularly exceeds Δ20°C.
> Could be a bad paste job, but doubtful. Seems the block did not sit quite as nicely on this card, maybe slightly different Z height on the core. I may attempt to re-use the stock back-side retention bracket to get more even/central pressure.
> 
> As an aside, the stock LC cooler is made by CoolerMaster and appears to be very well made/designed.


Very cool! 

I don't recall: do we have a copy of the "Reference" RX 6900 XT LC bios available? And has anyone tried flashing _that_ LC bios to a standard reference RX 6900 XT?


----------



## Enzarch

ZealotKi11er said:


> What is the default clk for that card?


Mine defaults to 2579MHz



CS9K said:


> Very cool!
> I don't recall: Do we have available, a copy of the "Reference" RX 6900 XT LC bios? And has anyone tried flashing _that_ LC bios to a standard reference RX 6900 XT?


Won't/shouldn't work, as the LC is an XTXH.
This should also be the same vBIOS that's been circulating that allows higher mem clocks, though I can upload it if requested.


----------



## CS9K

Enzarch said:


> Wont/shouldn't work, as the LC is an XTXH
> This should also be the same vBIOS thats been circulating allowing higher mem clocks. Though I can upload if requested


Right, I get that... I was more curious if there was a reference RX 6900 XT LC bios floating around yet for hellm and others to check out.

A copy of your vbios would be wonderful! 💗


----------



## Enzarch

CS9K said:


> Right, I get that... I was more curious if there was a reference RX 6900 XT LC bios floating around yet for hellm and others to check out.
> A copy of your vbios would be wonderful! 💗











vBIOS uploaded to OneDrive: 1drv.ms


----------



## CS9K

Enzarch said:


> vBIOS uploaded to OneDrive: 1drv.ms


You're the best! 💗

cc @hellm, the bios from a reference model RX 6900 XT LC


----------



## J7SC

Enzarch said:


> vBIOS uploaded to OneDrive: 1drv.ms


...Thanks  

BTW, MPT 1.3.7 final is out, and here are screenies for that LC Bios above via that MPT version:


----------



## LtMatt

Enzarch said:


> Well, I was able to get my hands on the reference liquid cooled 6900, and got my water block and EVC swapped over to it from my original reference card.
> 
> Obvious, but yeah, the memory speed is a huge boon to benchmark scores, and not just for 4K, before even touching memory settings.
> 'Fast timing' doesn't seem to be as aggressive as it is on the 16Gbps chips. But I have barely begun testing.
> 
> PCB is identical, and the only component change I noticed beyond the memory chips, was different chokes, and an additional header for the cooler (though in my rush I forgot to check the mosfets)
> 
> I wont be overvolting yet until I can solve my hotspot/edge delta being notably worse than on my original card; At the same power/voltage I never saw more than Δ15°C, but this one regularly exceeds Δ20°C.
> Could be a bad paste job, but doubtful. Seems the block did not sit quite as nicely on this card, maybe slightly different Z height on the core. I may attempt to re-use the stock back-side retention bracket to get more even/central pressure.
> 
> As an aside, the stock LC cooler is made by CoolerMaster and appears to be very well made/designed.


Nice one. Where did you get it from?

I wish I could buy one myself but the only place selling them individually is Mindfactory in Germany and they don't ship to the UK.


----------



## Enzarch

LtMatt said:


> Nice one. Where did you get it from?
> I wish I could buy one myself but the only place selling them individually is Mindfactory in Germany and they don't ship to the UK.


CyberPower PC.
I have a buddy who wants a GPU, and I needed other components for another project, so a pre-built was a good bet. I was originally just going to grab something with a 3080/6800 for him, but CyberPower had a deal to get the LC at no upcharge, and prices overall were dipping a bit at the time... Unfortunately the first GPU they sent was DOA, so here we are a month later, and my buddy will get the old 6900.


----------



## Bobb3rdown

Got my toxic air-cooled today.


----------



## LtMatt

Bobb3rdown said:


> Got my toxic air-cooled today.


I like the toxic design and colour theme, it’s nice.
The back looks almost identical to the Toxic Extreme except the back LEDs are a little different.
I am curious to know what your temp deltas are between edge and junction under 300W+ load.


----------



## Bobb3rdown

LtMatt said:


> I like the toxic design and colour theme, it’s nice.
> The back looks almost identical to the Toxic Extreme except the back LEDs are a little different.
> I am curious to know what your temp deltas are between edge and junction under 300W+ load.


Pretty toasty, tbf. Sub-10C ambient last night: around 65c edge, around 95c hotspot.

100% fan speed, +15% power limit, around 330w, 2150mhz mem. Haven't messed with voltage or clock speed yet.

Sucks that it might not be an XTXH die, though. Like another poster said earlier, the Wattman limits are no different than a Nitro+ SE card's.


----------



## Bobb3rdown




----------



## The EX1

Bobb3rdown said:


> Pretty toasty, tbf. Sub-10C ambient last night: around 65c edge, around 95c hotspot.
> 
> 100% fan speed, +15% power limit, around 330w, 2150mhz mem. Haven't messed with voltage or clock speed yet.
> 
> Sucks that it might not be an XTXH die, though. Like another poster said earlier, the Wattman limits are no different than a Nitro+ SE card's.


I contacted Sapphire support regarding it. They asked for my HWINFO screenshots and I got this reply:










Still hard for me to believe they stuck XTX in the Toxic. Maybe it is for the best? It seems like all XTXH cards (except the Devil Ultimate) have a hotspot throttle limit of 95C, which is easy to hit on air. My Toxic does boost to 2500mhz out of the box, which is still 125-ish mhz faster than my XTX Devil.

My memory doesn't stop throwing errors until I bring it down to 2100mhz, unfortunately. Games and benchmarks ran just fine at 2150mhz, but I noticed random dips in hashrate, which means the memory was unstable. How far can you push your core clock?
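The "random dips in hashrate" test above can be rough-sketched as flagging samples that fall well below the run's median rate. The 5% tolerance and the sample numbers here are arbitrary illustrations, not measured values:

```python
# Flag hashrate samples that dip noticeably below the median:
# on GDDR6, error correction silently retries, costing throughput
# instead of crashing, so dips are the visible symptom of instability.
from statistics import median

def find_dips(samples, tolerance=0.05):
    """Return indices of samples below (1 - tolerance) * median(samples)."""
    floor = median(samples) * (1 - tolerance)
    return [i for i, s in enumerate(samples) if s < floor]

rates = [62.1, 62.0, 61.9, 55.3, 62.2, 58.8, 62.0]  # MH/s, made-up numbers
print(find_dips(rates))  # [3, 5] -> two suspect dips, suggesting unstable memory
```

An empty result over a long run is a reasonable (if informal) sign the memory clock is holding; any flagged indices suggest backing the memory clock down.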


----------



## LtMatt

Bobb3rdown said:


> Pretty toasty, tbf. Sub-10C ambient last night: around 65c edge, around 95c hotspot.
> 
> 100% fan speed, +15% power limit, around 330w, 2150mhz mem. Haven't messed with voltage or clock speed yet.
> 
> Sucks that it might not be an XTXH die, though. Like another poster said earlier, the Wattman limits are no different than a Nitro+ SE card's.


Yes, that is a bit of a disappointing delta; I'm guessing it's not an ideal mount, as I like to see sub-25C or lower on air at +325W.



The EX1 said:


> I contacted Sapphire support regarding it. They asked for my HWINFO screenshots and I got this reply:
> 
> View attachment 2531980
> 
> 
> Still hard for me to believe they stuck XTX in the Toxic. Maybe it is for the best? Seems like all XTXH cards (except the Devil Ultimate) have a hotspot throttle limit of 95C which is easy to get to on air. My Toxic does boost to 2500mhz out of the box which is still 125-ish mhz faster than my XTX Devil did.
> 
> My memory doesn't stop throwing errors until I bring it down to 2100mhz unfortunately. I ran games and benchmarks just fine at 2150mhz, but I noticed random dips in hashrate which means the memory was unstable. How far can you push your core clock?


Yes, it's hard to keep XTXH on air below 95c if you go over 300W. I had a couple of XFX Merc Blacks and the cooler could not handle it and temps quickly got out of control. Air cooled cards are generally best kept to XTX and 1.175v at max.


----------



## CS9K

LtMatt said:


> Yes, it's hard to keep XTXH on air below 95c if you go over 300W. I had a couple of XFX Merc Blacks and the cooler could not handle it and temps quickly got out of control. Air cooled cards are generally best kept to XTX and 1.175v at max.


Agree. The amount of juice the (relatively small) RDNA2 die uses is a LOT of power to concentrate in such a small area. Even with the larger air coolers, wicking heat away from a _core_ that draws 300W+ by itself is tough for any heatpipe or vapor chamber.


----------



## Bobb3rdown

The EX1 said:


> I contacted Sapphire support regarding it. They asked for my HWINFO screenshots and I got this reply:
> 
> View attachment 2531980
> 
> 
> Still hard for me to believe they stuck XTX in the Toxic. Maybe it is for the best? Seems like all XTXH cards (except the Devil Ultimate) have a hotspot throttle limit of 95C which is easy to get to on air. My Toxic does boost to 2500mhz out of the box which is still 125-ish mhz faster than my XTX Devil did.
> 
> My memory doesn't stop throwing errors until I bring it down to 2100mhz unfortunately. I ran games and benchmarks just fine at 2150mhz, but I noticed random dips in hashrate which means the memory was unstable. How far can you push your core clock?


It stays under 2500mhz no matter what I do in Time Spy; I'm assuming it's power-limited. But it will boost into the 2600s in games. I was running too high of a CPU overclock compared to normal in-game, so I had some funny hitches, but undervolted and underclocked to 2000mhz @ 1v in Wattman it ran great. I think I'm going to run it at the reference game clock and just undervolt the crap out of it. 
Or rip the cooler off and liquid metal it, lol


----------



## J7SC

LtMatt said:


> Yes that is a bit of a disappointing difference, guess it's not an ideal mount as I like to see sub 25c or lower on air at +325W.
> 
> Yes, it's hard to keep XTXH on air below 95c if you go over 300W. I had a couple of XFX Merc Blacks and the cooler could not handle it and temps quickly got out of control. Air cooled cards are generally best kept to XTX and 1.175v at max.





CS9K said:


> Agree. The amount of juice the (relatively small) RDNA2 die uses is a LOT of power to concentrate in such a small area. Even with the larger air coolers, wicking the heat away from a _core_ that's using 300W+ just by itself, is tough for any heatpipe/vapor chamber to try and do.


...Yeah, once you start pushing the PL via MPT, the main concern becomes the hotspot. The pic below compares a 'new, totally stock' air-cooled card with fans on max (left) against the same card with a bit of an MPT PL boost, still on air (right); check the hotspot temps. 

Since then I have water-cooled the card (dual D5s, 1200x60+ rads) and temps have dropped by between 35C and 40C, depending on the parameter measured. I should add that when I opened the XTX card up, the factory paste spread on the die was abominable; it even had a thumbprint in it, with partially bare die underneath  ...


----------



## Bobb3rdown

J7SC said:


> ...yeah, once you start pushing PL via MPT, the main concern becomes Hotspot...pic below compares 'new, totally stock' air-cooled card w/fans on max on the left with a bit of MPT PL boost, still on air, on the right - check the Hotspot temps.
> 
> Since then, I water-cooled the card (dual D5s, 1200x60+) and temps have dropped by between 35 C and 40 C, depending on the parameter measured. I should add that when I opened the XTX card up, the factory paste spread on the die was abominable - even had a thumbprint on it, partially bare die underneath  ...
> 
> View attachment 2531993


Makes me want to pull the cooler and just slather Kryonaut all over the die lmao. Worked on my Nitro+ 5700 XT SE, especially around the edges of the die.


----------



## Bobb3rdown

I suppose someone should let TechPowerUp know as well, so people don't buy these cards based on them saying it's an XTXH die like I did lol


----------



## J7SC

Bobb3rdown said:


> I suppose someone should let techpowerup know as well so people dont buy these cards based off them saying its an xtxh die like I did lol


...was it advertised as XTX*H*? ...or perhaps the old (but legal / fine-print) switcheroo after all the initial reviews were in? Anyway, if you decide to keep it, I would definitely work on the cooling, then throw some extra PL at it.


----------






## The EX1

Bobb3rdown said:


> Makes me want to pull the cooler and just slather kryoknaut all over the die lmao. Worked on my nirto+ 5700xt se. Especially around the edges of the die.


Well maybe the Toxic's have something special to offer. Both of ours boost really high for XTX cards.

GC Extreme has become popular in this group since it is a bit thicker. A few people have had worse temps over time with the thinner compounds. If your card isn't throttling then I wouldn't bother opening it up. Hotspot temps in the 90s on air is normal.


----------



## Bobb3rdown

I wouldn't say advertised, but they have it listed as such in the GPU database. I'll keep it. Might just keep it undervolted and underclocked in the meantime; it's more than plenty enough fps.


----------



## J7SC

The EX1 said:


> Well maybe the Toxic's have something special to offer. Both of ours boost really high for XTX cards.
> 
> GC Extreme has become popular in this group since it is a bit thicker. A few people have had worse temps over time with the thinner compounds. If your card isn't throttling then I wouldn't bother opening it up. Hotspot temps in the 90s on air is normal.


For the bigger dies such as 6900XT, I also used the thicker GC Extreme...it's a bit of a pain to apply (compared to Kryonaut anyway), but longer term temp stability is worth it.


----------



## CS9K

J7SC said:


> For the bigger dies such as 6900XT, I also used the thicker GC Extreme...it's a bit of a pain to apply (compared to Kryonaut anyway), but longer term temp stability is worth it.


I meant to come back and post exactly this! You beat me to it 

I _did_ finally take video of how I apply Gelid GC-Extreme for those that would feel more comfortable with a guide. I need to pull it off of my phone, slap a narration over top of it, and upload it so that yall can see it. It's not a difficult process, just tedious.


----------



## The EX1

CS9K said:


> I meant to come back and post exactly this! You beat me to it
> 
> I _did_ finally take video of how I apply Gelid GC-Extreme for those that would feel more comfortable with a guide. I need to pull it off of my phone, slap a narration over top of it, and upload it so that yall can see it. It's not a difficult process, just tedious.


I just use the spreader that comes with it and evenly coat the die. Then I put a small layer on the coldplate of the cooler. What do you do?


----------



## CS9K

The EX1 said:


> I just use the spreader that comes with it and evenly coat the die. Then I put a small layer on the coldplate of the cooler. What do you do?


I do exactly as you do with the die. I don't put any on the cold plate of the cooler.

The concept of spreading paste out flat, by hand, using the spreader, has confused and/or confounded a few people... especially when I insist that it _needs_ to be flat, but that there also has to be enough paste that you can't see the die through it.

Perhaps I'm not the best with words, so pictures will help folks see what I've explained in the past. I've applied GC-Extreme poorly before, and hotspot temps are not forgiving of a bad/uneven paste job with GC-Extreme.


----------



## J7SC

CS9K said:


> I meant to come back and post exactly this! You beat me to it
> 
> I _did_ finally take video of how I apply Gelid GC-Extreme for those that would feel more comfortable with a guide. I need to pull it off of my phone, slap a narration over top of it, and upload it so that yall can see it. It's not a difficult process, just tedious.


...sorry about the ninja... in any case, I've done a few Threadrippers et al with their giant IHS; rice-kernel-sized paste deposits need not apply, especially not with 8 little chiplets plus 1 I/O die...

IMO, the bigger the die, the higher the chance of some 'unevenness' making its presence felt against the cooler's coldplate, so the thicker the paste, the better, if that makes any sense 


Spoiler: goop incorporated


----------



## The EX1

Played around a bit more with the Air Toxic. I upped the min memory SoC voltage from 800 to 825mv and the memory stabilized from 2110 to 2120. Bumped SoC to 850 and the memory held at 2130. Anyone know of any dangers bumping SoC up for 24/7 use?


----------



## kairi_zeroblade

Was backreading and whew.. drama.. though I did see a ton of German peeps from Luxx submit 3DMark scores for rankings with the "tessellation payload modified" watermark from 3DMark, and HWBot stupidly accepts them.. lol..


----------



## The EX1

Looks like the stock Vmin for SoC on my RX 6800 is way up at 950mv. That is a 150mv difference between that card and my 6900 XT. I guess the base 6800s require more power.


----------



## ZealotKi11er

The EX1 said:


> Looks like the stock Vmin for SoC on my RX 6800 is way up at 950mv. That is a 150mv difference between that card and my 6900 XT. I guess the base 6800s require more power.


The 6800 has fewer phases on the SoC rail, so it probably can't handle vdroop as well.


----------



## cfranko

Does liquid metal on copper(not nickel plated) only stain the heatsink or does the performance of the LM also degrade over time?


----------



## Bobb3rdown

cfranko said:


> Does liquid metal on copper(not nickel plated) only stain the heatsink or does the performance of the LM also degrade over time?


As long as it's actually copper it just stains it


----------



## Bobb3rdown

Doing some undervolting tonight on the toxic air-cooled. 2550mhz on the slider, 1130mv. Stays locked at 2500mhz and has a 20deg separation of temps, 65c and 85c. Right around 250w power draw. With a locked 65% fan speed.
The fans on this thing are super quiet.

At least in the couple games I played. Crashed in timespy. Just downloaded metro exodus enhanced. Gonna try out ray tracing for the first time.


----------



## LtMatt

Bobb3rdown said:


> Doing some undervolting tonight on the toxic air-cooled. 2550mhz on the slider, 1130mv. Stays locked at 2500mhz and has a 20deg separation of temps, 65c and 85c. Right around 250w power draw. With a locked 65% fan speed.
> The fans on this thing are super quiet.
> 
> At least in the couple games I played. Crashed in timespy. Just downloaded metro exodus enhanced. Gonna try out ray tracing for the first time.


You could perhaps get a better delta and improved temps with a repaste.

Ray Tracing is a big fat nothing burger for the most part so far.


----------



## Enzarch

cfranko said:


> Does liquid metal on copper(not nickel plated) only stain the heatsink or does the performance of the LM also degrade over time?





Bobb3rdown said:


> As long as it's actually copper it just stains it


To elaborate, the copper will 'absorb' (alloy with) some of the LM, so it will "dry out"; plan on reapplying LM fairly soon after the first application. But once the copper is stained it won't absorb any more gallium, and it will last a long time without any degradation. Do not put off that repaste, though: if it gets too 'dry' it can be difficult to remove and can cause some light pitting.

The staining will not affect the performance of the copper to any measurable degree, and the surface can be repolished if needed, though the alloying isn't just on the surface.


----------



## ZealotKi11er

Bobb3rdown said:


> Doing some undervolting tonight on the toxic air-cooled. 2550mhz on the slider, 1130mv. Stays locked at 2500mhz and has a 20deg separation of temps, 65c and 85c. Right around 250w power draw. With a locked 65% fan speed.
> The fans on this thing are super quiet.
> 
> At least in the couple games I played. Crashed in timespy. Just downloaded metro exodus enhanced. Gonna try out ray tracing for the first time.


If you increase the slider and lower the voltage, you are not under-volting with the slider. Check what the voltage is during the game run.


----------



## zGunBLADEz

LtMatt said:


> You could perhaps get a better delta and improved temps with a repaste.
> 
> Ray Tracing is a big fat nothing burger for the most part so far.


Usually Time Spy/Firestrike or other heavy-power apps, including RT, require around 100MHz+ less than, say, the Heaven benchmark, which is the one I loop for hours to catch voltage glitches and test overclocks. So whatever is stable in Heaven should be stable in a heavy-power game or RT at -100MHz... your mileage may vary.

Also, I notice RT doesn't like a memory overclock with fast timings; you will see artifacts or just straight crashing. I was testing on Control and Shadow of the Tomb Raider: Control is an automatic crash, and Tomb Raider will show artifacts if the memory is pushed too far.

I didn't add any more voltage to the SoC while testing.
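The rule of thumb above boils down to a one-liner; here's a toy Python sketch of it (the 100MHz margin and the example clocks are just illustrative, and as noted, your mileage may vary):

```python
# Illustrative sketch of the derating heuristic above: take the highest
# core clock that loops Heaven for hours, then subtract a safety margin
# (~100 MHz per the post) before running heavier loads such as Time Spy,
# Firestrike, or ray-traced games. Margin and clocks are assumptions.

def derated_clock(heaven_stable_mhz: int, margin_mhz: int = 100) -> int:
    """Core clock to try under heavier loads, given a Heaven-stable clock."""
    return heaven_stable_mhz - margin_mhz

# A card that loops Heaven at 2650 MHz would be tried at ~2550 MHz
# for Time Spy or ray-traced games.
print(derated_clock(2650))  # 2550
```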


----------



## Bobb3rdown

ZealotKi11er said:


> If you increase the slider and lower the voltage, you are not under-volting with the slider. Check what the voltage is during the game run.


Will do. At stock settings it's at 2538MHz on the slider and 1175mv, so I figured it undervolted some.


----------



## Kawaz

long time lurker. 

Finally broke that 25k, even though it took 1.3V


----------



## Bobb3rdown

I scored 20 109 in Time Spy 

My best so far. My CPU score used to be higher, but I can hardly crack 13k these days, so I leave it at 4.7GHz at 1.2625v.


----------



## J7SC

LtMatt said:


> You could perhaps get a better delta and improved temps with a repaste.
> 
> *Ray Tracing is a big fat nothing burger for the most part so far.*


...'mostly' yes, but in games such as Cyberpunk 2077, full-on ray tracing is gorgeous. The 6900XT still loses out to the 3090 there, but a well-tuned / OC'ed 6900XT can get close to a 3080... not too shabby, with ray tracing (along with some 4K) being the 6900XT's "weakest" performance areas.



Spoiler


----------



## CS9K

zGunBLADEz said:


> usually timespy/firestrike or heavy power apps including RT requires beetween 100mhz+ less than lets say heaven benchmark which is the one i loop for hrs to catch voltage glitches and test overclocks.. So wathever is stable in heaven should be stable in heavy power game or RT with -100mhz less... your mileage may vary.
> 
> Also i notice RT doesnt like memory overclock with fast timing you will see artifacts or just straight crashing.. i was testing on control and shadow of the tomb raider.. control is an automatic crash. tomb raider will show artifacts if memorry is down south..
> 
> now i didnt add no more voltage to the soc while testing it..


I've mentioned this before, but I'll mention it again:

Throughout all of the testing I've done with my prior RX 6800 XT, and current RX 6900 XT: If your settings are stable in Time Spy and Port Royal, then you _will_ be stable in _everything_ else.

Some games _can_ run at settings higher than what TS/PR are stable at, but in my experience, running settings higher than TS/PR, all ultimately led to GPU-stability-based crashes.

TL;DR: Running settings higher than what you are stable at in TS/PR means that you _aren't actually_ stable, and just haven't crashed _*yet*_.

*Edit: I say this not to nay-say, but in hopes of saving you some trouble down the road should you end up finding instability in your settings. I'm old and tired... when I'm in the mood to tinker, it's great, but when I'm in the mood to sit down, turn off the brain, and play some games, I want my PC to _just work_.


----------



## Enzarch

Alright, I reamed the stock back-side spring retention plate so I can use it with the EK block, this improved my hotspot/edge delta by ~5° @420W. 

Memory is stable up to about 2420 MHz fast timing at stock voltages. Will begin pushing mem and controller volts up this weekend see if I can get it to a nice round 2500

Core seems 'meh', but it's hard to tell if that's silicon or the reference VRM. Not sure how hard I will push this, as I am a little leery about pushing much more than 420W through the reference PCB. Can anyone alleviate these concerns?


----------



## CS9K

Enzarch said:


> Alright, I reamed the stock back-side spring retention plate so I can use it with the EK block, this improved my hotspot/edge delta by ~5° @420W.
> 
> Memory is stable up to about 2420 MHz fast timing at stock voltages. Will begin pushing mem and controller volts up this weekend see if I can get it to a nice round 2500
> 
> Core seems 'meh', but hard to tell if thats silicon or the reference VRM. Not sure how hard I will push this, as I am a little leery about pushing much more than 420W through the reference PCB. Can anyone alleviate these concerns?


Personally, 420W is where I set the power target limit when benching for stability. The VRM setup is great on the reference PCB (AMD kept with their typical/historical slight-overbuild of power delivery for the reference design), though if you ask Buildzoid, the input filtering capacitor setup isn't the best, and I would much prefer there be a third PCIe 8-pin power connector (and subsequent beefier power plane on the PCB) if I were going to let the GPU use more than 420W. I lower the power target limit to 400W for daily-driving.

That's just me though.
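For anyone weighing the same tradeoff, the nominal spec figures are easy to sanity-check: each PCIe 8-pin connector is rated for 150W and the x16 slot for 75W, so a two-connector reference card has a 375W nominal budget. A quick sketch (spec numbers only; it says nothing about what the PCB or connectors actually tolerate in practice, which is routinely more):

```python
# Nominal PCIe power budget vs. a target power limit. Spec figures:
# 150 W per 8-pin connector, 75 W from the x16 slot. This only shows
# how far a given limit sits relative to the nominal budget.

PCIE_8PIN_W = 150
PCIE_SLOT_W = 75

def nominal_budget(eight_pin_connectors: int) -> int:
    return eight_pin_connectors * PCIE_8PIN_W + PCIE_SLOT_W

def over_spec_watts(target_w: int, eight_pin_connectors: int = 2) -> int:
    """Watts by which a target power limit exceeds the nominal budget
    (negative means within spec)."""
    return target_w - nominal_budget(eight_pin_connectors)

print(nominal_budget(2))        # 375
print(over_spec_watts(420))     # 45  -> 420 W is 45 W over the 2x8-pin budget
print(over_spec_watts(420, 3))  # -105 -> well within a 3x8-pin budget
```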


----------



## Enzarch

CS9K said:


> Personally, 420W is where I set the power target limit when benching for stability. The VRM setup is great on the reference PCB (AMD kept with their typical/historical slight-overbuild of power delivery for the reference design), though if you ask Buildzoid, the input filtering capacitor setup isn't the best, and I would much prefer there be a third PCIe 8-pin power connector (and subsequent beefier power plane on the PCB) if I were going to let the GPU use more than 420W. I lower the power target limit to 400W for daily-driving.
> That's just me though.


Yeah, it seems like you have come to pretty similar conclusions to me. Just unfortunate that 420W leaves basically no room for over-volting; even +15mV hits the limit so hard it causes performance loss.

I suspect the mediocre power filtering is more to blame for middling core clocks than the silicon, though this XTXH die still definitely clocks better than my old XTX at equal voltage.

I only run 16ga PCIe cables, so I'm pretty comfortable with anything this card could pull through 2x 8-pins. I am sure I will get brave when I get bored and bump to 475W, at least for some benching runs.


----------



## CS9K

Enzarch said:


> Yeah, seems like you have come to pretty similar conclusions to me, Just unfortunate that 420W leaves basically no room for over-volting, even just +15mV hits limit so hard it causes performance loss.
> 
> I suspect the mediocre power filtering is more to blame for middling core clocks than the silicon, though this xtxh die still definitely clocks better than my old xtx at equal voltage.
> 
> I only run 16ga PCIe cables, so im pretty comfortable with anything this card could pull through 2x8-pins, I am sure I will get brave when I get bored and bump to 475W just for some benching runs at least.


I too made a set of 16ga power cables for my GPU, since Seasonic interestingly still only uses 80C 18ga wire for their PCIe cables (even on the Titanium PSU, grr). That said, most cards pulling north of 400W have three PCIe power cables, so I suppose I am an edge case here. 

In benchmarks, a voltage increase has little headroom, but I rarely go north of 350W in games, so the extra 25mV that got me another 100MHz of core clock would be a welcome addition to in-game performance.


----------



## LtMatt

Enzarch said:


> To elaborate, the copper will 'absorb' (alloy with) some of the LM, so it will "dry out". Therefore plan on doing a reapplication of LM soon. But once the copper is stained it wont absorb any more gallium, and will last a long time without any degradation. Do not put off this repasting, if it gets too 'dry' it can be difficult to remove and cause some light pitting.
> 
> This staining will not effect the performance of the copper to any measurable degree. And can be repolished if needed since its not just on the surface.


I just had to repaste my Toxic Extreme, as the temperatures had climbed around 10c since my initial liquid metal application.

I used liquid metal again and put a little more on this time.

Managed to reduce temps by 10c and bring the delta between edge and junction down from 22c to 17c at a 400W+ load with a 35% fan speed.
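A rough way to catch this drift early is to log the edge-to-junction delta under a repeatable load and compare it against the delta measured right after a fresh application; once it creeps well past that baseline, a repaste is due. A toy sketch (the 4c tolerance is made up; pick your own threshold):

```python
# Toy repaste indicator: flag when the junction-to-edge delta under a
# fixed load has drifted more than `tolerance_c` above the baseline
# measured right after the last (re)paste. Numbers are illustrative.

def repaste_due(baseline_delta_c: float, current_delta_c: float,
                tolerance_c: float = 4.0) -> bool:
    return current_delta_c - baseline_delta_c > tolerance_c

# Per the post above: a fresh-paste delta of 17c creeping up to 22c.
print(repaste_due(17.0, 22.0))  # True, time to repaste
print(repaste_due(17.0, 19.0))  # False, still within tolerance
```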


----------



## weleh

People with EVC on their cards, I just posted my "secret" on Luxx forum.

If you have an EVC and want to be competitive on the leaderboard this trick is for you. If you don't care, don't bother.


----------



## ZealotKi11er

weleh said:


> People with EVC on their cards, I just posted my "secret" on Luxx forum.
> 
> If you have an EVC and want to be competitive on the leaderboard this trick is for you. If you don't care, don't bother.


I got EVC but did not bother after we learned about the vmin trick. Want to share here too?


----------



## jonRock1992

Has anyone here tried out battlefield 2042 yet? I'm getting serious stability issues with that game. Lots of stutters and I had to lower my overclock by 70MHz and increase voltage. This game makes Timespy GT2 look like nothing.


----------



## LtMatt

jonRock1992 said:


> Has anyone here tried out battlefield 2042 yet? I'm getting serious stability issues with that game. Lots of stutters and I had to lower my overclock by 70MHz and increase voltage. This game makes Timespy GT2 look like nothing.


Could it be anything to do with the new driver, perhaps? I noticed that I had to lower my GPU overclock by 100MHz in Far Cry 6, otherwise it eventually crashes. Not sure why, as the game does not even draw much power. Has anyone else found similar in Far Cry 6, playing 4K max settings?

I can run a 100Mhz+ higher Timespy stress test and pass however so not sure what is going on there.


----------



## kairi_zeroblade

jonRock1992 said:


> Has anyone here tried out battlefield 2042 yet? I'm getting serious stability issues with that game. Lots of stutters and I had to lower my overclock by 70MHz and increase voltage. This game makes Timespy GT2 look like nothing.


Same, though I am on a 6800XT.. I didn't lower my usual overclocks, as I had no crashes in any game with my settings; it's just that BF2042 is a stuttery hell. I am running the latest 21.11.2 drivers..


----------



## weleh

jonRock1992 said:


> Has anyone here tried out battlefield 2042 yet? I'm getting serious stability issues with that game. Lots of stutters and I had to lower my overclock by 70MHz and increase voltage. This game makes Timespy GT2 look like nothing.


That game is ****ed for RDNA2 GPUs, bunch of people on reddit complaining.


----------



## LtMatt

weleh said:


> That game is ****ed for RDNA2 GPUs, bunch of people on reddit complaining.


Yep just saw that, lot of Nvidia users too complaining looks like a game issue.


----------



## ZealotKi11er

LtMatt said:


> Yep just saw that, lot of Nvidia users too complaining looks like a game issue.


Yes, same issue with my 3080. I had to lower my OC 50MHz so far and I'm still getting crashes. The game is fun but very buggy. What sucks is that you can't really wait for the game to get better, because the population will drop and the players remaining will be much better and better equipped.


----------



## jonRock1992

ZealotKi11er said:


> Yes, same issue with my 3080. I had to lower OC 50MHz so far and still getting crashes. Game is fun but very buggy. What sucks is that you cant really wait for the game to get better because the population will drop and the players remaining will be much better and better equipped.


I kept getting DirectX driver crashes until I lowered my OC. I can only run 2700MHz min / 2800MHz max clocks in this game. I'm also running 2220MHz default timings for the memory. Does anyone know how 2220MHz default timings compare to 2150MHz fast timings? I just went with the auto memory OC value of 2220MHz in case 2150 FT wasn't stable in this game.
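On raw bandwidth alone, the comparison is simple arithmetic. Here's a back-of-envelope sketch, assuming Wattman's reported clock maps to an 8x per-pin data rate (true for the 6900 XT's 16Gbps GDDR6 at ~2000MHz) and a 256-bit bus; it captures bandwidth only, since fast timings lower latency, which this can't show:

```python
# Raw GDDR6 bandwidth at a given Wattman memory clock. GDDR6 moves
# 8 bits per pin per reported "memory clock"; the 6900 XT bus is 256-bit.
# Bandwidth only -- timing (latency) differences are invisible here.

BUS_WIDTH_BITS = 256

def bandwidth_gbs(mem_clock_mhz: int) -> float:
    # MHz * 8 transfers/clock * bytes moved across the bus per transfer,
    # scaled to GB/s
    return mem_clock_mhz * 8 * (BUS_WIDTH_BITS // 8) / 1000

print(bandwidth_gbs(2220))  # 568.32 GB/s at 2220 MHz (default timings)
print(bandwidth_gbs(2150))  # 550.4 GB/s at 2150 MHz (fast timings)
```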


----------



## Maulet//*//

Hi guys!

What hotspot temps are you getting with BF2042 at 4K and ultra settings?
After 2 days of not having a single problem, today I am hitting 111ºC and some DirectX lockups as a consequence... considering the Conductonaut route


----------



## LtMatt

jonRock1992 said:


> I kept getting DirectX driver crashes until I lowered my OC. I can only run 2700MHz min / 2800 MHz max clocks in this game. I'm also running 2220 MHz default timings for the memory. Does anyone know how 2220 MHz default timings compares to 2150 MHz fast timings? I just went with the auto memory OC value of 2220 MHz just in case 2150 FT wasn't stable in this game.


Are you now on the LC BIOS? What resolution are you using?


----------



## ZealotKi11er

Maulet//*// said:


> Hi guys!
> 
> What hotspot temps are you getting with BF2042 at 4K and ultra settings?
> After 2 days not having a single problem, today I am hitting 111ºC and some direct X lockups as a consequence.... Considering the Conductonaut route


Hotspot hitting 111c should not cause that much instability; your OC is probably not stable. Also, right now it seems like there is an issue with BF, so give it some time.


----------



## LtMatt

Well men, it happened (if anyone cares ). My Indian summer is over, as Mantiz (I think from Luxx forum) has beaten my No1 FireStrike Extreme score with Alderlake.

I am in the process of selling my Toxic, so the score is safe and not sure I could beat it anyway without using silly voltage on both CPU and GPU.

Moving back to a reference 6900 XT all being well, so the days of balls to the wall benching are about to end.


----------



## J7SC

LtMatt said:


> Well men, it happened (if anyone cares ). My Indian summer is over, as Mantiz (I think from Luxx forum) has beaten my No1 FireStrike Extreme score with Alderlake.
> 
> I am in the process of selling my Toxic, so the score is safe and not sure I could beat it anyway without using silly voltage on both CPU and GPU.
> 
> Moving back to a reference 6900 XT all being well, so the days of balls to the wall benching are about to end.


Well, you held it for 1.5 months which is saying something. Also, the difference is only just under 120 points, which could lead to a renewed benchmark itch  

Of course, there's life after benching... and building / running a nice system w/ a 5950X and reference 6900XT is nothing to sneeze at, and less stressful.


----------



## LtMatt

J7SC said:


> Well, you held it for 1.5 months which is saying something. Also, the difference is only just under 120 points, which could lead to a renewed benchmark itch
> 
> Of course, there's life after benching....and building / running a nice systems w/ a 5950X and reference 6900XT is nothing to sneeze at , and less stressful.


Very true my man.


----------



## snakeeyes111

LtMatt said:


> Well men, it happened (if anyone cares ). My Indian summer is over, as Mantiz (I think from Luxx forum) has beaten my No1 FireStrike Extreme score with Alderlake


Wait for my 12900K result. Hope I can bench next weekend. 














Graphics score is better, maybe I can reach him


----------



## weleh

Jesus, 12900K is a beast for 3D Mark...


----------



## Maulet//*//

I disassembled my Red Devil Ultimate 6900XTX and it was as easy as it gets, lots of screws, alas. 
Unfortunately I didn't have any liquid metal left in the syringe... so I rebuilt with Arctic MX-4, also on some damaged pads, unorthodox as that may be 

back to BF2042 testing


----------



## ZealotKi11er

weleh said:


> Jesus, 12900K is a beast for 3D Mark...


That is why I got it. At stock with no XMP I score 19.3K, which I could not do with an OCed 5950X @ 4.9/4.8GHz


----------



## LtMatt

Maulet//*// said:


> I disensembled my Red Devil Ultimate 6900XTX and it was as easy as it gets, lots of screw, alas.
> Unfortunately I didn't have liquid metal inside the siringe.... So rebuild with Arctic MX-4... also on some hurt pads, may it be as unorthodox as it may be
> 
> back to bf2042 testing


BF2042 seems to like RDNA2.
Battlefield 2042 | 6900 XT vs RTX 3090 | 1080p 1440p 4K 5K Comparison - YouTube


----------



## Maulet//*//

Nice  
(with all reservations, because the YT account is called 6900XT and the scene is oh-so-unrepresentative... or on purpose?)


----------



## LtMatt

Maulet//*// said:


> Nice
> (with all reservations because the YT account is called 6900XT and the scene oh so non representative (or on purpose?)


He has a Nvidia channel too and this was tested with his friend (running 3090) side by side in the same server. 

Tbf it's pretty hard to bench this kind of game with all the variables of what can happen online so this is about as accurate as it gets.


----------



## Maulet//*//

That's true. Maybe a 1 versus bots match could be used for benching, as SP is lacking...


----------



## lawson67

LtMatt said:


> Well men, it happened (if anyone cares ). My Indian summer is over, as Mantiz (I think from Luxx forum) has beaten my No1 FireStrike Extreme score with Alderlake.
> 
> I am in the process of selling my Toxic, so the score is safe and not sure I could beat it anyway without using silly voltage on both CPU and GPU.
> 
> Moving back to a reference 6900 XT all being well, so the days of balls to the wall benching are about to end.


Why are you selling your toxic and returning to a reference card Matt?


----------



## kairi_zeroblade

LtMatt said:


> Well men, it happened (if anyone cares ). My Indian summer is over, as Mantiz (I think from Luxx forum) has beaten my No1 FireStrike Extreme score with Alderlake.
> 
> I am in the process of selling my Toxic, so the score is safe and not sure I could beat it anyway without using silly voltage on both CPU and GPU.
> 
> Moving back to a reference 6900 XT all being well, so the days of balls to the wall benching are about to end.


You can beat his score by disabling SMT, adding another 2k to that CPU score, and you can probably increase the overclock that way too..

edit: ohh, it was FS Extreme.. thought it was TS..


----------



## LtMatt

lawson67 said:


> Why are you selling your toxic and returning to a reference card Matt?


I want to move back to an air cooled card as with my Phanteks P600S case, I have to keep taking the top part of the case off (noise dampening cover) to let the rad exhaust the heat.

It's a bit of a pain and I'd rather just keep the cover on 24/7 and use an air cooled GPU.

I had a reference card initially and I really liked it, even though it was not great for overclocking. So for gaming I just want to move back to that.

Had a Merc Black before and that was a good card and decent cooler, but I want the reference design. Managed to snag one, but have to hope the delta between edge and junction is not too bad.


----------



## Bobb3rdown






Some playing around with mpt and firestrike.
64112 graphics score on air. And not bouncing off 110c like it does in timespy.

Apparently with the final version of mpt they fixed the bug for voltage. Can now set max voltage to 1.2v verified with gpuz. Also got new thermal pads gelid 12k and gelid gc extreme. Not sure when I'll take the card apart. But it's gonna happen at some point.


----------



## Bobb3rdown

The Bykski block came out for the Toxic cards and Nitro+ SE 6900XT. Snagged one up; not many in stock, only 10 when I ordered.


----------



## Ajdaho pl

kairi_zeroblade said:


> was backreading and whew..drama..though I did saw a ton of german peeps from Luxx submit 3dmark scores for rankings with the "tessalation payload modified" watermark from 3dmark and HWBot stupidly accepts them..lol..


Can I please get an example?


----------



## kairi_zeroblade

Ajdaho pl said:


> can i Please some example ?


The one that I saw a few days ago took off his post on HWBot..gonna check it out if I can see more..


----------



## Bobb3rdown

The EX1 said:


> Played around a bit more with the Air Toxic. I upped the min memory SoC voltage from 800 to 825mv and the memory stabilized from 2110 to 2120. Bumped SoC to 850 and the memory held at 2130. Anyone know of any dangers bumping SoC up for 24/7 use?


Did you notice that in the MPT final edition you can up the voltage to 1.2v? I can get 2650MHz out of it and that's all, she won't go any higher. Temps in the 100s at that mark though.


----------



## CS9K

Bobb3rdown said:


> Did you notice in the mpt final edition you can up the voltage to 1.2v? I can get 2650mhz out of it and that's all. She won't go any higher. In the 100s at that mark though.


The SOC max voltage?


----------



## Bobb3rdown

CS9K said:


> The SOC max voltage?


Gfx max under voltage control parameters

Soc max is 1150


----------



## CS9K

Bobb3rdown said:


> Gfx max under voltage control parameters
> 
> Soc max is 1150


Interesting. I saw someone mention this, but didn't realize.

I did give it a try, and while the driver appeared to behave normally when I set 1200mV in Adrenalin, the instant I put it under any 3D load the driver realizes something is wrong and does the normal "safe mode" 500MHz fallback. I got excited for a moment


----------



## J7SC

...after some delay, I finally got all the components for the TT Core P8 dual work-play system for the home office hooked up (3950X, 6900XT + 5950X, 3090 Strix). That was one complex build project , what with two of everything, ie. start / reset, USB etc. Quick peek below, before final RGB clean-up and better pics...


----------



## Bobb3rdown

CS9K said:


> Interesting. I saw someone mention this, but didn't realize.
> 
> I did give it a try, and while the driver appeared to act normal when I set 1200mV in Adrenalin, the instant I put it under any 3d load, the driver realizes something is wrong and does the normal "safe mode" 500MHz fallback. I got excited for a moment


Interesting. I'm wondering if this is an XTXH die, but a low-binned one? I definitely see some throttling around the 90c junction mark; not a huge amount, but definitely some. The voltage obviously has a mind of its own: if you trick it, it will peg at 1.175. But voltage doesn't throttle, just the core clocks, by about 50-100MHz


----------



## CS9K

Bobb3rdown said:


> Interesting. I'm wondering if this is an xtxh die but a low binned one? I definitely see some throttling around the 90c junction mark. Not a huge amount. But definitely some. The voltage obviously has a mind of its own if you trick it it will peg at 1.175. But voltage doesn't throttle. Just the core clocks by about 50-100mhz


Hmm, I wonder. My experience with MPT just now mirrored every other way I've tried to modify voltage or memory speed: the 500MHz limp mode (except for temp-based vmin, that worked).

With how neurotic modern Radeon drivers are, it will for sure take BIOS modding before we get full control of XTX GPUs.


----------



## J7SC

CS9K said:


> Hmm, I wonder. My experience with MPT just now mirrored every other way I've tried to modify voltage or memory speed: the 500MHz limp mode (except for temp-based vmin, that worked).
> 
> *With how neurotic modern Radeon drivers are*, it will for sure take bios-modding before we get full control of XTX GPU's.


...I wonder what this will bring in terms of neurotic driver access tie-down (note the graphics driver reference further down)


----------



## Bobb3rdown

CS9K said:


> Hmm, I wonder. My experience with MPT just now mirrored every other way I've tried to modify voltage or memory speed: the 500MHz limp mode (except for temp-based vmin, that worked).
> 
> With how neurotic modern Radeon drivers are, it will for sure take bios-modding before we get full control of XTX GPU's.


I've played around with a 6600 and a 6600 XT and was never able to do that before. The final version also has a slightly different layout. By all accounts (the two I've seen in hand), the air-cooled Toxic is an XTX chip, but this makes me wonder. Under the frequency tab my GFX max clock is 2660MHz, and I'll be damned if I can get past that, even with 1.2V and 330W +15%.


----------



## Bobb3rdown

Can anyone with an XTXH card verify whether it says XTXH in CPU-Z or HWiNFO64? I know there are a couple of places that show the die type: CPU-Z under graphics codename, and HWiNFO64 under the summary GPU codename as well. Thanks.


----------



## Ajdaho pl

kairi_zeroblade said:


> The one that I saw a few days ago took off his post on HWBot..gonna check it out if I can see more..


You wrote earlier that you found *tons* of results from LUXX with a tessellation error, when in fact you found one result, posted by mistake by ApolloX: he was checking the tessellation error and accidentally posted a link to the run with modified tessellation. Which, by the way, you reported to UL and HWBot, and they did nothing anyway, because the result was clearly marked as unconfirmed due to the tessellation error.
The sad thing is that no one at LUXX even thinks about hiding what we have achieved in terms of OCing Navi. No one thought about hiding how to bypass the voltage limit in MPT, and yet there are always envious people who have achieved nothing themselves and only accuse others.


no one has ever gotten better overclock results from envy and hatred


----------



## Maulet//*//

@Bobb3rdown GPU-Z/Advanced tab/AMD BIOS: Bootup Message NAVI21 Gaming XTXH D41213 L9319OA1.SMG 2021


----------



## kairi_zeroblade

Ajdaho pl said:


> Which, by the way, you reported to UL and HWBot, and they did nothing anyway, because the result was clearly marked as unconfirmed due to the tessellation error


I just have a spare bench up my sleeve, that's it... I don't hate anyone over e-peen numbers, lol. With how things are looking right now, it's not even fun doing benches anymore.


----------



## Bobb3rdown

Maulet//*// said:


> @Bobb3rdown GPU-Z/Advanced tab/AMD BIOS: Bootup Message NAVI21 Gaming XTXH D41213 L9319OA1.SMG 2021


Appreciate it. The air-cooled Toxic is indeed an XTX die: NAVI21 Gaming XTX D412.


----------



## CS9K

Bobb3rdown said:


> Appreciate it. The air-cooled Toxic is indeed an XTX die: NAVI21 Gaming XTX D412.


My reference GPU shows the same.


----------



## Bobb3rdown

CS9K said:


> My reference GPU shows the same.


I scored 44,206 in Fire Strike. Getting some good scores though; managed to get 91st. Still haven't tuned the RAM yet: 3200MHz CL14 running at 3800MHz CL14. Just dropped tRAS to 32 from the XMP 14-14-14-34.
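For reference, the math on why running the same CL at a higher data rate is a straight win: absolute CAS latency is cycles divided by the actual clock. A quick sketch (generic DDR arithmetic, not anything specific to this kit):

```python
# Absolute CAS latency in nanoseconds. The real clock is half the DDR
# data rate, so latency_ns = CL / (data_rate/2 MHz) * 1000.
def cas_ns(data_rate_mts: float, cl: int) -> float:
    return cl * 2000 / data_rate_mts

print(cas_ns(3200, 14))  # 8.75 ns at the XMP settings
print(cas_ns(3800, 14))  # ~7.37 ns at 3800 CL14
```

Same CL14, but ~1.4ns less actual latency at 3800.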


----------



## SoloCamo

Waiting on my 6900 XT to arrive as a replacement for my soon-to-be-sold Radeon VII. Went with a reference model. Any tips for someone who has been using GCN-based AMD GPUs since 2012 (7970 GHz Edition, 290X, RX 580 8GB, Vega 56, Vega 64, and Radeon VII)?

I play at 4K and see this is the card's weak point vs the 3080/3090. Is it very bandwidth constrained? Is memory OCing worth it? I'm actually more interested in undervolting as well, just to keep temps/power draw lower if I can retain stock clocks.


----------



## Maulet//*//

I also play at 4K on a 32" 60Hz monitor, and with the 6900 XTX, BF2042 at ultra is exceptional and smooth, but... I had to downclock to stock frequencies because of game or PC hangs (the Red Devil Ultimate is factory overclocked to 2400-something MHz). For comparison, I just played BF5 at ultra with the GPU overclocked further via the Radeon profile (on top of the factory overclock) and had no hangs; butter smooth. I thought I would have to liquid-metal it, but I reapplied MX-4 on the die and new pads and it's all OK. Maybe the hangs are due to BF2042's beta status.


----------



## The EX1

Bobb3rdown said:


> I scored 44,206 in Fire Strike. Getting some good scores though; managed to get 91st. Still haven't tuned the RAM yet: 3200MHz CL14 running at 3800MHz CL14. Just dropped tRAS to 32 from the XMP 14-14-14-34.


The highest I could get an XTX card was 2775-2800mhz and I have had 4 of them. One reference, one Red Devil, and now two air toxics. I never upped the voltage to 1.2v. My Red Devil is a golden XTX IMO. It can bench at 2820mhz, 400w, 2150 mem with fast timings at stock volts.

My second Toxic was better than the first. 15% PL and it is running at 2675mhz and 2140 on memory.

Are you going to run locked 1.2v 24/7?


----------



## Bobb3rdown

The EX1 said:


> The highest I could get an XTX card was 2775-2800mhz and I have had 4 of them. One reference, one Red Devil, and now two air toxics. I never upped the voltage to 1.2v. My Red Devil is a golden XTX IMO. It can bench at 2820mhz, 400w, 2150 mem with fast timings at stock volts.
> 
> My second Toxic was better than the first. 15% PL and it is running at 2675mhz and 2140 on memory.
> 
> Are you going to run locked 1.2v 24/7?


Not likely, with the temps I'm getting. I'm dropping the power 10% and just upping the power and voltage for benchmarks. What do you set your base power draw at on the Toxic? I've been doing 330W, and 375W max.

If all it needs is more power draw, I won't bother upping the voltage. Just don't really want to push it too far.


----------



## lestatdk

Bobb3rdown said:


> ive played around with a 6600 and 6600xt and was never able to do that before. the final version is also a bit different layout. by all accounts (2) ive seen in hand the toxic air-cooled is an xtx chip. but this makes me wonder. under the frequency tab my gfx max clock is 2660mhz. ill be dammed if i can get passed that. even with 1.2v and 330w + 15%


My card has the same problem. It's like hitting a brick wall. Even with temps staying low and MPT power values cranked way up so it's not even close to power limited. But MPT says 2660 and that's as far as it goes. 
I've given up trying to OC it further and currently running without MPT as well.


----------



## CS9K

lestatdk said:


> My card has the same problem. It's like hitting a brick wall. Even with temps staying low and MPT power values cranked way up so it's not even close to power limited. But MPT says 2660 and that's as far as it goes.
> I've given up trying to OC it further and currently running without MPT as well.


That's the silicon lottery for ya :<


----------



## lestatdk

CS9K said:


> That's the silicon lottery for ya :<


Sure, but why is it at the exact value in MPT? I don't think that's a coincidence.


----------



## CS9K

lestatdk said:


> Sure, but why is it at the exact value in MPT? I don't think that's a coincidence.


I don't have that issue, nor do a lot of other people. I would say that it is a coincidence, in your case.


----------



## The EX1

Bobb3rdown said:


> Not likely. With the temps I'm getting. I'm dropping the power 10%. Just upping the power and voltage in benchmarks. What do you set your base power draw at on the toxic? I've been doing 330w. And 375w max
> 
> If all it needs is more power draw I won't bother upping the voltage. Just don't really want to push it to far.


Those clocks are without MPT changes, just a 15% increased PL. Your power demand will change with the workload though; Fire Strike runs at faster clocks than Time Spy for me. The Toxic clocks I listed are during gaming: 2600-2675MHz is where the card hovers, then it hits a frequency wall. With 375W set in MPT, the card will still only use 330W in games. With much more power, the card becomes thermally limited anyway. This is plenty fast for a "daily driver" card.



lestatdk said:


> Sure, but why is it at the exact value in MPT? I don't think that's a coincidence.


That is just a coincidence. All of my 6900 XTs have the same 2660 value set in MPT. I think that is just a baked in limit so that the stock card doesn't overboost and crash? My cards clock anywhere from 2600-2800mhz depending on the silicon.


----------



## Bobb3rdown

The EX1 said:


> Those clocks are without MPT changes. Just 15% increased PL. Your PL demand will change on the workload though. Firestrike runs at faster clocks than TS for me. The Toxic clocks I listed are during gaming. 2600-2675mhz is where the card hovers, then it hits a frequency wall. With 375w set in MPT, the card will still only use 330w in games. Much more power and the card becomes thermally limited anyway. This is plenty fast for a "daily driver" card.
> 
> 
> 
> That is just a coincidence. All of my 6900 XTs have the same 2660 value set in MPT. I think that is just a baked in limit so that the stock card doesn't overboost and crash? My cards clock anywhere from 2600-2800mhz depending on the silicon.


Makes sense. I'm sure I've seen speeds like that in games as well. Honestly it performs great at -10% PL, boosting to 2300ish at 250W. Still plenty of frames, and it stays under 80C hotspot at 60% fan speed. I'm still amazed at how quiet the fans are on this card.


----------



## Blameless

SoloCamo said:


> I play at 4K and see this is the card's weak point vs the 3080/3090. Is it very bandwidth constrained?


Performance generally falls off faster at higher resolutions than the competing parts, except for those edge cases where the 3080s are just running out of VRAM. In many, but not all, cases this appears to be a bandwidth constraint.



SoloCamo said:


> Is memory oc'ing worth it?


It's worth it in the sense that whatever you do get is essentially free performance. The GDDR6 on these cards is rarely temperature limited, and OCing it doesn't increase power consumption enough to be a deterrent (overvolting the memory is another matter, and a waste unless you're under water and don't need to care about the power budget at all).

However, most Navi 21 parts won't see huge gains from memory OCing because the memory rarely has much headroom, even on cards that aren't capped at 2150.



SoloCamo said:


> I'm actually more interested in undervolting as well, just to keep temps/power draw lower if I can retain stock clocks.


You can undervolt most samples quite significantly while still OCing.
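As a rough first-order estimate of what an undervolt buys you: dynamic power scales roughly with f·V². The numbers below are made-up illustrations, not measurements from any particular card:

```python
# First-order dynamic power model: P ~ f * V^2. Ignores static leakage,
# so real savings will differ somewhat, but it shows why undervolting
# pays off so well at a fixed clock.
def scaled_power(p0_w: float, f0: float, v0: float, f1: float, v1: float) -> float:
    return p0_w * (f1 / f0) * (v1 / v0) ** 2

# Hypothetical: 255W at 2250MHz/1150mV, undervolted to 1050mV at the same clock
print(round(scaled_power(255, 2250, 1150, 2250, 1050)))  # ~213 W
```

About a 17% power cut for the same clock, which is why temps and fan noise drop so noticeably.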


----------



## jonRock1992

LtMatt said:


> Are you now on the LC BIOS? What resolution are you using?


Stock Red Devil Ultimate bios. I can run over 2200MHz on the memory with default timings. I was just wondering how that compares to 2150MHz fast timings in games. I'm using a 1440p 270Hz monitor.


----------



## SoloCamo

Blameless said:


> Performance generally falls off faster at higher resolutions than the competing parts, except for those edge cases where the 3080s are just running out of VRAM. In many, but not all, cases this appears to be a bandwidth constraint.
> 
> 
> 
> It's worth it in the sense that whatever you do get is essentially free performance. The GDDR6 on these cards is rarely temperature limited, and OCing it doesn't increase power consumption enough to be a deterrent (overvolting the memory is another matter, and a waste unless you're under water and don't need to care about the power budget at all).
> 
> However, most Navi 21 parts won't see huge gains from memory OCing because the memory rarely has much headroom, even on those cards that aren't capped at 2150.
> 
> 
> 
> You can undervolt most samples quite significantly while still OCing.


Thanks man. It's crazy to think I'm coming from a Radeon VII that held over 1800MHz core and 1200MHz HBM2 (1.2 TB/s bandwidth vs 512 GB/s for the 6900 XT) at around 990mV with less than 50% fan speed on the stock air cooler. At stock voltage the card was 1170mV. Really looking forward to the performance jump of this card.
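Those headline bandwidth numbers follow straight from the bus widths, for anyone curious. A sketch (per-pin rates and bus widths per AMD's public specs):

```python
# Peak memory bandwidth = per-pin data rate x bus width / 8.
def bandwidth_gbs(gbps_per_pin: float, bus_bits: int) -> float:
    return gbps_per_pin * bus_bits / 8

# Radeon VII: HBM2 on a 4096-bit bus; 1200MHz double data rate = 2.4 Gbps/pin
radeon_vii = bandwidth_gbs(2.4, 4096)  # ~1228.8 GB/s, i.e. the "1.2 TB/s"
# RX 6900 XT: 16 Gbps GDDR6 on a 256-bit bus
rx_6900xt = bandwidth_gbs(16.0, 256)   # 512.0 GB/s
print(radeon_vii, rx_6900xt)
```

So the 6900 XT really does have less than half the raw bandwidth; Infinity Cache is what makes up the difference.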


----------



## tolis626

jonRock1992 said:


> Stock Red Devil Ultimate bios. I can run over 2200MHz on the memory with default timings. I was just wondering how that compares to 2150MHz fast timings in games. I'm using a 1440p 270Hz monitor.


Pardon my random intrusion, but what monitor are you running? The only 1440p 240Hz one I know of is the Samsung G7, but as far as I know these don't take very kindly to overclocking. Hell, I have an older Samsung CHG70 (in desperate need of replacement btw), which is 1440p 144Hz and won't reliably overclock even 10Hz higher. Even going to 150Hz will either not work or require that I drop to 6-bit color. Pretty pathetic.


SoloCamo said:


> Thanks man. It's crazy to think I'm coming from a Radeon VII that held over 1800MHz core and 1200MHz HBM2 (1.2 TB/s bandwidth vs 512 GB/s for the 6900 XT) at around 990mV with less than 50% fan speed on the stock air cooler. At stock voltage the card was 1170mV. Really looking forward to the performance jump of this card.


Well, GCN was severely bandwidth dependent; basically, the more you threw at it, the better. Even those insane Vega memory subsystems showed performance improvements from memory overclocking. Navi in general is a different beast, and with Infinity Cache, bandwidth requirements have changed dramatically. They still benefit from more, but they aren't desperate for it, so to speak, like GCN cards.

Now, considering your Radeon VII has similar performance to a 5700XT, well, you're in for a treat.


----------



## jonRock1992

tolis626 said:


> Pardon my random intrusion, but what monitor are you running? The only 1440p 240Hz one I know of is the Samsung G7, but as far as I know these don't take very kindly to overclocking. Hell, I have an older Samsung CHG70 (in desperate need of replacement btw), which is 1440p 144Hz and won't reliably overclock even 10Hz higher. Even going to 150Hz will either not work or require that I drop to 6-bit color. Pretty pathetic.
> 
> Well, GCN was severely bandwidth dependent; basically, the more you threw at it, the better. Even those insane Vega memory subsystems showed performance improvements from memory overclocking. Navi in general is a different beast, and with Infinity Cache, bandwidth requirements have changed dramatically. They still benefit from more, but they aren't desperate for it, so to speak, like GCN cards.
> 
> Now, considering your Radeon VII has similar performance to a 5700XT, well, you're in for a treat.


I'm using the Asus XG27AQM. Unfortunately mine came with a dead pixel. Actually, this is the second one that I got with a dead pixel, but I'm probably going to keep it because the dead pixel is in the top left corner, and it has very little backlight bleed/IPS glow.


----------



## ZealotKi11er

tolis626 said:


> Pardon my random intrusion, but what monitor are you running? The only 1440p 240Hz one I know of is the Samsung G7, but as far as I know these don't take very kindly to overclocking. Hell, I have an older Samsung CHG70 (in desperate need of replacement btw), which is 1440p 144Hz and won't reliably overclock even 10Hz higher. Even going to 150Hz will either not work or require that I drop to 6-bit color. Pretty pathetic.
> 
> Well, GCN was severely bandwidth dependent; basically, the more you threw at it, the better. Even those insane Vega memory subsystems showed performance improvements from memory overclocking. Navi in general is a different beast, and with Infinity Cache, bandwidth requirements have changed dramatically. They still benefit from more, but they aren't desperate for it, so to speak, like GCN cards.
> 
> Now, considering your Radeon VII has similar performance to a 5700XT, well, you're in for a treat.


I don't think GCN was memory bound for gaming. The issue with GCN was utilization: in some games you'd see GCN doing extremely well, but in others it would lag behind. The best Vega feature missing from RDNA is HBCC.


----------



## CS9K

ZealotKi11er said:


> I don't think GCN was memory bound for gaming. The issue with GCN was utilization: in some games you'd see GCN doing extremely well, but in others it would lag behind. The best Vega feature missing from RDNA is HBCC.


Likewise, cc @Blameless, I don't think it's the Infinity Cache itself holding the RX 6900 XT back at 4K so much as the current _size_ of the cache. The bad news is that people much smarter than I am have said 128MB is _just_ a bit too small to work on an entire 4K frame as efficiently as the cache does at lower resolutions, which would explain why increasing memory speed and tightening timings/latency helps so much at 4K: the card has to lean on main memory much harder than it does at lower resolutions.

The good news is that, regardless of whether RDNA3's top-end SKU has 256MB or 512MB of cache in total, it will _at least_ have double what the RX 6900 XT has, which should alleviate the 4K cache bottleneck we're seeing now.
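To put rough numbers on "a bit too small": even a single full-resolution render target is a sizeable chunk of 128MB at 4K, and a frame touches several. The target mix below is made up for illustration, not any specific engine:

```python
# Size of one full-resolution render target, in MiB.
def target_mib(width: int, height: int, bytes_per_pixel: int) -> float:
    return width * height * bytes_per_pixel / 2**20

rgba8_4k = target_mib(3840, 2160, 4)            # ~31.6 MiB for one RGBA8 target
# Hypothetical deferred-renderer working set at 4K:
working_set = (target_mib(3840, 2160, 8)        # HDR color, e.g. RGBA16F
               + 3 * target_mib(3840, 2160, 4)  # three G-buffer targets
               + target_mib(3840, 2160, 4))     # depth/stencil
print(round(rgba8_4k, 1), round(working_set))   # ~31.6 MiB and ~190 MiB
```

Even this simplified working set blows past 128MB at 4K (while fitting comfortably at 1440p), which lines up with the hit rate falling off at higher resolutions.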


----------



## J7SC

...RDNA3 w/ HBM3 would be nice (and ni$e)


----------



## CS9K

J7SC said:


> ...RDNA3 w/ HBM3 would be nice (and ni$e)


With a gigantic cache, it shouldn't need it 😈 

But, that is speculation on my part, I don't know how it will perform. Fingers crossed, though!


----------



## J7SC

CS9K said:


> With a gigantic cache, it shouldn't need it 😈
> 
> But, that is speculation on my part, I don't know how it will perform. Fingers crossed, though!


...lots of rumours (and some related patent filings), but 'a future' RDNA gen will be tile/chiplet based, and both HBM and a bigger Infinity Cache would be on board in the top models, though that could also refer to the pro/workstation model.


----------



## Blameless

CS9K said:


> Likewise, cc @Blameless, I don't think it's the Infinity Cache itself holding the RX 6900 XT back at 4K so much as the current _size_ of the cache. The bad news is that people much smarter than I am have said 128MB is _just_ a bit too small to work on an entire 4K frame as efficiently as the cache does at lower resolutions, which would explain why increasing memory speed and tightening timings/latency helps so much at 4K: the card has to lean on main memory much harder than it does at lower resolutions.


I tend to see the Infinity Cache as the reason the card can perform at all, given the limited main memory bandwidth. After all, it's generally competitive with parts that have 50-100% more bandwidth.

But yes, there does come a point where performance falls off because the card has to wait on its main pool of VRAM too often. However, it can be hard to tell the difference between a shader limitation and a memory bandwidth limitation without benching the impact of different memory clocks/timing tables, as shader performance often scales directly with resolution, and Navi 21 is relatively weak there vs. GA102.



J7SC said:


> ...lots of rumours (and some related patent filings), but 'a future' RDNA gen will be tile/chiplet based, and both HBM and a bigger Infinity Cache would be on board in the top models, though that could also refer to the pro/workstation model.


Time will tell, of course, but I don't expect HBM to show up again in the consumer space any time soon, unless they need some halo product to match a top part from NVIDIA. It's probably cheaper and more effective to rely on increasing cache sizes alone for the gaming focused parts.

I'm pretty sure RDNA3 will have a larger 2D cache (256MiB per tile looks likely), but I strongly suspect succeeding generations will probably leverage V-cache for giant pools of L3, without inflating die size. 768MiB-1GiB per tile might be plausible for RDNA4, but I digress.


----------



## J7SC

Blameless said:


> I tend to see the Infinity Cache as the reason the card can perform at all, given the limited main memory bandwidth. After all, it's generally competitive with parts that have 50-100% more bandwidth.
> 
> But yes, there does come a point where performance falls off because the card has to wait on its main pool of VRAM too often. However, it can be hard to tell the difference between a shader limitation and a memory bandwidth limitation without benching the impact of different memory clocks/timing tables, as shader performance often scales directly with resolution, and Navi 21 is relatively weak there vs. GA102.
> 
> 
> 
> Time will tell, of course, but I don't expect HBM to show up again in the consumer space any time soon, unless they need some halo product to match a top part from NVIDIA. It's probably cheaper and more effective to rely on increasing cache sizes alone for the gaming focused parts.
> 
> I'm pretty sure RDNA3 will have a larger 2D cache (256MiB per tile looks likely), but I strongly suspect succeeding generations will probably leverage V-cache for giant pools of L3, without inflating die size. 768MiB-1GiB per tile might be plausible for RDNA4, but I digress.


Leaving aside the HBM question (for consumer cards, beyond pro/workstation/enterprise), the RDNA 'x' tile/chiplet designs I read about seemed pretty trick, especially the interconnect between the tiles (which the recent AMD patent filing referred to, if memory serves). Intel and apparently NVIDIA are also working on tiled GPUs, and with multiple tiles per card, applications will see the GPU as a single unit, rather than the multiple discrete units Crossfire and SLI worked with, which of course necessitated separate driver profiles - one of the major adoption issues with that tech.

As an aside, I have a 2x 2080 Ti system that runs the rarer CFR (checkerboard) SLI on several apps that happen to work with it (including FS2020), and IMO CFR was a big step I wish they had pursued further, not least because there's no micro-stutter etc.

But back to RDNA-x... Infinity Cache will be key, and AMD's multi-year experience with Ryzen CPU chiplets will come in handy for RDNA_x tiles, as will the 3D/stacked V-Cache advancements. The key question is exactly how to integrate the tiles, which must share a common memory pool somewhere along the line at one end... I wonder if AMD will in fact have two types of Infinity Cache (or whatever the right terminology would be) at the other end as part of the interconnect: a larger common one and a smaller one per tile. So much fun to speculate about...


----------



## lawson67

tolis626 said:


> Pardon my random intrusion, but what monitor are you running? The only 1440p 240Hz one I know of is the Samsung G7, but as far as I know these don't take very kindly to overclocking. Hell, I have an older Samsung CHG70 (in desperate need of replacement btw), which is 1440p 144Hz and won't reliably overclock even 10Hz higher. Even going to 150Hz will either not work or require that I drop to 6-bit color. Pretty pathetic.
> 
> Well, GCN was severely bandwidth dependent. Basically the more you threw at it, the better. Even these insane Vega memory subsystems showed performance improvements from memory overclocking. Navi in general is a different beast. And with infinity cache, bandwidht requirements have changed dramatically. They still benefit from more, but they aren't desperate for it, so to speak, like GCN cards.
> 
> Now, considering your Radeon VII has similar performance to a 5700XT, well, you're in for a treat.


I prefer my CHG70 over the G7, which is way too curved for my liking. However, I only use it for browsing the net etc.; for gaming I'm hooked up to my 65" LG OLED CX via HDMI.


----------



## LtMatt

jonRock1992 said:


> Stock Red Devil Ultimate bios. I can run over 2200MHz on the memory with default timings. I was just wondering how that compares to 2150MHz fast timings in games. I'm using a 1440p 270Hz monitor.


Nice. I've never tried without the fast timing tbf. 

Can you test it in a few games?


----------



## ptt1982

My predictions regarding the memory bandwidth bottleneck of 6900xt and its future.

I think the card will work great in 2021 and upcoming 2022 titles at maxed settings using FSR at Ultra Quality (or a similar ~75%-scale smart upscaler) in 4K. This is the sweet spot for the card. The 16GB of VRAM should be able to handle the large amount of ultra-high-quality textures in future titles.

Eventually, maybe in 2023-2024, the 6900 XT might fall to mixed medium-high settings using 1440p upscaled to 4K, which would extend its life again, perhaps until 2025-2026. By that time, FSR 2.0/XeSS 2.0/NIS 2.0 will probably have been released and be closer to today's DLSS 2.0 Quality mode.

RT will probably be a real thing in 2025-2026, and the RX 8000/RTX 5000 series will be a good upgrade at that point, when you can most likely double the amount of VRAM. 4K 120Hz at high settings with RT could be doable in modern titles using smart upscalers and dynamic resolution.


----------



## lawson67

LtMatt said:


> Nice. I've never tried without the fast timing tbf.
> 
> Can you test it in a few games?


I set 2200MHz with default timings on my Red Devil Ultimate and ran the SOTTR benchmark in 4K on my OLED. Where I was normally getting over 100fps at 2100MHz with fast timings, I was now getting 50fps. I had no artifacts; error correction was just kicking in like mad and destroying my FPS. Might not be the same case with Jon's card though.
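Since GDDR6 retries silently instead of artifacting, a memory OC sweep has to score each clock by measured throughput rather than "did it crash". A sketch of the idea (`run_benchmark` is a placeholder for whatever you use, e.g. a scripted SOTTR run parsed for its average FPS):

```python
# Pick the memory clock with the best *measured* FPS: past the error
# correction threshold, throughput drops even though the screen
# still looks perfectly clean.
def best_mem_clock(clocks_mhz, run_benchmark):
    results = {clk: run_benchmark(clk) for clk in clocks_mhz}
    return max(results, key=results.get), results

# Fake numbers echoing the post: fine through 2150, EDR tanks 2200.
fake_fps = {2000: 97.0, 2100: 103.0, 2150: 104.0, 2200: 50.0}
best, results = best_mem_clock(sorted(fake_fps), fake_fps.get)
print(best)  # 2150, even though 2200 "runs" without artifacts
```

The point being: the highest clock that completes a run is not the fastest one.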


----------



## jonRock1992

lawson67 said:


> I set 2200MHz with default timings on my Red Devil Ultimate and ran the SOTTR benchmark in 4K on my OLED. Where I was normally getting over 100fps at 2100MHz with fast timings, I was now getting 50fps. I had no artifacts; error correction was just kicking in like mad and destroying my FPS. Might not be the same case with Jon's card though.





LtMatt said:


> Nice. I've never tried without the fast timing tbf.
> 
> Can you test it in a few games?


I have that game in my steam library. I'll give it a test sometime today.


----------



## SoloCamo

tolis626 said:


> Well, GCN was severely bandwidth dependent; basically, the more you threw at it, the better. Even those insane Vega memory subsystems showed performance improvements from memory overclocking. Navi in general is a different beast, and with Infinity Cache, bandwidth requirements have changed dramatically. They still benefit from more, but they aren't desperate for it, so to speak, like GCN cards.
> 
> Now, considering your Radeon VII has similar performance to a 5700XT, well, you're in for a treat.


Yea, it's why I actually ended up dropping from 1900MHz core to 1800MHz on my air-cooled Radeon VII; the voltage needed to hold it was not worth it. Meanwhile I got lucky and my HBM2 did 1200MHz with no problem, and it showed me far more gains in games at 4K than the extra 100MHz on the core did. This also let me keep the junction temp well away from 100C while keeping the fans at 2700rpm or below at 990mV. It ran far better than stock, with consistent clocks, all while using quite a bit less power.

I'm excited to see that the 6900 XT's stock cooler seems to be a huge improvement in both cooling capability and actual noise. I used to run a 290X factory cooler (and a Vega 64 factory cooler), so the Radeon VII was a huge improvement over those as is. Considering I've been gaming at 4K since 2014, this will be nearly the biggest GPU jump I've made; the only bigger one was going from an AGP X1950 Pro 256MB card to a PCIe GTX 550 Ti.


----------



## CS9K

J7SC said:


> But back to the RDNA-x...Infinity Cache will be key, and AMD's multi-year experience with Ryzen CPU chiplets will also come in handy for RDNA_x tiles, as will the 3D / stacked VCache advancements. The key question will be how exactly to integrate the tiles which must include a common memory pool somewhere along the line at one end... I wonder if in fact AMD will have two types of Infinity Cache (or whatever the right terminology would be) at the other end as part of the interconnect, a larger common one and a smaller per tile. So much fun to speculate about...


If the patents are to be believed, the single blob of Infinity Cache _is_ the interconnect between the chiplets. As in, both chiplets will be wired to a single cache pool, which is also wired into the I/O controller. If that turns out to be true, it completely solves the problem of connecting the tiles fast and efficiently enough, and it's a quite brilliant solution to the problem.

I'd love to buy a beverage of choice for the brilliant fool that said, "use the cache as the interconnect? Brilliant!"


----------



## tolis626

jonRock1992 said:


> I'm using the Asus XG27AQM. Unfortunately mine came with a dead pixel. Actually, this is the second one that I got with a dead pixel, but I'm probably going to keep it because the dead pixel is in the top left corner, and it has very little backlight bleed/IPS glow.


Hmmm... I'll have to check it out. Ideally I'd like to go with IPS over VA, but these things can get real expensive real quick. Thanks for answering mate!


ZealotKi11er said:


> I don't think GCN was memory bound for gaming. The issue with GCN was utilization: in some games you'd see GCN doing extremely well, but in others it would lag behind. The best Vega feature missing from RDNA is HBCC.


Oh, it was both. As GCN dies grew, both problems grew with them. I had a 390x, which was basically the same as the 290x but with more RAM. No problems with utilization (not severe ones, at least), but boy did it benefit from memory overclocking. Too bad mine was a dud for mem OC. But the one time I got to work on Vega cards, I realized that HBM didn't fix the bandwidth dependency. It made it better, sure, but still, you wanted as high memory clocks as possible.

That said, you needed higher resolutions for it to matter more. At lower resolutions, utilization was more of an issue, but these cards scaled amazingly with resolution. And when you increased memory clocks they could shine. Too bad they had other problems too. Like, 350W for a GPU is cool and all today, but I was doing it in 2015. That 390x, that furnace of a GPU, would gulp down 375W when overclocked to the max. And I was using it close to that daily (the card's still working fine after all these years too!). There was even a point where the heating in my house was not working and it was a particularly cold stretch of winter (like 10C, it's freezing by Greek standards, don't laugh at us), so I ended up using my PC to warm up my room. So I just fired up Heaven and it got cozy and warm. Good times.


lawson67 said:


> I prefer my CHG70 over the G7, which is way too curved for my liking. However, I only use it for browsing the net etc.; for gaming I'm hooked up to my 65" LG OLED CX via HDMI.


Yeah, the G7 is way too curved. I despise the 27" model, but I do like the 32" one. Honestly, I'd go for better monitors, but these get very expensive and I don't need them. From my monitor, I just need the performance (for shooters and such), and 1440p 240Hz is about the best you can get, with the G7 being, AFAIK, by far the cheapest way to get it without dropping to 1080p. But when I move to a bigger house (hopefully soon) I will also get a CX or its newer equivalent for my more relaxed, single player games and movies. I was hesitant at first, but a friend of mine got the 65" one and holy crap it is glorious.

Also, apart from the lovely subtle curve and acceptable color accuracy, I ended up absolutely despising the CHG70. Terrible overdrive/ghosting making it unusable at <100 FPS (not an issue with the 6900XT), a wonky Freesync implementation with a narrow FPS range, flickering and having to disable it when watching movies and videos because it makes the already bad ghosting even worse, bad QC/reliability, mediocre brightness, gimmicky and useless HDR/local dimming, mediocre pixel response times, problems with burn-in, black crush issues... The list goes on and on, to the point that the only reason I haven't yet picked up the G7 is because it's Samsung and my experience with them has been so terrible with the CHG70. Also, I don't know if it's an issue with my particular monitor, but it's noticeably less smooth when gaming than a friend's MSI ultrawide 144Hz monitor. The performance is there on paper, but the end result is inferior.


SoloCamo said:


> Yeah, it's why I actually ended up dropping down from 1900MHz core to 1800MHz on my air-cooled Radeon VII. The voltage needed to hold it was not worth it. Meanwhile I was lucky and my HBM2 did 1200MHz with no problem and showed me far more gains in games at 4K than the 100MHz on the core. This also allowed me to never really run the junction temp anywhere near 100°C while keeping the RPMs at 2700 or below at all times with 990mV. It was running far better than stock, and clocks were consistent, all while using quite a bit less power.
> 
> I'm excited to see that the 6900XT's stock cooler seems to be a huge improvement, though, in both cooling capability and actual noise. I used to run a 290x factory cooler (and a Vega 64 factory cooler), so the Radeon VII was a huge improvement over those as is. Considering I've been managing 4K since 2014, this will be the single biggest GPU jump I've made. The only time it was bigger was going from an AGP X1950 Pro 256MB card to a PCIe GTX 550 Ti.


If you had the reference 290x, anything else is going to be cool and quiet by comparison. I'll just leave it at that.


----------



## ZealotKi11er

tolis626 said:


> Hmmm... I'll have to check it out. Ideally I'd like to go with IPS over VA, but these things can get real expensive real quick. Thanks for answering mate!
> 
> Oh, it was both. As GCN dies grew, both problems grew with them. I had a 390x, which was basically the same as the 290x but with more RAM. No problems with utilization (not severe ones, at least), but boy did it benefit from memory overclocking. Too bad mine was a dud for mem OC. But the one time I got to work on Vega cards, I realized that HBM didn't fix the bandwidth dependency. It made it better, sure, but still, you wanted as high memory clocks as possible.
> 
> That said, you needed higher resolutions for it to matter more. At lower resolutions, utilization was more of an issue, but these cards scaled amazingly with resolution. And when you increased memory clocks they could shine. Too bad they had other problems too. Like, 350W for a GPU is cool and all today, but I was doing it in 2015. That 390x, that furnace of a GPU, would gulp down 375W when overclocked to the max. And I was using it close to that daily (the card's still working fine after all these years too!). There was even a point where the heating in my house was not working and it was a particularly cold stretch of winter (like 10C, it's freezing by Greek standards, don't laugh at us), so I ended up using my PC to warm up my room. So I just fired up Heaven and it got cozy and warm. Good times.


I was told that Vega 64 in many cases was very poorly utilized. Even RDNA1/2 utilization is not perfect.


----------



## LtMatt

Back to MBA, undervolting, a 250W power limit and pure gaming.


----------



## ZealotKi11er

Use 2 PCIE power cables pls.


----------



## LtMatt

ZealotKi11er said:


> Use 2 PCIE power cables pls.


😂 I tell people to do that for a living.

I’ve no intention of even overclocking this GPU, so I’d rather just use one cable, as my PSU cables look like a dog’s dinner.

Now my Toxic has gone back to one cable.


----------



## SoloCamo

tolis626 said:


> If you had the reference 290x, anything else is going to be cool and quiet by comparison. I'll just leave it at that.


Oh yes, and I used to work that fan hard. Ran my 290x OC'ed its whole life and pushed it at 4K. Ran the memory at 1500MHz to match the 390x. Forgot what my core clocks were, but considering I bought the card within an hour of release back in 2013, it actually was a pretty good OC'er. Sapphire version, came with Battlefield 4. Still works fine and only recently was replaced in my living room / backup PC by my Vega 64. The fan bearing is starting to get noisy, though, unfortunately. Still the best GPU purchase I've made as far as usefulness goes. The 390x is still a good card with its 8GB and is perfectly usable to this day. Matches a 580 when both are OC'ed.


----------



## J7SC

CS9K said:


> If the patents are to be believed, the single blob of infinity cache _is_ the interconnect between the chiplets. As in, both chiplets will be wired to a single cache pool, which is also wired into the I/O controller. If that turns out to be the case, that completely solves the problem of finding a way to "connect the tiles fast and efficiently enough", and it is a quite brilliant solution to the problem, if it turns out to be true.
> 
> I'd love to buy a beverage of choice for the brilliant fool that said, "use the cache as the interconnect? Brilliant!"


I like the Guinness ad!  ...No matter what the exact cache process, I'm excited about tile-based GPUs. Ultimately, it will be a question of L1, L2 and L3 cache distribution and interconnects, as app requests will have to be split up and then reunited after GPU processing. Perhaps they will indeed skip a level of cache, but that's simply too far out of my wheelhouse to sensibly comment on at this stage.

With that in mind, the recent roll-out of Win11 had some initial negative consequences for AMD chiplet CPUs / L3 (now apparently fixed), which highlights that it will be a question not only of onboard cache and hardwired steps but also of software...see also the Alder Lake onboard P/E core scheduler _and_ the Win 11 scheduler. As long as it all will play 'Crysis', I'm happy to consider the tile-gen GPUs...that said, as you and others suggested, the immediate next step for RDNA3(?) will likely be closer to the current flagship, with gen 2 ray tracing and a sizable jump in the size of Infinity Cache.


----------



## tolis626

SoloCamo said:


> Oh yes, and I use to work that fan hard. Ran my 290x oc'ed it's whole life and pushed it at 4k. Ran the memory at 1500mhz to match the 390x. Forgot what my core clocks were but considering I bought the card within an hour of release back in 2013 it actually was a pretty good oc'er. Sapphire version, came with Battlefield 4. Still works fine and only recently was replaced in my living room / backup pc by my Vega 64. Fan bearing is starting to get noisy though unfortunately. Still the best gpu purchase I've made as far as usefulness goes. 390x is still a good card with it's 8gb and is perfectly usable to this day. Matches a 580 when both are oc'ed.


I still fondly remember back then, when people online and reviewers were concerned about the card's longevity given how hot the reference model used to run. 8 years on, I think it's safe to say they were wrong to worry. As for the 390x, well, mine was an MSI Gaming X or something and I had it for 4.5 years, overclocked the snot out of it, ran it constantly pushed close to its limits (regularly 350W under load), then sold it to a friend who undervolted it (actually it ran at under 200W, which is insane) and used it for some light gaming and crypto mining, and then he sold it to a guy who is now running it slightly overclocked with no problems. The card only required a repaste for the last guy (although I had repasted it with Gelid GC Extreme and replaced the stock thermal pads with Fujipoly ones). Insane!

I share your sentiment. The 390x was among the best hardware purchases I've ever made. I was considering the then faster 980, but thankfully my budget didn't allow for it so I went with the 390x and sucked up the performance deficit. Turns out, the 980 is nigh useless in many modern games, where the 390x is still plenty capable in 1080p and sometimes 1440p.


----------



## jonRock1992

So the SOTTR benchmark is a really useful tool for dialing in a memory OC. I tested my GPU with 2700MHz min / 2800MHz max core clocks and various memory configurations. I used the benchmark with the Highest preset, SMAA2TX, Exclusive Fullscreen, and with V-Sync disabled. 2150MHz fast-timings is the best for my particular GPU; 10MHz less and 10MHz more resulted in slightly less performance with fast-timings. The best I could do with default timings was very close to 2150MHz fast-timings, and this required 2170MHz. The AMD auto OC for memory recommended 2220MHz, and this proved to be very unstable in this benchmark, resulting in nearly 30FPS less than with 2150MHz fast-timings.
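The "10MHz less and 10MHz more both lost performance" check above amounts to confirming a local peak in the FPS-vs-memory-clock curve. A minimal sketch of that idea (the helper name and the FPS numbers are mine for illustration, not the poster's actual results):

```python
def is_local_peak(fps_by_clock, clock, step=10):
    """True if `clock` outperforms both the next-lower and next-higher memory setting."""
    return (fps_by_clock[clock] > fps_by_clock[clock - step]
            and fps_by_clock[clock] > fps_by_clock[clock + step])

# Illustrative sweep where 2150MHz (fast timings) is the sweet spot.
measured = {2140: 101.0, 2150: 103.0, 2160: 102.0}
print(is_local_peak(measured, 2150))  # the ±10MHz neighbors both score lower
```

Past the peak, GDDR6 error correction typically eats the extra clock speed, which is why the auto-OC's 2220MHz recommendation can score dramatically worse than a tuned 2150MHz.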

Here is 2150MHz fast-timings:









Here is 2170MHz default-timings:









Here is 2220MHz default-timings:


----------



## CS9K

jonRock1992 said:


> So the SOTTR benchmark is a really useful tool for dialing in memory OC.


I have found Unigine Superposition, "4k Optimized" preset, to be extremely consistent at showing changes from memory overclocks. The benchmark is _ridiculously_ memory-access heavy at 4K (my GPU at OC settings shows a consistent 80% memory controller duty cycle), and the scores are _very_ consistent run-to-run.


----------



## ZealotKi11er

With 2220 you are hitting too many errors, hence the lower score. It's a built-in function, just like what Nvidia has.


----------



## 99belle99

Since people are talking about older AMD cards, I thought I would share mine. I had an R9 290, then moved on to an R9 Fury X, then a Vega 56. I never got the Radeon VII, as all the talk was "wait for Navi." So I then got an RX 5700 XT, and I currently have the RX 6900 XT. The mad thing is all the cards were reference.


----------



## lawson67

jonRock1992 said:


> So the SOTTR benchmark is a really useful tool for dialing in memory OC. I tested my GPU with 2700MHz min / 2800MHz max core clocks and various memory configurations. I used the benchmark with the Highest preset, SMAA2TX, Exclusive Fullscreen, and with V-Sync disabled. 2150MHz fast-timings is the best for my particular gpu. 10MHz less and 10MHz more resulted in slightly less performance with fast-timings. The best I could do with default-timings was very close to 2150MHz fast-timings, and this required 2170MHz. The AMD auto OC for memory recommend 2220MHz, and this proved to be very unstable in this benchmark resulting in nearly 30FPS less than with 2150MHz fast-timings.
> 
> Here is 2150MHz fast-timings:
> View attachment 2532828
> 
> 
> Here is 2170MHz default-timings:
> View attachment 2532829
> 
> 
> Here is 2220MHz default-timings:
> View attachment 2532830






Yep, I worked out some time ago how good the SOTTR benchmark is for tuning VRAM. I used it to tune the VRAM on my RX 6800 XT and then on my RX 6900 XT. I get the best FPS using fast timings between 2112MHz and 2126MHz; if I go over 2126MHz on the VRAM I start to get massive FPS loss. These are my results at 4K Ultra settings (102 FPS) and at 1440p Ultra (180 FPS), with my card's GPU at 2870MHz and VRAM at 2112MHz.

4K Ultra, fast timings, 2112MHz









1440p Ultra, fast timings, 2112MHz


----------



## SoloCamo

When you all reference fast timings at 2150 (or lower) - is there a reference guide for dialing it in on these, or a "guaranteed stable" setup? If I understand correctly, pretty much all but the worst cards will do 2150 out of the box on stock timings? Anything to improve high-res performance is what I'm after. If I can get a few extra FPS at 4K over stock and lower power use, I'll be ecstatic. If shipping is correct, my card will be here Friday, so I hope the silicon lottery treats me well.



tolis626 said:


> I share your sentiment. The 390x was among the best hardware purchases I've ever made. I was considering the then faster 980, but thankfully my budget didn't allow for it so I went with the 390x and sucked up the performance deficit. Turns out, the 980 is nigh useless in many modern games, where the 390x is still plenty capable in 1080p and sometimes 1440p.


Same. When I went for the 290x, it was the GTX 780 or the 780 Ti right around the corner (well, it wasn't announced yet, but we all knew it was coming as the stop-gap after the OG Titan). The 780 & 780 Ti with 3GB aged very poorly overall. I kept my 290x as long as I could, and people laughed and said "the card will be useless far before the extra VRAM comes into play." In fact, I replaced my 290x with a 580 8GB temporarily (before going Vega 64) because I couldn't run max textures in BFV without constant stuttering, even at lower than 4K.

Same crap I was told going 4790k over 4690k ("the CPU will be useless by the time its HT becomes handy"), and I kept that CPU from 2014 to mid 2021. The i5 would have had to be replaced years earlier.


----------



## Bobb3rdown

www.3dmark.com





Some 1C ambient temps last night. Broke into the top 50 for the 5800X/6900XT.


----------



## CS9K

SoloCamo said:


> When all you reference fast timings at 2150 (or lower) - is there a reference guide for dialing it in on these or a "guaranteed stable" setup? If I understand correctly, pretty much all but the worst cards will do 2150 out of the box on stock timings? Anything to improve high res performance is what I'm after. If I can get a few extra fps at 4k over stock and lower power use I'll be ecstatic. If shipping is correct my card will be here Friday so I hope the silicon lottery treats me well.



1. Set your clock speed to something high but not ridiculous (2550 or 2600 should do), and keep the voltage slider all the way to the right
2. Set your fan curve
3. Set the power target to +15%
4. Set memory to 2100 w/ fast timings (be sure to save all of this as a profile in Adrenalin)
5. Have HWiNFO64 up and running, and make sure you don't hit your PPT limit during the run (reduce clock speed if you do)
6. Load up Unigine Superposition, set it to the "4k Optimized" preset, run it 5 times, discard the outliers, and average the three middle scores
7. Increase memory speed by 10MHz, run it twice more, increase speed by 10MHz, run it twice more, etc.
8. If you see performance plateau, or plateau and then decrease, then the safe memory speed to run is the setting _before_ the plateau
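The sweep logic above (average the middle runs, step the memory clock up, back off when the score stops improving) can be sketched as follows. This is a minimal illustration under my own assumptions: `run_benchmark` is a hypothetical stand-in for launching Superposition at a given memory clock and reading back the score, and the 0.2% `tolerance` for calling something a plateau is my choice, not from the post.

```python
def trimmed_mean(scores):
    """Average the middle scores after discarding the best and worst run."""
    s = sorted(scores)
    middle = s[1:-1] if len(s) > 2 else s
    return sum(middle) / len(middle)

def find_safe_mem_clock(run_benchmark, start=2100, step=10, stop=2250,
                        runs=3, tolerance=0.002):
    """Sweep memory clocks upward; return the last clock BEFORE scores plateau or regress."""
    best_clock = start
    best_score = trimmed_mean([run_benchmark(start) for _ in range(runs)])
    clock = start + step
    while clock <= stop:
        score = trimmed_mean([run_benchmark(clock) for _ in range(runs)])
        # No meaningful gain means the memory is likely error-correcting:
        # back off to the previous (still-improving) setting.
        if score <= best_score * (1 + tolerance):
            return best_clock
        best_clock, best_score = clock, score
        clock += step
    return best_clock
```

In practice each `run_benchmark` call is a manual Superposition run, so the loop is you with a notepad, but the stopping rule is the same: keep the last setting that still showed a real gain.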


----------



## SoloCamo

CS9K said:


> 1. Set your clock speed to something high but not ridiculous (2550 or 2600 should do), and keep the voltage slider all the way to the right
> 2. Set your fan curve
> 3. Set the power target to +15%
> 4. Set memory to 2100 w/ fast timings (be sure to save all of this as a profile in Adrenalin)
> 5. Have HWiNFO64 up and running, and make sure you don't hit your PPT limit during the run (reduce clock speed if you do)
> 6. Load up Unigine Superposition, set it to the "4k Optimized" preset, run it 5 times, discard the outliers, and average the three middle scores
> 7. Increase memory speed by 10MHz, run it twice more, increase speed by 10MHz, run it twice more, etc.
> 8. If you see performance plateau, or plateau and then decrease, then the safe memory speed to run is the setting _before_ the plateau


Awesome, thanks.


----------



## Bobbydo

jonRock1992 said:


> Has anyone here tried out battlefield 2042 yet? I'm getting serious stability issues with that game. Lots of stutters and I had to lower my overclock by 70MHz and increase voltage. This game makes Timespy GT2 look like nothing.


I did. Every time I overclock even 20MHz, it crashes. But I just leave it at default and my GPU runs at around 2700MHz.


----------



## Bobbydo

Kawaz said:


> long time lurker.
> 
> Finally broke that 25k, even though it took 1.3V


Can you please show the settings you used in the software and MPT?


----------



## ZealotKi11er

99belle99 said:


> Since people are talking about older AMD cards I thought I would share with you mine. I had a R9 290 then moved on to a R9 Fury X then a Vega 56 never got the Radeon VII as all the talk was wait for Navi. So I then got a Rx 5700 XT and currently have the Rx 6900 XT. The mad thing is all the cards were reference.


I have not used an AMD GPU in my daily machine since the 290X. The reason is that Nvidia GPUs are garbage to tinker with, so I just use the 3080 for gaming while the 6900XT sits on my test bench.


----------



## Kawaz

Bobbydo said:


> Can you please show the settings you used in and software and mpt?


I just broke 26k 


Unfortunately I have changed the settings by now. I'll post the final ones when I'm done tweaking for Superposition.

Clockspeeds are really going now


----------



## Kawaz

There it is, an actual 3GHz in a bench (the run completed).


----------



## CS9K

Kawaz said:


> There it is, the actual 3ghz in a bench (the run completed)


Hot damn, dude! That is awesome!


----------



## chispy

I still see a lot of out-of-line scores in benchmarks; clocks and temps do not match the scores (way too high scores for such clocks and high temps), and it makes absolutely no sense, hence why I stopped posting my benchmarks here :/ . This is sad, very sad. Credibility has gone out the window on what the 6900XT should really be scoring. I'm on my fourth 6900XT, and this gem of an ASRock 6900XTXH that I have now can run 3000MHz/2160MHz real clocks on a water chiller at -21C, with all the tricks in MPT (FCLK, etc.) and with an EVC2 for real voltage tuning, and even overclocked to the max it cannot get near some of these scores. I call BS. I have been an extreme overclocker for 20 years, running liquid nitrogen on CPUs, GPUs and memory, and I know enough about these GPUs and what they are capable of, so I'm not talking out of my behind; I talk based on a lot of experience with these 6900XT cards.

By the way, I was the first one to write an overclocking guide for the 6900XT at HWBot, and behind the scenes we top extreme overclockers have been talking about these out-of-line and impossible scores. This is my last post on this thread, as there seems to be a group of people deceiving everyone else with their false scores - but not me or the really top extreme overclocking guys at HWBot. People need to wake up and stop being so naive. Enjoy your out-of-line scores. If it walks like a duck, smells like a duck, quacks like a duck, it is certainly 100% a duck  . I'm out of here. Rant over, but it is the real sad truth...


----------



## ZealotKi11er

chispy said:


> I still see a lot of out of line scores in benchmarks, clocks and temps do not match the scores ( way too high scores for such clocks and high temps ) and makes absolutely no sense , hence why i stopped posting my benchmarks here :/ , this is sad , very sad. Credibility has gone out the window on what the 6900xt should be really scoring. I'm on my fourth 6900xt and this Gem ASRock 6900xtxh that i have now can run 3000Mhz/2160Mhz real clocks on water chiller at -21c , with all the tricks on mpt fclk , etc... with evc2 for real voltage tunning overclock to the max cannot get near some of this scores. I call bs. I have been an extreme overclocker for 20 years running liquid nitrogen on cpus , gpus and mems and know enough of this gpus and what are they capable of , so im not talking out of my behind i talk based in a lot of experience with this 6900xt cards.
> 
> By the way i was the first one to write an overclocking guide for the 6900xt at hwbot and behind the scenes we the top extreme overclockers have been talking about this out of line and impossible scores. My last post here on this thread as there seems to be a group of people deceiving everyone else with their false scores , but not me or the really top exteme overclocking guys at hwbot. People needs to wake up and stopped been so naive. Enjoy it your out of line scores. if it walks like a duck , smell like a duck , kuack like a duck it is certainly 100% a duck  . I'm out it here. Rant over , but is the real sad truth ...


Because overclocking RDNA2 is not about going extreme. Most 3090 top scores come from people going with LN2, while 6900 XT scores are mostly just water cooling. What are you scoring in Time Spy with a 3000 clock?


----------



## Kawaz

chispy said:


> I still see a lot of out of line scores in benchmarks, clocks and temps do not match the scores ( way too high scores for such clocks and high temps ) and makes absolutely no sense , hence why i stopped posting my benchmarks here :/ , this is sad , very sad. Credibility has gone out the window on what the 6900xt should be really scoring. I'm on my fourth 6900xt and this Gem ASRock 6900xtxh that i have now can run 3000Mhz/2160Mhz real clocks on water chiller at -21c , with all the tricks on mpt fclk , etc... with evc2 for real voltage tunning overclock to the max cannot get near some of this scores. I call bs. I have been an extreme overclocker for 20 years running liquid nitrogen on cpus , gpus and mems and know enough of this gpus and what are they capable of , so im not talking out of my behind i talk based in a lot of experience with this 6900xt cards.
> 
> By the way i was the first one to write an overclocking guide for the 6900xt at hwbot and behind the scenes we the top extreme overclockers have been talking about this out of line and impossible scores. My last post here on this thread as there seems to be a group of people deceiving everyone else with their false scores , but not me or the really top exteme overclocking guys at hwbot. People needs to wake up and stopped been so naive. Enjoy it your out of line scores. if it walks like a duck , smell like a duck , kuack like a duck it is certainly 100% a duck  . I'm out it here. Rant over , but is the real sad truth ...


Well, we all know there was/is an issue with the 6900XT and Time Spy (the tessellation bug) at this point. As for my own scores and clocks: the 3GHz passes 4K Optimized. I have to dial back to 3050 in Wattman for 1080p Extreme, and for Time Spy I have to dial back to 2940 - that still made me break 26k. As for people cracking 27k and above, I don't see how; I can't even get close with pretty similar settings.
But as for my own scores, they match up with similar cards both in 3DMark and the Unigine benches. I'm 1st for the 6900XT on HWBot, and second place is at 14050 with 3000MHz max in Wattman.
So, at least for my part, I stand behind my scores.


----------



## ZealotKi11er

Kawaz said:


> Well, we all know there was/is an issue with 6900xt and timespy (tesselation bug) at this point. As for my own scores, and clocks. The 3ghz passes 4k optimized. Have to dial back on 1080EX to 3050 in wattman. As for timespy i have to dial back to 2940. Still made me break the 26k. As for people cracking 27k and above i dont see how. Cant even get close with pretty similar settings.
> But as for my own scores they match up with similar cards both in 3dmark and Unigine benches. Im 1st for 6900xt on HWbot, and second place is at 14050 with 3000mhz max in wattman.
> So at least for my part i stand behind my scores.


I got 26.25K with 2870MHz. I can probably get to 27K with more core OC.


----------



## SoloCamo

Got my 6900xt. What a jump in performance over my Radeon VII at 4K. Quiet as can be, too, considering it's a reference cooler. AMD has really stepped their game up. I haven't started tuning it, but the itch is there. Maps where I was getting 80-90FPS at 4K in BFV are now at 140FPS+, and my card at default settings seems to hang over 2500MHz. I've got the fastest combo in the world in Firestrike Ultra with a 10900 (non-K) and 6900xt. Helps that I'm only one of two entries for that hardware config, though.


----------



## ZealotKi11er

SoloCamo said:


> Got my 6900xt. What a jump in performance over my Radeon VII at 4k. Quiet as can be too considering it's a reference cooler. AMD has really stepped their game up. Haven't started tuning it but the itch is there. Maps I was getting 80-90fps on 4k in BFV are now at 140fps+ and my card at default settings seems to hang over 2500mhz.. I've got the fastest combo in the world in Firestrike Ultra with a 10900 (non k) and 6900xt. Helps that I'm only one of two entries for that hardware config though


Yeah, the cooler and noise of the 6900 XT are miles better than the Radeon VII's. I personally could not use the Radeon VII at stock settings. The Radeon VII still has its place.


----------



## Bobb3rdown

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,Micro-Star International Co., Ltd. MAG X570 TOMAHAWK WIFI (MS-7C84) (3dmark.com) 

#98 in the Firestrike hall of fame, #78 for single GPU.


----------



## Kawaz

ZealotKi11er said:


> I got 26.25K with 2870MHz. I can probably get to 27K with more core OC.


I believe it. I just don't seem to be able to match cards with similar clocks. However, yours is definitely better than mine in Timespy, for sure.


----------



## Kawaz

New 4k optimized run, pushing them clocks even higher.


----------



## tolis626

Kawaz said:


> New 4k optimized run, pushing them clocks even higher.


My man, I envy you. Throwing 1.34V in a 6900XT? If I had balls like yours, I'd either be driving a Ferrari or I'd be dead. Jesus man... Nice. I'll just stay here and enjoy mine running at 1.15V, knowing my place like the wuss that I am.


----------



## Bobb3rdown

tolis626 said:


> My man, I envy you. Throwing 1.34V in a 6900XT? If I had balls like yours, I'd either be driving a Ferrari or I'd be dead. Jesus man... Nice. I'll just stay here and enjoy mine running at 1.15V, knowing my place like the wuss that I am.


It will take 1.2V and 380W no problem.


----------



## ZealotKi11er

Something is hugely wrong with HUB's results. I just swapped my 3080 for the 6900 XT and I am getting 10-15 FPS more at 4K Ultra.


----------



## bloot

Yeah, they seem to be the only ones to get such bad results on radeon cards.

Battlefield 2042 im Technik-Test: Benchmarks in Full HD, WQHD & Ultra HD, Frametimes und VRAM - ComputerBase 

Battlefield 2042 PC Benchmarks, Performance, and Settings | Tom's Hardware (tomshardware.com)


----------



## ZealotKi11er

They probably jumped the gun and did not use the latest driver.


----------



## tolis626

Bobb3rdown said:


> It will take 1.2v and 380w no problem


What will? My card? Yeah, right. It's already hitting over 105C during TimeSpy at 1.15V and 375W. I'm not saying you're wrong, I know the hardware can take it, it's just that either my cooler, case airflow, mounting or all of the above can't handle so much heat. Unless I improve upon those, most importantly remounting the cooler and repasting, this is as far as I go. Which isn't terrible, because my card is rather a dud for overclocking, so I see no point stressing the hardware with all that extra heat for attempts at more performance that won't come.

PS : My card is also an XTX, so no 1.2V for me unless I do the MPT trickery. 1.175V is as far as it'll go without it.


----------



## Bobb3rdown

tolis626 said:


> What will? My card? Yeah, right. It's already hitting over 105C during TimeSpy at 1.15V and 375W. I'm not saying you're wrong, I know the hardware can take it, it's just that either my cooler, case airflow, mounting or all of the above can't handle so much heat. Unless I improve upon those, most importantly remounting the cooler and repasting, this is as far as I go. Which isn't terrible, because my card is rather a dud for overclocking, so I see no point stressing the hardware with all that extra heat for attempts at more performance that won't come.
> 
> PS : My card is also an XTX, so no 1.2V for me unless I do the MPT trickery. 1.175V is as far as it'll go without it.


For sure, Timespy is a killer. I basically have the same card in the Toxic, air-cooled. I've found that it's more the power draw that pushes the junction so high; at least on mine I usually have over a 20C delta between the edge and the junction. On a cold night, though, it can take it and stay in the 90s.


----------



## FatFingerGamer

Ajdaho pl said:


> You put Your post under mine, but it's probably not directed at me, because I don't know what you mean?


I said PM, my friend (private message, aka conversation). If it were directed at you, I would have quoted you or addressed you directly. I made a generalized public post. Lol, sorry, I would have responded sooner, but I was banned because of that post. It is what it is though, lol. Definitely not directed at you, though. 👾


----------



## snakeeyes111

www.3dmark.com





Got it


----------



## Kawaz

tolis626 said:


> My man, I envy you. Throwing 1.34V in a 6900XT? If I had balls like yours, I'd either be driving a Ferrari or I'd be dead. Jesus man... Nice. I'll just stay here and enjoy mine running at 1.15V, knowing my place like the wuss that I am.


Liquid metal, lots of fans, 560/480/360 rads and the Norwegian winter - the temps aren't too bad either.


----------



## SoloCamo

ZealotKi11er said:


> They probably jumped the gun and did not use the latest driver.


Yea, he was on an older driver at the time. My performance is more in line with what they show for the 3090, and that's at stock.

Side note: Undervolting / performance discrepancies. So for example, in Forza Horizon 5, I kept power limit at stock, dropped voltage to 1125 from 1175, have memory at 2100 w/ fast timing and kept core at 2479. This usually has me at 2400mhz or above and my power use is around 235w. FPS is usually around 80-120fps at 4k all ultra.

On the other hand, the same settings in BFV will have me at 2300mhz and power use hitting the 255w limit. FPS is anywhere from 110-160fps depending on map at 4k all ultra.

Both show pretty much 100% gpu usage so why the extra power draw with lower clocks on BFV? Seems like Navi doesn't listen to my inputs as consistently as my Vega cards did or is it just me?


----------



## tolis626

Bobb3rdown said:


> For sure, TimeSpy is a killer. I basically have the same card in the Toxic, air-cooled. I've found that it's mostly the power draw that pushes the junction so high; on mine I usually see over a 20C delta between edge and junction. On a cold night, though, it can take it and stay in the 90s.


Well, the mounting on mine must be pretty borked. I don't only get over 20C delta, but I also get edge temps over 75C. And that's with fans at pretty much 100%. My case airflow isn't helping either, the Evolv X doesn't do the card any favors. So unlike you, I get terrible thermals if I push the PL. Still, the performance I'm getting at 330W is more than satisfactory, so I'm in no hurry to tinker with it. 


Kawaz said:


> Liquid metal, lots of fans, 560/480/360 rads, and Norwegian winter, so the temps aren't too bad either.


Yeah, ok, I'd still be afraid to push it that hard. 

Also, Greek winter is more like Norwegian summer, or a nice warm Norwegian spring day. Today is a rather cold day for late November by Greek standards, but the high was like 18C and the low 9-10C. With temps like these, you'll see Greeks turn on the heating in their homes and wear thick winter clothing. I always joke about it and tell them that, if a Scandinavian saw us acting like that in these temps, we'd probably end up being the laughingstock over there. I'll go a bit off topic and say that I always found this fascinating. With how sunny Greece usually is, there are a lot of people from northern countries who will swim at our beaches when on vacation here, even when it's cold by our standards. I was in Crete a few weeks ago. It had rained a few days before, so the weather was rather cool, but still in the 20-25C range. We went to the beach for a walk, and you'd consistently see Greeks staying out of the water in thick clothes, while tourists from the north were swimming and chilling on the sand. Such a world of difference.


----------



## schidddy

Has anyone managed to get the BIOS flash working? I have a Liquid Devil 6900XT and wanted to flash the Ultimate BIOS onto it, but I get a subsystem ID mismatch in AMDVBFlash.


----------



## Bobb3rdown

snakeeyes111 said:


> www.3dmark.com
> 
> Got it


congrats


----------



## Blameless

SoloCamo said:


> Both show pretty much 100% gpu usage so why the extra power draw with lower clocks on BFV? Seems like Navi doesn't listen to my inputs as consistently as my Vega cards did or is it just me?


CPU/GPU usage is generally only reported in the sense of non-clock gated cycles. You have 100% of cycles doing something and still have a huge spread in power consumption/current draw across different loads.

From the lightest load games I can think of to the highest the spread is more than double for the latter. Synthetic loads can almost double that again, if tuned for the hardware. All will show "100%" load because all cycles are being utilized, but a game that isn't very demanding on the architecture might pull 180w with no power limit, while OCCT or the Ray Tracing Feature test (before they capped it in drivers) could pull ~500w at the same settings.

Basically, BFV is doing more work per clock than Forza. This is not unusual.

Personally, I calibrate my 24/7 OCs to a reasonable worst case load. Time Spy Extreme graphics test #2 is usually good for this. If it doesn't power or temperature throttle here, it won't hit the limits of any game I've ever heard of. Most games are about 70-100w less, at the same clocks/(pre-droop) voltage.
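To put rough numbers on that spread, here's a back-of-the-envelope sketch in Python. It's purely illustrative, using the figures SoloCamo quoted (235W at ~2400MHz in Forza, 255W at ~2300MHz in BFV):

```python
# Energy burned per clock cycle for two loads that both report "100% usage".
# The figures are the ones SoloCamo posted, not measurements of mine.

def energy_per_cycle_nj(power_w: float, clock_mhz: float) -> float:
    """Average energy per GPU cycle, in nanojoules."""
    return power_w / (clock_mhz * 1e6) * 1e9

forza = energy_per_cycle_nj(235, 2400)  # ~97.9 nJ/cycle
bfv = energy_per_cycle_nj(255, 2300)    # ~110.9 nJ/cycle

# BFV burns ~13% more energy per cycle at lower clocks:
# more work per clock, same "100%" utilization figure.
print(f"Forza: {forza:.1f} nJ/cycle, BFV: {bfv:.1f} nJ/cycle")
```

Same reported utilization, ~13% more energy per cycle: that's the "work per clock" difference in a single number.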


----------



## Skinnered

Returned my Sapphire EE LC; it was too noisy.
Maybe I had a bad sample, I dunno.
I bought a cheap second-hand reference card and will put an Alphacool Eiswolf 2 360 on it.

Alphacool Eiswolf 2 AIO - 360mm Radeon RX 6800/6800XT/6900 Reference Design with Backplate

Does anybody have experience with this AIO?
Did you gain much more OC headroom?

This reference tops out between 2500-2550MHz core and 2116 mem with fast timings.
Mem can go higher, but performance won't increase; maybe it's not bandwidth-limited at this core frequency?
Will I gain much core frequency, say 2700 or higher? I know it's difficult to predict.

Another question: does anybody have problems with the GTA remaster in DX12? I get a UE4 error during level loading and it crashes to desktop.
BF2042 also crashes, with strange desktop image corruption.
That was with the 21.40.03 drivers.


----------



## Skinnered

Kawaz said:


> New 4k optimized run, pushing them clocks even higher.


Wow, I guess this is an XTXH and not a reference?
Imagine this in gaming.
I would be happy to reach 2800.

BTW, does someone have, or know where I can find, some good MPT settings for a reference 6900? I'm going on water with it.


----------



## CS9K

schidddy said:


> anyone managed to get the bios flash working? i have a LD 6900XT and wanted to flash the ultimate BIOS on it. i Get a subsystem ID Missmatch on AMvbFlash


No, BIOS flashing for RX 6900 XT "XTX" GPUs is not possible yet. Only certain "XTXH" GPUs can be flashed with a certain other "XTXH" BIOS. BIOS unlock/modification is not possible yet.



Skinnered said:


> Wow, I guess this is a xtxh and not a reference?
> Imagine this at gaming.
> I would be happy to reach 2800
> 
> Btw, does someone have or know where I can find some good mpt settings for a reference 6900? I am going on water for it.


To be honest, the only setting that one _needs_ to change is the power limit (W) on the "Power" tab of MorePowerTool. Once you are on water and know that your paste+pad job is good, then crank that setting up to whatever works out to 400W total, and see how far your clock speeds will go.

There _are_ other settings in MPT that one can change, but the performance improvements are fractions of a percent for each of those. Not only that, they take up a LOT of time to stability-check vs the performance you would get out of changing them.


----------



## Kawaz

Skinnered said:


> Wow, I guess this is a xtxh and not a reference?
> Imagine this at gaming.
> I would be happy to reach 2800
> 
> Btw, does someone have or know where I can find some good mpt settings for a reference 6900? I am going on water for it.


I've run SOTTR at over 3GHz. It's an XTXH, yeah, an XFX Black Limited. Mostly only the wattage is needed for daily use. This is with my OC settings: 1.34V, 2350 fclk, 1250 SoC voltage, and lots of other settings. And, as shown, 520 watts. Even more in TimeSpy.


----------



## Bobb3rdown

What's everyone's thoughts on this mount? Factory paste. Now the real question: Kryonaut or GC Extreme? I have both at my disposal. Have new pads as well, but unsure if I should bother.


----------



## CS9K

Bobb3rdown said:


> View attachment 2533775
> 
> View attachment 2533774
> 
> 
> What's everyone's thoughts on this mount. Factory paste. Now the real question kryoknaut or gc extreme? Have both at my disposal. Have new pads as well but unsure if I should bother.


That is a solid paste job, I'd say. Looks like they used some nice, thick goop. There _are_ a few thin spots that would eventually pump out, but it doesn't look bad by any means. 










I would personally stick with GC-Extreme. It's thicker, and much more of a pain in the ass to spread out initially, but it won't pump out nearly as fast as Kryonaut will. IMO: save the Kryonaut for application onto metal heat spreaders, and slap a nice, even layer of GC-Extreme onto the GPU die here.


----------



## Bobb3rdown

CS9K said:


> That is a solid paste job, I'd say. Looks like they used some nice, thick goop. There _are_ a few thin spots that would eventually pump out, but it doesn't look bad by any means.
> 
> View attachment 2533776
> 
> 
> I would personally stick with GC-Exteme. It's thicker, and much more of a pain in the ass to spread out initially, but it won't pump out near as fast as Kryonaut will. IMO: Save the kryonaut for application on to metal heat spreaders, and slap a nice, even layer of GC-Extreme onto the GPU die here


Cool, will give it a shot. TBF it's a super easy cooler to pull: four screws, then the four for the die, and three connectors.


----------



## Skinnered

Kawaz said:


> Ive ran SOTTR with over 3ghz. its a xtxh yeah. XFX black limited. Mostly only watt needed for daily. This is with my OC settings. 1.34V and 2350fclk, 1250 SocV and lots of other settings. And as shown 520 watts. Even more in timespy.


Hmm, I've got two 8-pin PCIe connectors. I think 520W will be tough to feed; I'm using 370 now.
Will play with it when my Eiswolf 2 arrives.

Thanks for mentioning the most important mpt settings also to CS9K.


----------



## J7SC

Bobb3rdown said:


> View attachment 2533775
> 
> View attachment 2533774
> 
> 
> What's everyone's thoughts on this mount. Factory paste. Now the real question kryoknaut or gc extreme? Have both at my disposal. Have new pads as well but unsure if I should bother.


The mount looks decent enough (mine had a blank spot with a thumbprint on it  ). I also recommend Gelid GC-Extreme as it is a bit thicker, noting that I'm not really partial to any brand (per below); it just depends on the exact piece of equipment to be cooled, and even its mounting orientation. Kryonaut is fine also, but it seems not to last as long re: moisture, and displays easier pump-out, at least IMO.

While you're having the card apart, you might also want to consider TG 10 thermal putty for the VRAM (Digi-key just got another recent shipment late last week)...it 'conforms' very easily to the available space, nooks and crannies. With thermal putty and an additional heatsink (below), I dropped all temps significantly. For power delivery, I used the correct-thickness Thermalright thermal pads.


----------



## Bobbydo

Hello guys, can you please help me? I just finished building my first custom loop, and it looks like air is trapped inside the GPU waterblock. I can't seem to get it out. I tried shaking it, putting the case sideways, etc...

There's a big bubble over the GPU chipset and near the water inlet. I'm afraid I might have already broken my GPU.


----------



## CS9K

Bobbydo said:


> Hello guys. Can you please help me. I just finished building my first custom loop. And looks like air is trapped inside. GPU waterblock. I can seem to get it out. I tried shaking it and putting the case sideways etc ...
> 
> Big Bubble in GPU chipset and near the water entrance. I'm afraid I might break my gpu already.


_Some_ air inside of your loop is okay. To get out most of the bubbles, turn your pump to full speed, and gently tilt your case a few degrees side to side and back and forth. 

The remaining bubbles that are stuck to various surfaces around your loop, will diffuse into the water with time, don't worry too much about them.


----------



## Bobbydo

CS9K said:


> _Some_ air inside of your loop is okay. To get out most of the bubbles, turn your pump to full speed, and gently tilt your case a few degrees side to side and back and forth.
> 
> The remaining bubbles that are stuck to various surfaces around your loop, will diffuse into the water with time, don't worry too much about them.


I did that but the bubble near the chipset ain't moving.


----------



## CS9K

Bobbydo said:


> I did that but the bubble near the chipset ain't moving.


Tilt your case "back" onto the back feet, and keep tilting, jiggle the case _carefully_ to see if you can encourage it to move on. I wouldn't tilt the case 90 degrees back, but go at least 45 degrees back with the pump on full. If you have the case panel off, have someone else, or yourself, gently tap on your GPU to see if the bubble moves.


----------



## Bobbydo

CS9K said:


> Tilt your case "back" onto the back feet, and keep tilting, jiggle the case _carefully_ to see if you can encourage it to move on. I wouldn't tilt the case 90 degrees back, but go at least 45 degrees back with the pump on full. If you have the case panel off, have someone else, or yourself, gently tap on your GPU to see if the bubble moves.



Nothing. 😑


----------



## J7SC

Bobbydo said:


> View attachment 2533785
> 
> View attachment 2533784
> 
> 
> Hello guys. Can you please help me. I just finished building my first custom loop. And looks like air is trapped inside. GPU waterblock. I can seem to get it out. I tried shaking it and putting the case sideways etc ...
> 
> Big Bubble in GPU chipset and near the water entrance. I'm afraid I might break my gpu already.


I'm having the exact same problem with the lower right of my new dual GPU build (below). It won't really harm anything, but it is kind of annoying. What's worse, I got rid of it, but it is baaaack, not least because the loops are long, rad space is very big, and the tubes have up-and-down turns. So, rinse and repeat:

I carefully dismounted the GPU and turned it just a few degrees over horizontal, watching the bubble disappear, taking great care not to twist the fittings or tubes to avoid any leaks / spillage (have done_ that_ before, with liquid running in between the block, PCB and backplate ). I also use a separate system's Molex to power the loop in question, so no actual power to the mobo, GPU, etc. (the latter should have the PCIe power removed). Rinse and repeat...

BTW, it can take a week or so before all the bubbles are truly out of the rest of the system...


----------



## Bobbydo

J7SC said:


> I'm having the exact same problem with the lower right of my new dual GPU build (below). It won't really harm anything, but it is kind of annoying. What is worse, I got rid of it, but it is baaaack, not least as the loops are long, rad space is very big and tubes have up-and-down turns. So rinse and repeat:
> 
> I carefully dismounted the GPU and turned it just a few degrees over horizontal, watching the bubble disappear - taking great care not to twist the fittings or tubes to avoid any leaks / spillage (have done_ that_ before, w/ liquids running in-between the block, PCB and back-plate ). I also use a separate system's Molex to power the loop in question, so no actual power to the mobo, gpu etc (the latter should have the PCIe removed). Rinse and repeat...
> 
> BTW, it can take a week or so before all the bubbles are truly out o the rest of the system...
> 
> View attachment 2533787


Thanks. I will try and see.


----------



## J7SC

Bobbydo said:


> Thanks. I will try and see.


...Air bubbles tend to be more of a tricky issue with vertically mounted GPUs, where even tilting and gently shaking the system (the usual fix) won't necessarily work; the GPU has to be positioned just a bit _over its horizontal_ plane to dislodge it.

Good luck in your 'operation bubble deflation'.


----------



## Bobb3rdown

Dropped my junction temp by 10C in AC Valhalla, even running at a lower fan speed. Was running a static 65% and hitting 82C; now sitting at 50% static and at 72C with a 60C edge temp. Will play with Fire Strike tonight, since I know what it hits in that.


----------



## ZealotKi11er

6900 XT LC @ 2800/2300 crushes a 3080 TUF at +125/+500 in BF2042. One gets 60-70 fps, the other gets 90-100 fps.


----------



## schidddy

I set my TDC to 400 but it just doesn't want to draw any more than 350W. Any suggestions? SoC is at 60, the clock is around 2708MHz at its highest, with the AMD driver set to something like 2760MHz. Temps are 52C edge and 75C junction on a Liquid Devil 6900XT.


----------



## lawson67

I just fitted a new motherboard today, a Dark Hero. Fired it all back up, everything fine for an hour or more, then a loud bang from the PSU and smoke from the PSU, then nothing. Hope it hasn't blown everything up, like the GPU, CPU, RAM, etc. Don't know what went wrong; you can't really wire anything up wrong, it's all keyed plugs. The only thing I can think of that could have caused a problem was that I put a 3-way fan splitter on the case fan hub, but that's wired to SATA anyhow and surely not going to overload the PSU. The only other thing that could have happened was water dropping into the PSU when I drained the loop. Really fed up anyhow.


----------



## jonRock1992

lawson67 said:


> I just fitted a new motherboard today, a Dark Hero. Fired it all back up, everything fine for an hour or more, then a loud bang from the PSU and smoke from the PSU, then nothing. Hope it hasn't blown everything up, like the GPU, CPU, RAM, etc. Don't know what went wrong; you can't really wire anything up wrong, it's all keyed plugs. The only thing I can think of that could have caused a problem was that I put a 3-way fan splitter on the case fan hub, but that's wired to SATA anyhow and surely not going to overload the PSU. The only other thing that could have happened was water dropping into the PSU when I drained the loop. Really fed up anyhow.


I'm so sorry to hear that. I wish you the best of luck; hopefully it's just a PSU issue. I've run a two-way splitter on the CPU fan header, and I've run a three-way splitter on the high-amp fan header on the Dark Hero, and didn't have any issues with either. I'm running a fan hub now, though. Hopefully everything is well with your components. PM me if you need help with anything.


----------



## lawson67

jonRock1992 said:


> I'm so sorry to hear that. I wish you the best of luck. Hopefully it's just a PSU issue. I've ran a two-way splitter on the cpu fan header, and I've run a three-way splitter on the high-amp fan header on the dark hero. Didn't have any issues with that. I'm running a fan hub now though. Hopefully everything is well with your components. PM me if you need help with anything.


I had the 3-way splitter on the case fan hub, which is powered by SATA. It has 6 outputs and I tagged the 3-way splitter onto one of them, but I can't believe that had anything to do with it. The only thing I can think of is that some water got past the paper towels and dropped into the PSU when I drained the loop, but I'd have thought it would have blown up at first boot rather than after an hour. Strange indeed. Amazon can't get me a replacement PSU until Tuesday, so I won't know how bad it all is until then.


----------



## schidddy

To be honest, my LD 6900XT does a max of 2740 set in the driver, resulting in about 2660MHz in-game and a power draw of 330 watts.
In MPT I set PL to 400, TDC to 400, and SoC to 60.
Anything higher than 2740MHz in the driver gives me a driver crash.
Any suggestions to get the card a bit further? It seems definitely below average.
Temps are 52C edge and 82C junction (that seems a bit high for a Liquid Devil?).


----------



## Bobbydo

J7SC said:


> ...Air bubbles tend to be more of a tricky issue w/ vertically mounted GPUs where even titling and gently shaking the system (the usual fix) won't necessarily work; the GPU has to be positioned just a bit _over its horizontal_ plane to dislodge.
> 
> Good luck in your 'operation bubble deflation'.


Thanks, but I couldn't get rid of it. I even disassembled it and redid it, but same issue. My card is a 6900 XT Liquid Devil Ultimate, and the junction temperature seems to hit around 87C max, but the GPU temp is just 53C max under stress and load. How can I fix the hotspot issue? I had a 6900 XT Sapphire Extreme and an ASUS 6900 XT ROG Strix LC Top Gaming, and both had a hotspot temp of 75C max.


----------



## ZealotKi11er

Bobbydo said:


> Thanks but I couldn't get rid of it. I even disassembled it and redid it again. But same issue. My card is 6900 xt liquid devil ultimate. And junction tempreture seems to hit around 87c max. But GPU temp is just 53c max under stress and load. How can I fix hotspot issue? I had 6900 xt sapphire extreme and Asus 6900 xt Rog Strix LC Top Gaming. And both had hotspot tem 75c max.


Might have to repaste.


----------



## Bobbydo

ZealotKi11er said:


> Might have to repaste.


Has anyone done this to this card?


----------



## SoloCamo

Blameless said:


> CPU/GPU usage is generally only reported in the sense of non-clock gated cycles. You have 100% of cycles doing something and still have a huge spread in power consumption/current draw across different loads.
> 
> From the lightest load games I can think of to the highest the spread is more than double for the latter. Synthetic loads can almost double that again, if tuned for the hardware. All will show "100%" load because all cycles are being utilized, but a game that isn't very demanding on the architecture might pull 180w with no power limit, while OCCT or the Ray Tracing Feature test (before they capped it in drivers) could pull ~500w at the same settings.
> 
> Basically, BFV is doing more work per clock than Forza. This is not unusual.
> 
> Personally, I calibrate my 24/7 OCs to a reasonable worst case load. Time Spy Extreme graphics test #2 is usually good for this. If it doesn't power or temperature throttle here, it won't hit the limits of any game I've ever heard of. Most games are about 70-100w less, at the same clocks/(pre-droop) voltage.


Fair enough, thanks for the explanation. Going to continue to tinker with it to optimize for noise and efficiency. I seem to have a decent setup going right now, and even at stock, the performance is such a jump for me and the games I play that it's going to be a while before I need any extra headroom that higher clocks would provide.


----------



## J7SC

Bobbydo said:


> Thanks but I couldn't get rid of it. I even disassembled it and redid it again. But same issue. My card is 6900 xt liquid devil ultimate. And junction tempreture seems to hit around 87c max. But GPU temp is just 53c max under stress and load. How can I fix hotspot issue? I had 6900 xt sapphire extreme and Asus 6900 xt Rog Strix LC Top Gaming. And both had hotspot tem 75c max.


Re. the bubble thing, you might as well wait for a week plus w/ daily running in the _final _loop setup as said GPU bubble will come back until all the much smaller / tiny bubbles in the liquid stream have dissipated (they do look cool and tell you about flow speed) . As I mentioned before, it took several tries on one machine (the other, the 6900XT shown here, is ok now). Waiting to fix the GPU bubble until all else is finalized also makes sense if you end up repasting and/or otherwise open the loop again.

Re. hotspot and junction, it also comes down to the rest of your loop, i.e. rad size, pumps, etc. I just ran a few tests on the 'work' system that has the 6900XT in the work-and-play dual build, and temps are per below. My 6900XT is a 3x8-pin regular 'XTX', and the run below has a slight undervolt and a not-quite-maxed MPT PL (all else in MPT on default). As mentioned earlier, I used Gelid GC-Extreme for the die, thermal putty for the VRAM and Thermalright pads for the power delivery, and added a big heatsink on the back, which definitely helps too: the backplate with heatsink now has a 'blob' of thermal putty on the back of the GPU which transfers heat well. For temp context, this CPU/GPU loop has 1200x60mm+ of rad space, dual D5 pumps and Arctic P12s in push-pull. Ambient was 24C.


----------



## Bobbydo

J7SC said:


> Re. the bubble thing, you might as well wait for a week plus w/ daily running in the _final _loop setup as said GPU bubble will come back until all the much smaller / tiny bubbles in the liquid stream have dissipated (they do look cool and tell you about flow speed) . As I mentioned before, it took several tries on one machine (the other, the 6900XT shown here, is ok now). Waiting to fix the GPU bubble until all else is finalized also makes sense if you end up repasting and/or otherwise open the loop again.
> 
> Re. hotspot and junction, it also comes down to the rest of your loop, ie. rad size, pumps etc. I just ran a few tests on the 'work' system that has the 6900XT in the work and play dual build, and temps are per below. My 6900XT is a 3x8 pin regular 'XTX', and the run below has a slight undervolt and not-quite-maxed MPT PL (all else in MPT on default). As mentioned earlier, I did use Gelid GC Extreme for the die, thermal putty for the VRAM and Thermalright pads for the power delivery - and added a big heatsink on the back which definitely helps also: The backplate w/heatsink now has a 'blob' of thermal putty on the back of the GPU which transfers heat well. For temp context, this CPU/GPU loop has 1200x60+ rad space, dual D5 pumps and Artic P12s in push-pull. Ambient was 24 C.
> View attachment 2533802


What are the temps under load? Mine is also like this, but under stress tests the GPU temp doesn't go above 56C while the hotspot goes up to 87C.

I have two 360 rads: one goes to the GPU, then from the GPU to the second rad, then to the CPU, and so on.


----------



## CS9K

Bobbydo said:


> What are the temps under load? Mine is also like this but under stress test GPU temps Doesn't go above 56c. But hotspot goes up to 87c.
> 
> I have 2 rads 360. 1 goes to GPU and from GPU to 2nd rad and then to cpu and so on.
> 
> View attachment 2533803


You have plenty of radiator and your tubing looks good. While your hotspot Delta-T is slightly higher than most of us with custom water loops see, it isn't the worst we've seen here in this thread.

Don't worry _too_ much about your hotspot temperature until it gets up to 100C. If it ever does get up to that temperature, then you will want to think about re-applying thermal paste under the water block. I wouldn't want you to consider disassembling the GPU, since it came with the water block installed from the factory. I don't know how warranty replacements work in your country, but in some places removing the heatsink means no warranty, which is a shame :<


----------



## Bobbydo

CS9K said:


> You have plenty of radiator and your tubing looks good. While your hotspot Delta-T is slightly higher than most of us with custom water loops see, it isn't the worst we've seen here in this thread.
> 
> Don't worry _too_ much about your hotspot temperature until it gets up to 100C. If it ever does get up to that temperature, then you will want to think about re-applying thermal paste to the water block. I don't want to you consider disassembling the GPU since it came with the water block installed from the factory. I don't know how warranty replacements work in your country, but in some places, removing the heatsink means no warranty, which is a shame :<


 I really appreciate the help. Thank you 😀


----------



## J7SC

Bobbydo said:


> What are the temps under load? Mine is also like this but under stress test GPU temps Doesn't go above 56c. But hotspot goes up to 87c.
> 
> I have 2 rads 360. 1 goes to GPU and from GPU to 2nd rad and then to cpu and so on.
> 
> View attachment 2533803


As @CS9K already suggested, nothing stands out about your loop that screams big 'problem' (as an aside, I'm somewhat fascinated by what looks like 3 sticks of system RAM). It is also very much advisable to keep warranty issues in mind before opening a card up (the impact on warranty varies somewhat by jurisdiction). I'm not sure about your card, but XTXH cards generally seem to have a slightly lower 'max hotspot' (95C?) compared to regular XTX cards (around 110C), but I wouldn't worry unless you persistently hit the 90s+. Just keep an eye on it if you add much higher power limits via MPT, because that really adds a lot of heat. The air bubble will only have a negligible effect.

It's also worth noting that 6900 XTs generally run a bit hotter on GPU/hotspot than, for example, their 3090 counterparts, which in turn downclock more per degree via their boost/throttle algorithms, in case you've had a lot of Nvidia cards. Per below, the cards are next to each other in the same case and have very similar voluminous custom loops, thermal putty applications, heatsinks, etc. I switched the 3090 vBIOS from 520W to 450W to get a more comparable reading under similar loads (Superposition, 3DM PR, etc.). The 3090 does have 24GB of double-sided GDDR6_X _VRAM, which does get hotter than even equivalent GDDR6.


----------



## Kawaz

Skinnered said:


> Hmm, ive two 8 pins pcie connectors. I think 520 W will be tough to feed. Using 370 now.
> Will play with it when my Eiswolf 2 arrives
> 
> Thanks for mentioning the most important mpt settings also to CS9K.


My card is also a dual 8-pin, an XFX Merc 319 Black Limited. But still, 520 watts over dual 8-pin PCIe is still pretty modest.


----------



## ZealotKi11er

Kawaz said:


> My card is also a dual 8pin. Its a XFX Merc 319 Black Limited. But still 520 watts on pcie dual 8pins is still pretty modest.


Yeah, not sure why with Nvidia they limit PCIE 8-PIN to 150W.


----------



## Skinnered

If nobody has experience with the Eiswolf 2, can anyone say how much more OC they achieved when they went to water, custom or AIO, on a reference RX 6900 XT?
I want an idea of whether it's useful for increased performance, say 10+%.


----------



## Skinnered

Kawaz said:


> My card is also a dual 8pin. Its a XFX Merc 319 Black Limited. But still 520 watts on pcie dual 8pins is still pretty modest.


Ah, that's good to know.


----------



## bloot

J7SC said:


> As @CS9K already suggested, nothing stands out about your loop which screams big 'problem' (as an aside, I'm somewhat fascinated by what looks like 3 sticks of system RAM). It is also very much advisable to keep warranty issues in mind before opening a card up (impact on warranty varies somewhat by jurisdiction). I'm not sure about your card, but XTXH cards generally seem to have a slightly lower 'max hotspot' (95C ?) compared to regular XTH cards (110C + -), but I wouldn't worry unless you persistently hit the 90s+. Just keep an eye on it if you add much higher power limits via MPT because that really adds a lot of heat. The air bubble will only have a negligible effect.
> 
> It's also worth noting that 6900XTs generally run a bit hotter on GPU / Hotspot than for example their 3090 counterparts, which in turn downclock more per temp unit via boost/throttle algorithms, in case you have had a lot of NVidia cards.. Per below, the cards are next to each other in the same case and have very similar voluminous custom loops, thermal putty applications, heatsinks etc. I switched the 3090 vBios from 520W to 450W to get a more compatible reading under similar loads (Superposition, 3DM PR etc). The 3090 does have 24 GB of double-sided GDDR6_X _VRAM which does get hotter than even equivalent GDDR6.
> View attachment 2533822


6900XT power reported in monitoring software is only for the core+mem; you have to add another ~20% for the total board power. In this case it would be around 520W for the 6900XT (GPU ASIC Power + ~20%).
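For anyone wanting to apply the rule of thumb, a quick sketch; note the +20% overhead is a community estimate, not an official AMD figure, and it varies by card and load:

```python
# RDNA2 sensors report "GPU ASIC Power" (core + memory); total board power
# also covers VRM losses, fans, RGB, etc. The +20% overhead here is a rough
# community rule of thumb, not an official figure.

def estimate_board_power(asic_power_w: float, overhead: float = 0.20) -> float:
    """Estimate total board power from the reported ASIC power."""
    return asic_power_w * (1 + overhead)

print(estimate_board_power(433))  # ~520 W estimated total board power
```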


----------



## ZealotKi11er

bloot said:


> 6900XT power reported on monitoring software is only for the core+mem, you have to add another 20% for the total board power, in this case it would be aorund 520W for the 6900XT (GPU Asic Power +~20%)


It's the full power, but measured after the VRM. What you need to do is take VRM efficiency into account.


----------



## jellis142

Should be picking up a Nitro+ SE later on today, but had a question.

I have scoured the internet and, so far, found no straightforward answer. I have an X570 Steel Legend, but as far as I'm aware, the slot I had measured out turned out to be only x4 (Gen 4). I'm guessing this will have a performance hit? It's been a while, but if I have to, I can move to the first slot. It just won't be as _aesthetic_ as I had in mind.

For some reason, it's just frustrating that there's no easily spelled out answer to "how much bandwidth does this card need to avoid a bottleneck over the slot?"

*Happy Monday regardless!*
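The bandwidth math itself is easy to spell out; here's a rough sketch with theoretical per-direction numbers (the real-world impact depends on the game, and published PCIe scaling tests on cards of this class typically show only a low-single-digit-% hit at Gen4 x4):

```python
# Theoretical one-way PCIe bandwidth. Gen3/Gen4 use 128b/130b encoding;
# the per-lane rate is given in GT/s (8 GT/s for Gen3, 16 GT/s for Gen4).

def pcie_gbs(rate_gt: float, lanes: int) -> float:
    """One-way bandwidth in GB/s for a given per-lane rate and lane count."""
    return rate_gt * (128 / 130) / 8 * lanes

gen4_x4 = pcie_gbs(16.0, 4)    # ~7.9 GB/s
gen4_x16 = pcie_gbs(16.0, 16)  # ~31.5 GB/s

# Gen4 x4 carries the same bandwidth as Gen3 x8, a config many high-end
# cards have been benchmarked at with only a small performance loss.
print(f"Gen4 x4: {gen4_x4:.1f} GB/s, Gen4 x16: {gen4_x16:.1f} GB/s")
```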


----------



## Bobbydo

Guys, mine just hit a 100C hotspot temp. Is there any video of repasting the 6900 XT Liquid Devil Ultimate?


----------



## zGunBLADEz

ZealotKi11er said:


> Yeah, not sure why with Nvidia they limit PCIE 8-PIN to 150W.


It's the standard: cable lengths and configurations, mostly current/resistance fire-hazard limits for a universal config, lol.

If you have shorter cable runs, or low-gauge "thick" wire, and you're pushing current over "stock spec" with an unlocked BIOS and such, then as long as the PSU can handle it on the rail, and the cables can too, you'd have zero issues; something else will give up first, lol.


----------



## 99belle99

Bobb3rdown said:


> View attachment 2533775
> 
> View attachment 2533774
> 
> 
> What's everyone's thoughts on this mount. Factory paste. Now the real question kryoknaut or gc extreme? Have both at my disposal. Have new pads as well but unsure if I should bother.


Wow that looks to be an amazing cooler. What card is it?


----------



## CS9K

zGunBLADEz said:


> standard/cable length/ configurations.. mostly because of current/resistance fire hazard universal config lol
> 
> if you have shorter runs of cables -or- low gauge "thick wire" and you pushing current over "stock spec" with unlocked bios and such
> as long the "psu" can handle it on the rail.. and the cables.... you would have 0 issues something else will give up first lol


Came to say this, @ZealotKi11er 

Even Seasonic's Titanium PSUs come with PCIe cables strung with 18ga 80C wire, which tbh is a bit of a disappointment given how overbuilt the PSU is. That said, I'm sure that wasn't an engineer's decision, but a bean-counter's. 

I made my own 16ga cables for my reference RX 6900 XT, just to feel a little bit better about things. Placebo? Probably.
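For the curious, the actual I²R difference between 18ga and 16ga is easy to estimate from the standard AWG formula. A rough sketch (the copper resistivity and the 0.6 m run length are my own assumptions for a typical modular lead, not measurements of any specific PSU):

```python
import math

COPPER_RESISTIVITY = 1.68e-8  # ohm*m, annealed copper at ~20 C

def awg_resistance_per_m(awg):
    """Per-metre resistance of a solid copper conductor of a given AWG."""
    diameter_m = 0.000127 * 92 ** ((36 - awg) / 39)  # standard AWG diameter formula
    area_m2 = math.pi * (diameter_m / 2) ** 2
    return COPPER_RESISTIVITY / area_m2

def per_wire_loss_w(total_watts, volts, n_conductors, awg, length_m):
    """I^2*R heat dissipated in each current-carrying wire of a PCIe lead."""
    amps_per_wire = total_watts / volts / n_conductors
    return amps_per_wire ** 2 * awg_resistance_per_m(awg) * length_m

# An 8-pin at its 150 W rating: three 12 V conductors, 0.6 m run
for awg in (18, 16):
    print(f"{awg}ga: {per_wire_loss_w(150, 12, 3, awg, 0.6):.2f} W per wire")
```

Fractions of a watt per wire at spec, so yes, probably placebo at 150 W. The extra margin only really starts to matter once people shove well over spec through a single connector with unlocked bioses.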


----------



## CS9K

bloot said:


> 6900XT power reported by monitoring software is only for the core+mem; you have to add another ~20% for the total board power, which in this case would be around 520W for the 6900XT (GPU ASIC Power + ~20%)


HWINFO64 reports power usage for the ASIC/core, SOC, memory and others, all individually, in _addition_ to a TGP value.


----------



## Kawaz

ZealotKi11er said:


> Yeah, not sure why with Nvidia they limit PCIE 8-PIN to 150W.


Cause it's the official spec on all 8-pins. AMD too. Both have ways around it, but it's usually harder on Nvidia; it usually takes a bios flash.


----------



## J7SC

ZealotKi11er said:


> Yeah, not sure why with Nvidia they limit PCIE 8-PIN to 150W.


*...because that's the spec*


CS9K said:


> HWINFO64 reports power usage for the ASIC/core, SOC, memory and others, all individually, in _addition_ to a TGP value.


...yup, vendors would not want to sell certified electronic equipment which is set to beyond spec (150W limit per 8 pin) for insurance reasons etc. This does not mean that you personally can't jam a heck of a lot more through it, especially w/ thicker 16 gauge and a decent PSU w/ good OCP etc.

I'm quite happy w/ my temps at the posted wattage via stress testing for the 6900XT. While there's still headroom temp-wise (plus I use 3 separate PCIe 8pin feeds from the PSU), this card is at a 24/7 max level (if not slightly beyond) I feel comfortable with. I do have MPT profiles that add more PL headroom still in case I feel the 'winter temp benching' itch .

Btw, for the Strix 3090, the EVGA KPE 520W and 1kW XOC secondary vBios do not report power consumption correctly on the Asus Strix in HWInfo etc, though they otherwise work fine. There's also a 1kW Asus XOC that reports correctly, but that one doesn't have rBAR. In any case, the 1kW rarely pulls more than 650W-700W in XOC, measured via clamps (not that that isn't plenty )


----------



## ZealotKi11er

Kawaz said:


> cause its official spec on all 8pins. Amd too. Both have ways to go around it. But its usually harder on nvidia. Usually takes a bios flash.


Yes, but they won't let people go over even if they want to... Even for the PCIe slot they don't allow the full spec, which is 75W, so my board is limited to 150+150+50 = 350W instead of 150+150+75 = 375W. What's even funnier is that MSI AB gives me the option for 117% power, but I can't even use 110%.
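Just to spell that budget math out (the 150 W per 8-pin is the spec cap being discussed; the 50 W slot cap is this particular vendor's choice, not a general rule):

```python
# Board power budget as described above: each 8-pin PCIe connector is
# capped at 150 W by spec, plus whatever the vendor allows from the
# slot (50 W on this board, vs the 75 W the slot spec permits).
def board_power_limit(n_eight_pins, slot_watts):
    return 150 * n_eight_pins + slot_watts

print(board_power_limit(2, 50))  # vendor-enforced: 350 W
print(board_power_limit(2, 75))  # full slot spec:  375 W
```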


----------



## bloot

CS9K said:


> HWINFO64 reports power usage for the ASIC/core, SOC, memory and others, all individually, in _addition_ to a TGP value.


My point is, unlike on Nvidia cards, there's no total board power monitoring value for these cards. If you set your 6900XT on default power limit, you'll see 255W monitored at most, whilst its TBP is 300W as per AMD specs. Add 18% and you'll get those 300W.
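If anyone wants to back-of-envelope it, here's a tiny sketch of that conversion (the ~18-20% overhead figure is the estimate from this thread, not an official AMD sensor value):

```python
# Estimate total board power (TBP) from the ASIC power that HWiNFO /
# Wattman report on RDNA2 cards. The overhead factor is the rough
# 18-20% discussed in this thread, not something AMD publishes per card.
def estimated_tbp(asic_watts, overhead=0.18):
    return asic_watts * (1 + overhead)

# Reference 6900 XT: 255 W reported ASIC power -> ~301 W,
# right at AMD's 300 W TBP spec
print(round(estimated_tbp(255)))
```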


----------



## J7SC

Bobbydo said:


> Guys, mine just hit 100c hotspot temp. Is there any video of repasting 6900xt liquid devil ultimate?


...is that on stock vBios (ie. no extra PL via MPT) ? If not, what is your MPT PL ? HWInfo screenshot w/ 3DM, Superposition 4K+?

also 'somewhat related':



Spoiler


----------



## CS9K

bloot said:


> My point is, unlike on Nvidia cards, there's no total board power monitoring value for these cards. If you set your 6900XT on default power limit, you'll see 255W monitored at most, whilst its TBP is 300W as per AMD specs. Add 18% and you'll get those 300W.


I promise you I'm not here to invoke XKCD, but as best I can tell, "GPU Power" is effectively TGP.

GPU is a 3070Ti FE (running Folding @ Home currently)
HWINFO64 v7.14-4610


----------



## CS9K

Kawaz said:


> cause its official spec on all 8pins. Amd too. Both have ways to go around it. But its usually harder on nvidia. Usually takes a bios flash.


Funnily enough, the 3090 FE will run both of its 8-pin PCIe power connectors at around 162W each with only the power target limit raised to its maximum, no other tinkering needed


----------



## Bobb3rdown

99belle99 said:


> Wow that looks to be an amazing cooler. What card is it?


Sapphire Toxic Air-cooled. Same cooler as the Nitro+ SE


----------



## Bobbydo

J7SC said:


> ...is that on stock vBios (ie. no extra PL via MPT) ? If not, what is your MPT PL ? HWInfo screenshot w/ 3DM, Superposition 4K+?
> 
> also 'somewhat related':
> 
> 
> 
> Spoiler


Yeah, on stock. At stock it goes up to 2764MHz by itself, and power, 428W is the last I saw.

Should I repaste it with liquid metal and change the pads too, or just repaste?

See, I just opened the Heaven benchmark and it's already gone to 105c hotspot. Less than 2 minutes.


----------



## Skinnered

Skinnered said:


> If nobody has experience with the Eiswolf 2, can anyone say how much more OC they achieved when they went on water, custom or AIO, for a reference RX 6900 XT?
> I want to get an idea of whether it's useful for increased perf, say 10+%


Can somebody answer this?
I haven't found usable info on it yet.
If reference cards get 2700MHz or more on water, it will be useful for me.


----------



## J7SC

Bobbydo said:


> Yeah in stock. In stock it goes up to 2764mhz but itself. And power. 428w I saw last.
> 
> Should I repaste it with liquid metal and change pads too or just repasting?
> 
> See, I just opened heaven benchmark and it already gone to 105c hotspot. Less than 2 minutes.
> 
> View attachment 2533915


First, make sure you know about warranty conditions in your jurisdiction re. opening the card up. Overall, liquid metal is a double-edged sword, given that you need to use conformal coating of some sort, even more so w/ a vertically mounted GPU. I would probably use Gelid GC Extreme instead.

On the pads etc, I already mentioned the TG Thermal putty (if you can get it). Else, make sure to take measurements of your existing (factory) pad thicknesses (could be different sizes for VRAM, power) then order up some Thermalright, Gelid or even Fuji pads in the right thickness. I actually use a bit of MX5 on top of the Thermalright pads on the power segment as well.



Skinnered said:


> Somebody can answer this?
> I havent found useable info on this yet.
> If refs get 2700 mhz or more on water it will usefull for me.


I've not used the Eiswolf 2 myself as I do custom loops, but if I would go for an AIO, that one would be on my short list, not least as it has quick-disconnects with which you could expand later.


----------



## kril89

So I’ll be getting my Gigabyte AORUS Waterforce tomorrow. I’ve seen people say it’s a great card and others say it’s not. I’m guessing it’s still all down to silicon lottery.

But I’m coming from a i7 4790k/Vega 64 to a 5950X/6900XT so no matter what I’ll be getting a huge FPS boost.


----------



## Bobbydo

J7SC said:


> First, make sure you know about warranty conditions in your jurisdiction re. opening the card up. Overall, liquid metal is a double-edge sword, given that you need to use conformal coating of some sort, even more so w/ a vertically mounted GPU. I would probably use Gelid GC Extreme instead.
> 
> On the pads etc, I already mentioned the TG Thermal putty (if you can get it). Else, make sure to take measurements of your existing (factory) pad thicknesses (could be different sizes for VRAM, power) then order up some Thermalright, Gelid or even Fuji pads in the right thickness. I actually use a bit of MX5 on top of the Thermalright pads on the power segment as well.
> 
> 
> 
> I've not used the Eiswolf 2 myself as I do custom loops, but if I would go for an AIO, that one would be on my short list, not least as it has quick-disconnects with which you could expand later.


I honestly don't know the thickness. Can you help please? Should I just go with 1mm or less? Also, they range from 6 to 12 W/mK.


----------



## Bobbydo

kril89 said:


> So I’ll be getting my Gigabyte AORUS Waterforce tomorrow. I’ve seen people say it’s a great card and others say it’s not. I’m guessing it’s still all down to silicon lottery.
> 
> But I’m coming from a i7 4790k/Vega 64 to a 5950X/6900XT so no matter what I’ll be getting a huge FPS boost.


It's not; I had one. They're stupid. I would rather go with Sapphire, Powercolor, Asus, or even XFX than Gigabyte.
But you might be lucky and get that 1 great card in a 100.


----------



## jonRock1992

I'm doing a small bench session right now. I'm only 7 f'ing points away from 26K GPU score 🤣

Update: Spoke too soon! The run I was doing as I made this post broke 26k!!


----------



## Skinnered

J7SC said:


> I've not used the Eiswolf 2 myself as I do custom loops, but if I would go for an AIO, that one would be on my short list, not least as it has quick-disconnects with which you could expand later.


Did you gain much OC, and is it a reference card?


----------



## J7SC

Bobbydo said:


> I honestly don't know the thickness. Can you help please? Should I just go with 1mm or less? Also there is from 6 to 12w/mk


I don't have that card, so I don't know the thickness. But it HAS TO BE the same thickness which is in there now on the stock EK block / backplate. If you cannot find out what the correct stock pad thickness is from Powercolor or EK via email, use a digital tool to measure the existing ones on there.

To have correct contact between block and die / VRAM / phases and block, you should use the specified pad thickness. The exception is thermal putty for the VRAM (or even phases) I referenced before as it automatically conforms.



Skinnered said:


> Did you gain much oc and is it a ref.,?


My 6900XT is not reference but a Gigabyte Gaming OC (XTX, 3x8 pin custom PCB) with a big custom loop. AMD cards tend to gain less w/ extra cooling than NVidia, but I did pick up a couple of speed bins - and more importantly, the ability to add much more MPT PL while dropping Hotspot (and other temps) by 30 C plus.


----------



## kril89

Bobbydo said:


> It's not; I had one. They're stupid. I would rather go with Sapphire, Powercolor, Asus, or even XFX than Gigabyte.
> But you might be lucky and get that 1 great card in a 100.


how come?


----------



## Bobbydo

kril89 said:


> how come?


It broke a few days after. Coil whine was baaaaaad.


----------



## jonRock1992

I'm done benching for the night. I'm surprised at how well my GPU does with an unlocked voltage. I went from not being able to break 25k to surpassing 26k gpu score. I ended up getting 26153 @1287mV with the latest driver. I don't want to go any higher on the voltage yet though. Huge thanks to @lawson67 and @ZealotKi11er for some good tips.

AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 7 5800X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## kril89

Bobbydo said:


> It broke few days after. Coil whine was baaaaaad.


So you had a bad card which means everyone will have a bad card. But we will see what happens with mine.


----------



## Bobb3rdown

Bobbydo said:


> Yeah in stock. In stock it goes up to 2764mhz but itself. And power. 428w I saw last.
> 
> Should I repaste it with liquid metal and change pads too or just repasting?
> 
> See, I just opened heaven benchmark and it already gone to 105c hotspot. Less than 2 minutes.
> 
> View attachment 2533915


That's pretty warm tbf. Something I would expect of an air-cooled card. I have to run around 350w at 2650mhz to get that hot on air.


----------



## bloot

CS9K said:


> I promise you I'm not here to invoke XKCD, but as best I can tell, "GPU Power" is effectively TGP.
> 
> GPU is a 3070Ti FE (running Folding @ Home currently)
> HWINFO64 v7.14-4610
> 
> View attachment 2533908


GPU Power is the total card consumption on Nvidia cards, that's correct, but you cannot measure it via software on AMD GPUs because AMD does not provide a sensor for the entire card.


----------



## ZealotKi11er

bloot said:


> GPU Power is the total card consumption on Nvidia cards that's correct, but you can not measure it via sofwtare on AMD gpus because AMD does not provide a sensor for the entire card.


For AMD it's also for the entire card. The difference is that it is measured after the VRM, not before, so you have to account for how efficient the VRMs are. AMD says the 255W card is ~300W, so ~17%.


----------



## bloot

ZealotKi11er said:


> For AMD its also for the entire card. The difference is that it is measured after the VRM not before so you have to account how efficient the VRM are. AMD says 255W card ~ 300W so ~ 17%.


So, if we get a maximum of 255W via software at stock power on a 6900/6800XT, but we know these cards are 300W, then the software does not reflect the real power consumption; you must add that 17-18% to get the real total card power. That's why I pointed it out: the 6900XT showed way higher power than the 3090 in J7SC's captures, whilst the 3090 showed its real power consumption and the 6900XT did not.


----------



## bloot

You can read more about the misleading software power reporting in Igor's Lab 6800XT review: The big Radeon RX 6800 (XT) overclocking and mod guide | Community | Page 4 | igor'sLAB (igorslab.de)


----------



## J7SC

bloot said:


> So, if we get a maximum of 255W via software at stock power on an 6900/6800XT, but we do know these cards are 300W, then the software does not reflect the real power consumption, you must add that 17-18% to get the real total card power. That's why I pointed it, because the 6900XT had way higher power than the 3090 in J7SC captures, whilst the 3090 showed the real power consumption, the 6900XT did not.


...that makes me even happier re. the new custom cooling system's performance. I will dial the MPT PL back a bit though, for an 'effective' 440 W, because even at stock on air it ran a bit hotter than the 3090, also at stock on air. All that aside, the next time I bench, I'll put clamps on to measure things.


----------



## ZealotKi11er

3090 does use about the same power as 6900 XT at stock. The difference is G6X is very power hungry. If Nvidia had gone with 16-18Gbps G6, they would have had 3090 at 280-300W.


----------



## J7SC

ZealotKi11er said:


> 3090 does use about the same power as 6900 XT at stock. The difference is G6X is very power hungry. If Nvidia had gone with 16-18Gbps G6, they would have had 3090 at 280-300W.


Neither is a contender for 'lowest power consumption of the year' awards...I dug this GPUz up, from the first few days when I got the Strix 3090 (early February this year)...this was on air, stock bios, max PL > almost 500 W


----------



## CS9K

ZealotKi11er said:


> 3090 does use about the same power as 6900 XT at stock. The difference is G6X is very power hungry. If Nvidia had gone with 16-18Gbps G6, they would have had 3090 at 280-300W.


This hypothetical 3090 would probably have been so catastrophically memory-bandwidth-limited that the poor thing wouldn't have been able to touch the RX 6900 XT


----------



## Bobbydo

I didn't use any tools at all to make the tubes. I used the stove for heat and my hands, nothing else, and honestly they turned out perfect. My question is: why do people buy all those tools?


----------



## kairi_zeroblade

Bobbydo said:


> I didn't use any tools at all to make tubes. I used the Stove for the heat and my hands nothing else and honestly they turned out perfect. My question is why people buy all those tools?


wow looks clean, I thought those were soft tubes, because of the bends..nicely done!!


----------



## J7SC

Per earlier posts on the subject of actual / effective PL-W, I dialed the 24/7 MPT setting back by about 60 W (the new one is below on the left, after four consecutive SuperPos 4K runs, ambient again at 24 C)... scores actually improved a bit, but within margin of error...


----------



## The EX1

ZealotKi11er said:


> 6900 XT LC @ 2800/2300 crushes 3080 TUF +125/+500 in BF2042. One get 60-70 fps, other gets 90-100 fps.


What happens when you drop the memory to regular speeds like 2100-2150? Does the driver let you set it that low? I guess the timings on the 18Gbps chips would hold it back. I’m curious to see how much of that gaming performance is coming from the memory.



schidddy said:


> To be honest, my LD 6900XT does max 2740 set in the driver, resulting in about 2660MHz in-game and a power draw of 330W
> MPT set PL to 400, TDC 400 and SOC to 60
> Anything higher than 2740MHz in the driver gives me a driver crash.
> Any suggestion to get the card a bit further? Seems definitely below average
> Temps are 52° edge and 82° junction (that seems a bit high for a Liquid Devil?)


 Unless you have the Red Devil Ultimate with XTXH silicon, 2740 is on the high side for regular XTX cards.


----------



## schidddy

The EX1 said:


> Unless you have the Red Devil Ultimate with XTXH silicon, 2740 is on the high side for regular XTX cards.


Thanks EX1, 2740MHz is meant to be the driver setting; the card reaches as high as 2660MHz core clock while gaming and takes 330 to 350W according to Wattman. It's a normal Liquid Devil 6900XT. 
So do you think it's no use to repaste with liquid metal to get the high junction temperatures down? It's about 84 while edge temp is max 52 at that setting.


----------



## ZealotKi11er

The EX1 said:


> What happens when you drop the memory to regular speeds like 2100-2150? Does the driver let you set it that low? I guess the timings on the 18Gbps chips would hold it back. I’m curious to see how much of that gaming performance is coming from the memory.
> 
> 
> Unless you have the Red Devil Ultimate with XTXH silicon, 2740 is on the high side for regular XTX cards.


Going from 2300 to 2400 with fast timings did not show much gain, so it's probably more core-bound.


----------



## zGunBLADEz

CS9K said:


> Came to say this, @ZealotKi11er
> 
> Even Seasonic's Titanium PSU's come with PCIe cables strung with 18ga 80C cable, which tbh is a bit of a disappointment given how overbuilt the PSU is. That said, I'm sure that wasn't an engineer's decision, but a bean-counter's.
> 
> I made my own 16ga cables for my reference RX 6900 XT, just to feel a little bit better about things. Placebo? Probably.


Yeah, I don't understand how high-end PSUs end up using those types of cables. They're charging a premium price as it is, so why cheap out on some wire lol...
It also doesn't help that users add extensions and more length to an existing not-so-quality cable and run e.g. a 3090 with unlocked bioses and such. More cable added, more resistance; more resistance, more heat. *Even if the extension is "thicker wire", you're still adding more resistance to the already so-so factory cable lol...*

Also, the PSU doesn't care what it throws through the rail(s). It has its own limit on 12V+ line wattage, and most high-end PSUs nowadays will throw all they can through one single port, they don't even care lol. It will try to supply the power demanded by the "asking source" to the very limit of its capabilities until one of two things happens: OCP trips, or it goes on fire lol. The limits are imposed somewhere else, in this case by the GPU maker or the universal PCIe spec sheet, or else you're running something you're not supposed to be running ("XOC bios", "hard modded", "tweaked") to bypass the imposed power limits.


----------



## zGunBLADEz

btw guys, old-school recommendation... I don't pay attention to the hotspot at all.
Same as with VRMs back then, or any chip: the actual temperature inside can be up to 20c+ over the reading we get, but you couldn't read it before, so it didn't bother you. Now it does, because it's there in your face... also, that reading itself is spotty/sporadic.

Even so, my GPU fully loaded stays in the low 30s in regular usage, and hotspot is usually +10c-ish more. Still don't care about that reading lol


----------



## jonRock1992

zGunBLADEz said:


> yeah i dont understand in high end psus how they end using those type of cables. they charging premium price as it is already why cheap out on some wire lol...
> it also dont help users are adding extensions and more length to an existing "not so quality cable" and do 3090 IE with unlocked bioses and such. more cable added more resistance, more resistance more heat. *even so the extension is "thicker wire" you still are adding more resistance to the already so so cable from factory lol...*
> 
> Also, the PSU doesnt care what he throws thru the rail"s" "he have his own limit on the 12v+ line wattage for example most high end psus now a days will throw all they can thru one single port they dont even care lol" he will try to supply the demand for power to the "asking source" to his very capabilities until 2 things happens ocp or goes on fire lol.. the limits are put somewhere else in this case the gpu maker or universal pci ex spec sheet or you running something you are not supposed to be running "XOC bios" "hard modded" "tweaked" to bypass power imposed ..


Screw overpriced PSUs. I'm literally using a $150 1000W Aresgame PSU, and I'm not having any problems. I'm even overclocking both CPU and GPU and running a 600W GPU power limit in Timespy. Only time will tell how well it holds up lol.


----------



## CS9K

jonRock1992 said:


> Screw overpriced PSU's. I'm literally using a $150 1000W Aresgame PSU, and I'm not having any problems. I'm even overclocking both CPU and GPU and running 600W GPU power limit in Timespy. _Only time will tell how well it holds up lol._


And there's why I _do_ worry about Power Supply quality. Cheap internal components can wreck your face down the road, despite appearing to work properly in the meantime.

That doesn't mean everyone should go out and buy Titanium rated PSUs with 12-year warranties... No, not at all.

But in a modern gaming rig with a _really_ thirsty GPU, one should at least get a Gold rated PSU from a brand and/or OEM with a good reputation, with _at least_ a 10-year warranty.

In the meteorology field, we call it "low probability; high impact". Using a PSU with cheap/sub-par internal components is going to be fine... riiiight up until it _isn't_. I believe the modern version of this goes something like "f*ck around and find out". I don't mess around when it comes to $1000+ power-hungry components. Your PSU is the heart of the system; pray that your PC doesn't have a heart attack.



zGunBLADEz said:


> btw guys old school recommendation.. i dont pay attention to the hot spot at all..
> same as the vrms back then or any chip... actual reading that we get inside it can be up to 20c+
> but you cant read it before so it didnt bother you that much now it does bcuz is there in your face...also that reading itself is spotty/sporadic
> 
> even so my gpu fully loaded stays in low 30s on regular usage and hot spot usually is like +10c-ish more still dont care about that reading lol


I agree. I notice that people get WAY too wrapped around the axle about hotspot temps, both the max temp and the Delta-T over "core" temp.

With a properly working GPU, hotspot temp should really only be worried about if one has a higher-than-average Delta-T, in which case, RMA or re-paste the GPU and move on with life
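That rule of thumb, written down (the 30 C cutoff is just this thread's ballpark for a "higher-than-average" edge-to-hotspot gap, not an AMD figure):

```python
# Quick sanity check on the edge-to-hotspot gap. Posts in this thread
# put a healthy RDNA2 delta at roughly 20-30 C; the 30 C default cutoff
# here is an assumption, not an official threshold.
def hotspot_delta_ok(edge_c, hotspot_c, max_delta=30):
    return (hotspot_c - edge_c) <= max_delta

print(hotspot_delta_ok(55, 75))  # 20 C gap -> fine
print(hotspot_delta_ok(52, 84))  # 32 C gap -> mount could be better
```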


----------



## SoloCamo

kril89 said:


> So I’ll be getting my Gigabyte AORUS Waterforce tomorrow. I’ve seen people say it’s a great card and others say it’s not. I’m guessing it’s still all down to silicon lottery.
> 
> But I’m coming from a i7 4790k/Vega 64 to a 5950X/6900XT so no matter what I’ll be getting a huge FPS boost.


As someone who recently went from a 4.6ghz 4790k paired w/ 32gb c10 DDR3 2400 and a Radeon VII (Vega 64 before that) you are in for quite an experience. I knew the 6900xt would be a big jump for my 1800mhz locked / 1200mhz HBM2 Radeon VII but it was huge. And that's not even considering the jump I got going to a 10900 w/ my Radeon VII even at 4k. And I previously thought my Radeon VII was still a pretty huge jump from my Vega 64 with the 4790k.



jonRock1992 said:


> Screw overpriced PSU's. I'm literally using a $150 1000W Aresgame PSU, and I'm not having any problems. I'm even overclocking both CPU and GPU and running 600W GPU power limit in Timespy. Only time will tell how well it holds up lol.


For now, but I've also had them last a year and then take out a whole system out of the blue. Cheap 1000W PSUs are built like decent 600-700W PSUs at best.

Right now I'm running a EVGA SuperNOVA 750 P2 (platinum rated) which makes me nervous on a 10900 (power unlocked) and oc'ed 6900xt but it seems to run cool so far.


----------



## Bobbydo

SoloCamo said:


> As someone who recently went from a 4.6ghz 4790k paired w/ 32gb c10 DDR3 2400 and a Radeon VII (Vega 64 before that) you are in for quite an experience. I knew the 6900xt would be a big jump for my 1800mhz locked / 1200mhz HBM2 Radeon VII but it was huge. And that's not even considering the jump I got going to a 10900 w/ my Radeon VII even at 4k. And I previously thought my Radeon VII was still a pretty huge jump from my Vega 64 with the 4790k.
> 
> 
> 
> For now, but I've also had them last a year and take out a whole system out of the blue. Cheap 1,000w psu's are built like decent 600-700w psu's at best.
> 
> Right now I'm running a EVGA SuperNOVA 750 P2 (platinum rated) which makes me nervous on a 10900 (power unlocked) and oc'ed 6900xt but it seems to run cool so far.


Hi guys. I replaced the thermal paste and the pads on my 6900 XT Liquid Devil Ultimate, but the temperatures are still so high: GPU around 80c, and junction temperature goes up to 110c. What am I doing wrong? CPU temps never go above 65c. The pump is running at full speed, like 5500rpm, with 3 fans at 1400rpm and 3 at 1500rpm on two 360 radiators. Is it the fans or the pump?

Did I put the GPU tubes right? Not wrong?


----------



## ZealotKi11er

80C does seem very hot. Should be 50-60C +20-30 for hotspot.


----------



## Bobbydo

ZealotKi11er said:


> 80C does seem very hot. Should be 50-60C +20-30 for hotspot.


I know. The lowest temp now is 30, but usually 33-39c idle. The CPU is normal though; max temperature is 70 under intense stress, for like 5 minutes.


----------



## CS9K

Bobbydo said:


> Hi Guys. I replaced thermal paste and pads too for my 6900 xt liquid devil ultimate. But the tempretures are still so high. GPU Around 80c and junction tempreture goes up to 110c. What am I doing wrong? CPU temps never go above 65c. The pump is Tuning at full speed. Like 5500rpm. 3 fans at 1400rpm and 3 at 1500rpm. 2, 360 radiators. . Is it the fans or the pump?
> 
> Did I put the GPU tubes right? Not wrong?
> View attachment 2534046


Your problem may be that the thermal pads you used are too thick, or are not as _squishy_ as the pads that were originally on the GPU. Grab a torch/flashlight and peek between the block and the PCB, to see if your GPU core is making solid contact with the block. 

You would not be the first person to run into mounting issues due to too-thick thermal pads, if that is indeed the cause. 

For example: With the EK block on the reference RX 6900 XT, almost all pads are too thick and too firm to use; in that case I suggest that people use the EK pads that came with the block, and to order an extra set, as they are thin enough that you would not want to re-use them like you can other pads.


----------



## J7SC

Bobbydo said:


> Hi Guys. I replaced thermal paste and pads too for my 6900 xt liquid devil ultimate. But the tempretures are still so high. GPU Around 80c and junction tempreture goes up to 110c. What am I doing wrong? CPU temps never go above 65c. The pump is Tuning at full speed. Like 5500rpm. 3 fans at 1400rpm and 3 at 1500rpm. 2, 360 radiators. . Is it the fans or the pump?
> 
> Did I put the GPU tubes right? Not wrong?
> View attachment 2534046


...kind of hard to tell from the photo angle, but your GPU 'inlet' is the one on the left from this perspective (it is on most blocks)? Also, you might want to add more fluid to your reservoir, as its level currently sits below the GPU inlet / outlet plane. That won't fix all your temp issues, but it will help w/ bubble control later.

...next, you mentioned that your GPU was pulling 428 W ? Is that via HWInfo read-outs and with additional MPT ? As discussed yesterday, the HWInfo readout for AMD GPUs of this genre is understating things a bit.

...per other posts, I do however also think that the pads might not allow for solid die contact. As recommended yesterday, if you can't use putty instead of pads, make sure to measure the thickness of the OEM pads before replacing them - and using softer ones will probably help in your case.


----------



## Bobb3rdown

What kind of pump you using? Those are temps I'm getting with an air cooler. Maybe up the fan and pump speed? What your ambient temps in the room?


----------



## Bobbydo

Bobb3rdown said:


> What kind of pump you using? Those are temps I'm getting with an air cooler. Maybe up the fan and pump speed? What your ambient temps in the room?


In the room? Less than 20c for sure. It's Russia and winter. I'm using a Barrow pump; the speed is around 5700rpm. Fans are around 1500rpm max.

Just gaming on BF2042: 83c, and 105c on hotspot.


J7SC said:


> ...kind of hard to tell from the photo angle, but your GPU 'inlet' is the one on the left from this perspective (it is on most blocks)? Also, you might want to add more fluids to your reservoir as its level currently sits below the GPU inlet / outlet plane. That won't fix all your temp issues, but will be helpful w/ bubble control later.
> 
> ...next, you mentioned that your GPU was pulling 428 W ? Is that via HWInfo read-outs and with additional MPT ? As discussed yesterday, the HWInfo readout for AMD GPUs of this genre is understating things a bit.
> 
> ...per other posts, I do however also think that the pads might not allow for solid die contact. As recommended yesterday, if you can't use putty instead of pads, make sure to measure the thickness of the OEM pads before replacing them - and using softer ones will probably help in your case.


No, just stock, no MPT. I repasted it like 3 times. It looks like it's making good contact, because thermal paste keeps squeezing out to the sides. Maybe 1500rpm fans aren't enough?


----------



## J7SC

Bobbydo said:


> In the room. Less than 20c for sure. It's Russia and winter. I'm using Barrow Pump. The speed is around 5700rpm. Fans are around 1500rpm max..
> 
> Just when gaming on bf2024. 83c and 105c in hotspot.
> 
> 
> No, just in stock no mpt. I repasted it like 3 times. Looks like it's making good contact. Because themal paste keeps going to the sides. Maybe 1500rpm fans not enough?


I'm not familiar with the Barrow pump, as I usually just use (dual) D5s. But in any case, w/o data it is kind of hard to track things down. In stock format / without MPT, can you please post two runs w/ HWInfo screenies for both runs described below, using e.g. Superposition 4K/8K... scores and clocks are not important at this stage.

The first run should be your regular 'oc' on GPU and VRAM but WITHOUT the PL slider at +15% (so at '0' % extra).
The second run should be as above, but WITH the PL slider at +15% (so at 115%). Also, via GPU-Z, can you confirm that it is an XTXH card (that has an impact on the max hotspot temp in the vbios)?

...looking to see all the temps, max W & A


----------



## The EX1

schidddy said:


> thanks EX1, 2740mhz is ment to be he driver setting, Card reaches as high as 2660mhz clk while gaming and takes 330 to 350w according to Wattman. Its a normal Liquid Devil 6900XT
> So you so think its no use the repaste and use liquid metal to get the high junction temperaturs down? its about 84 while edge temp is max 52 at that setting.


Your clock speed is perfectly in line with a healthy XTX chip so I wouldn't worry. Your hotspot/junction temp does have a 30C delta if your edge is indeed 52C. That means your block mount could be better. However, 84C junction is perfectly acceptable. You won't see a performance gain by opening the card up and dropping your junction into the 70s. I would run the card as is and not have to worry about fighting with Powercolor's warranty if you open it up. Plus, EK's pads can be weird if you try to reuse them. You'll likely end up having to replace thermal pads too if you decide to repaste.
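For anyone trying to judge their own mount from numbers like these, the edge-to-hotspot delta is the quick tell. Here's a minimal Python sketch of that rule of thumb; the thresholds are informal forum wisdom, not AMD specifications:

```python
# Rough mount-quality check from edge and hotspot (junction) temps.
# Thresholds are informal rules of thumb for water-cooled RDNA2 cards,
# not official AMD limits.

def mount_quality(edge_c: float, hotspot_c: float) -> str:
    """Classify block contact by the edge-to-hotspot delta."""
    delta = hotspot_c - edge_c
    if delta <= 15:
        return "excellent contact"
    if delta <= 25:
        return "acceptable"
    if delta <= 35:
        return "mount could be better"
    return "bad contact - repaste and re-mount"

# The 52c edge / 84c hotspot case discussed above (32C delta):
print(mount_quality(52, 84))
```

With the 52/84 readings above this lands in "mount could be better", which matches the advice: livable, but not worth voiding the warranty over.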



Bobbydo said:


> In the room. Less than 20c for sure. It's Russia and winter. I'm using Barrow Pump. The speed is around 5700rpm. Fans are around 1500rpm max..
> 
> Just when gaming on bf2024. 83c and 105c in hotspot.
> 
> 
> No, just in stock no mpt. I repasted it like 3 times. Looks like it's making good contact. Because themal paste keeps going to the sides. Maybe 1500rpm fans not enough?


Can't answer that question unless we know what your fluid temps are.


----------



## schidddy

Bobbydo said:


> In the room. Less than 20c for sure. It's Russia and winter. I'm using Barrow Pump. The speed is around 5700rpm. Fans are around 1500rpm max..
> 
> Just when gaming on bf2024. 83c and 105c in hotspot.
> 
> 
> No, just in stock no mpt. I repasted it like 3 times. Looks like it's making good contact. Because themal paste keeps going to the sides. Maybe 1500rpm fans not enough?


There is definitely something wrong with the contact to the block. I'm running a 420 rad and a 480 rad with a 5900X and a Liquid Devil, and no fan has to go above 900RPM to cool the GPU to 52°.
My room temp is 23°.
My water temps are 31° at the highest while gaming.
Your CPU temps are alright, so the problem must be with the block mounting. Try to get some pads with the same thickness as the originals.
What's your water temp, mate?

Edit:
Thank you very much EX1 for your opinion.


----------



## schidddy

doublepost, sorry


----------



## Bobbydo

J7SC said:


> I'm not familiar with the Barrow pump as I usually just use (dual) D5s. But in any case, w/o data it is kind of hard to track things down. In stock format / without MPT, can you please post two runs w/ HWInfo screenies for both runs described below, using ie. Superposition 4K/8K...scores and clocks are not important at this stage.
> 
> The first run should be your regular 'oc' on GPU and VRAM but WITHOUT the PL slider at +15% (so at '0' % extra)
> The second run should be as above, but WITH PL slider at 15% (so at 115%). Also, via GPUz, can you confirm that it is a XTXH card (has an impact on max Hotspot temp in vbios).
> 
> ...looking to see all the temps, max W & A


I can't, I'm afraid of burning my GPU. The temps go up to 115c. It's a really expensive card.


----------



## Bobbydo

schidddy said:


> their is defintiv something wrong with the contact to the bloock. Im running a 420 Rad and 480 Rad with 5900x and a Liquid devil and no fan has to go above 900RPM to cool the GPU to 52°
> My Room temp is 23°
> my watertemps are highest 31° while gaming
> Your CPU temps are alright so the problem must be with the block mounting. Try to get some pads with the same thickness as the originals
> whats your watertemp mate?
> 
> Edit:
> thank your very much EX1 for your opionen


I used 1mm pads. I will buy 0.5mm tomorrow and try.
I honestly don't know the water temp, but it's for sure under 40c. 100%.


----------



## Bobbydo

The EX1 said:


> Your clock speed is perfectly in line with a healthy XTX chip so I wouldn't worry. Your hotspot/junction temp does have a 30C delta if your edge is indeed 52C. That means your block mount could be better. However, 84C junction is perfectly acceptable. You won't see a performance gain by opening the card up and dropping your junction into the 70s. I would run the card as is and not have to worry about fighting with Powercolor's warranty if you open it up. Plus, EK's pads can be weird if you try to reuse them. You'll likely end up having to replace thermal pads too if you decide to repaste.
> 
> 
> 
> Can't answer that question unless we know what your fluid temps are.


I honestly don't know the water temp, but it's for sure under 40c. 100%.


----------



## schidddy

Bobbydo said:


> I honestly don't know water temp but it's for sure under 40c. 100%


You can also put a simple thermometer in your water tube.


----------



## ZealotKi11er

Does the temp spike right away, or does it take time to get there?


----------



## CS9K

Bobbydo said:


> I used 1mm. I will buy 0.5 tomorrow and try.
> I honestly don't know water temp but it's for sure under 40c. 100%


Oof, yeah, probably way too thick.

_If_ the EK block on the Liquid Devil has clearances similar to what my aftermarket EK block has on my reference RX 6900 XT, then even 0.5mm pads from Thermal Grizzly are too thick.

That said, 0.5mm pads will probably work, but I would cut the pad down to size for each memory module and each inductor, so that they will compress a little.


----------



## Bobbydo

schidddy said:


> you can also put a simple thermometer in your watertube


That's smart 🤓 ok


ZealotKi11er said:


> Does the temp spike there right away or takes time to get there?


Less than 10 seconds. It's like: it goes to 60 first, one second later 81, then it goes down to 70 and up to 85c. And then just up.


----------



## Bobbydo

CS9K said:


> Oof, yeah, probably way too thick.
> 
> _If_ the EK block on the Liquid Devil, has similar clearances that my aftermarket EK block has to my reference RX 6900 XT, then even 0.5mm pads from Thermal Grizzly are too thick.
> 
> That said, 0.5mm pads will probably work, but I would cut the pad down to size for each memory module and each inductor, so that they will compress a little.


Then 0.2mm? Or 0.5 from Arctic?


----------



## CS9K

Bobbydo said:


> Then 0.2mm?


Buy 0.5mm and see what happens. If it comes to it and you have to compress them a little before mounting them, then 0.5mm will be close enough without having to stack 0.2mm pads.


----------



## Bobbydo

CS9K said:


> Buy 0.5mm and see what happens. If it comes to it and you have to compress them a little before mounting them, then 0.5mm will be close enough without having to stack 0.2mm pads.


Thank you. I will see tomorrow.


----------



## ZealotKi11er

Bobbydo said:


> That's smart 🤓 ok
> 
> 
> Less than 10 seconds. It's like. It goes first to 60, 1 second- 81, then it goes down to 70 and hoes up to 85c. And then just up.


That is definitely a mounting issue.


----------



## J7SC

This whole discussion reminds me of that special paper you insert between a GPU die (or for that matter a CPU IHS) and the block, with the VRAM and other pads mounted, to check the contact patch / pattern. What was that paper called again?


----------



## schidddy

As far as I know it's just called contact paper. You mean that one?



IC Contact Test and Analysis Kit – Innovation Cooling


----------



## J7SC

schidddy said:


> as far as i know it just called contact paper, you ment that one?
> 
> 
> 
> IC Contact Test and Analysis Kit – Innovation Cooling


Yup, that's the one  (I kept googling 'impact paper'...)


----------



## Bobbydo

ZealotKi11er said:


> That is definitely a mounting issue.


Does anyone know the name of the RGB connector on the 6900 XT Liquid Devil Ultimate? I found mine burned somehow. Please help; I need to replace it.


----------



## schidddy

Does it have pinholes on both sides?


Bobbydo said:


> Does anyone know what's the name of the RGB connector in 6900 xt liquid devil ultimate? I found mine burned somehow. Please help. I need to replace it.
> 
> View attachment 2534097
> 
> 
> View attachment 2534096


----------



## Bobbydo

No, only one side, approximately 1mm.


----------



## The EX1

Bobbydo said:


> Thank you. I will see tomorrow.


The pads from EK are 1.00mm on all Liquid Devil and Vector RD blocks. Your block mount has to be the issue. Thinner pads could mean that other components like the memory and VRM won't make good contact anymore. When mounting your block, make sure the card is flat and not resting on the I/O shield. Tighten the core screws first in a star pattern and then proceed with the rest of the screws. You need good mounting pressure and a thick thermal paste like GC Extreme.


----------



## Bobbydo

The EX1 said:


> The pads from EK are 1.00mm on all Liquid Devil and Vector RD blocks. Your block mount has to be the issue.Thinner pads could mean that other components like the memory and VRM won’t make good contact anymore. When mounting your block, make sure the card is flat and is not resting on the I/O shield. Tighten the core screws first in a star pattern and then proceed with the rest of the screws. You need good mounting pressure and a thick thermal paste like GC Extreme.


I saw the pads. They're way thinner than 1mm. But I don't know.


----------



## Bobbydo

Thanks guys. I contacted EK's website support and got this:


----------



## Bobbydo

0.5mm pads and thermal paste. What do you think guys? Is it good?


----------



## ZealotKi11er

Much better.


----------



## Bobbydo

ZealotKi11er said:


> Much better.


Nope, I honestly need help. I don't know what's wrong. I've got 2200rpm EK fans and an EK radiator. And I don't understand. Everything is at stock, even with some undervolting.


----------



## Bobbydo

This is idle now. Nothing running. Fans at 100%, pump at 100%. Pump goes to 5700rpm, fans to 2200rpm.

Note: could it be the liquid? Because I reused it after repasting. I didn't use new liquid.


----------



## ZealotKi11er

Did this card come with stock cooler?


----------



## Bobbydo

ZealotKi11er said:


> Did this card come with stock cooler?


It's a Liquid Devil. There was no stock cooler.


----------



## Bobbydo

This time the temperature goes up steadily. In 3 minutes it went to 78c, and the hotspot to 97c.


----------



## lestatdk

Bobbydo said:


> This is idle now. Nothing running. Fans at 100% pump 100%. Pump goes 5700rpm. Fans 2200rpm.
> 
> Note: could it be the liquid? Because I reuse it after repasting. I don't use new liquid.
> 
> View attachment 2534161


With my pump at 100% and fans currently at 44%, my idle temp is 3 degrees above room temperature. Something is definitely wrong with your temps.

You can re-use the liquid, but I personally prefer to replace it whenever I take apart the loop, no matter the reason.

What pump is it? 5700rpm is a lot. Mine maxes out at 4600 or so. Is the flow OK in the system (might be hard to see with clear liquid)? If the flow is OK, the next step would be to check the cooling block fitment.
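A quick way to sanity-check readings like these is water-over-ambient: a healthy loop idles only a few degrees above room temperature. A tiny Python sketch of that check, with thresholds that are rough rules of thumb rather than hard limits:

```python
# Flag a loop by how far the coolant sits above ambient.
# Thresholds are rough rules of thumb, not hard limits.

def loop_health(ambient_c: float, water_c: float) -> str:
    """Classify a loop by coolant temperature over ambient."""
    over_ambient = water_c - ambient_c
    if over_ambient <= 5:
        return "healthy"
    if over_ambient <= 10:
        return "warm - heavy load or marginal airflow"
    return "check flow and radiator capacity"

# Examples from this thread: ~3C over ambient at idle,
# vs 34.1c water in a room under 20c.
print(loop_health(20, 23))
print(loop_health(20, 34.1))
```

By this yardstick, 34.1c water in a sub-20c room is well past anything a mount fix alone would explain.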


----------



## Bobbydo

The water is moving. When I shake it a little, I see bubbles moving to the reservoir. I'm so tired of this. Honestly I feel like I should just go to an AIO.

PC is off and temperatures are 43c 🤦🏾‍♂️😐

PS: water temperature is 34.1c coming out of the GPU.


----------



## CS9K

Bobbydo said:


> Nope, I honestly need help. I don't know what's wrong. I got 2200rpm ekweb fans ekweb radiator. And I don't understand. Everything is in stock. even some undervolting.
> View attachment 2534160


Your temperatures are _better_, but still are not ideal.

What paste are you using, and _how_ are you pasting the GPU?


----------



## Bobbydo

CS9K said:


> Your temperatures are _better_, but still are not ideal.
> 
> What paste are you using, and _how_ are you pasting the GPU?


Arctic MX-5. Both CPU and GPU temps are high this time. GPU hotspot goes up to 97c. Core temp is 80c. CPU doesn't go above 80c.

I paste the GPU by spreading the thermal paste evenly. I've built PCs before, so thermal pasting isn't new to me. It's just that this is my first custom loop built by me.


----------



## CS9K

Bobbydo said:


> Arctic mx-5. Both CPU and GPU temps are high this time. GPU hotspot upto 97c. Core temp is80c. CPU doesn't go above 80c


Okay. I would leave your computer as it is for now. If you can, try to get hold of some Gelid GC-Extreme thermal paste.

Once you have the paste, let me know, and by then I'll have the video published for how to do a paste-job with GC-Extreme.

It'll look something like the boyfriend's 3080 FE here in the end:









Terrible picture, I know; the video isn't nearly as potato. "Flat" is all anyone should take from this picture.


----------



## Bobbydo

CS9K said:


> Okay. I would leave your computer as it is for now. If you can, try to get hold of some Gelid GC-Extreme thermal paste.
> 
> Once you have the paste, let me know, and by then I'll have the video published for how to do a paste-job with GC-Extreme.
> 
> It'll look something like the boyfriend's 3080 FE here in the end:
> View attachment 2534169
> 
> 
> Terrible picture, I know, the video isn't near as potato. "Flat" is all anyone should take from this picture.


I just ordered it. I will get it in 2 hours or so. Delivery here is pretty fast. I was born in Brooklyn and lived there until the past few years, where the fastest delivery is next day. Russia is pretty amazing, despite the cold.


----------



## J7SC

Bobbydo said:


> Arctic mx-5. Both CPU and GPU temps are high this time. GPU hotspot upto 97c. Core temp is80c. CPU doesn't go above 80c.
> 
> I psate GPU by spreading the thermal paste evenly. I built PCs. Before when it comes to thermal pasting. Only this is my first custom loop made by me.


I use Arctic MX5 a lot, though for GPU dies I prefer Gelid GC Extreme. It certainly also pays to step back a bit and take a look at your whole loop, without getting frustrated. Clearly, getting a good mount with correct-thickness pads is a must - _but I also wonder if you haven't got some sort of blockage elsewhere_ in your loop, perhaps even a giant trapped bubble.

If you can (depending on whatever may be on the back of your system), I would disconnect the PC, lay it on its back, do the re-pasting and re-padding you planned, and then start up the 'final' setup while the computer and GPU are still lying on their backs (maybe tilted up just a few degrees from horizontal). Just watch that the reservoir has enough liquid covering the outflow fitting to the pump and that the reservoir doesn't leak... then let it run for a bit.


----------



## cfranko

@Bobbydo I had a similar issue to yours, though mine was not as bad: I would get a 65c current temp and an 80c junction temp at 300 watts, which is pretty high for watercooling if you ask me. I tried Cooler Master MasterGel Maker paste and MX-4; both gave terrible temperatures. Then I used liquid metal and my temps have been great for the last 2 months. Your issue seems like a contact issue rather than a thermal paste issue, but if you have the courage to try liquid metal you can. It's a bit risky.


----------



## CS9K

Bobbydo said:


> I just ordered it. I will get it in 2 hours or so. Delivery here is pretty fast. I was born in Brooklyn and lived there till the past few years.. Fastest delivery is next day. Russia is pretty amazing. Despite the cold.


Excellent. Before you re-paste, if you can see between the GPU and the water block, take a _really_ good look between the block and the PCB. Are there any areas where the card "bulges" at all (at the inductors, for example) due to the thermal pads being too thick? Are the pads on the memory too thick? Check every angle you can to see if you can find a reason why the core may not be making good contact with the block.

Here's a SUPER quick and dirty video of how I do paste jobs with Gelid GC-Extreme. Slow and steady wins the race. Use the pressure of the spreader to let the paste flow. It will push excess paste to the side _and_ create a little "roll" of paste on the spreader, which you can then put elsewhere to spread.

You want enough paste so that you can't see the core/die through it, but not much more than that, and the layer NEEDS to be as even as possible. You will get a feel of how the paste spreads once you begin; seeing it can only explain so much.

But! Do the paste last! Once you finish spreading the paste, re-attach the PCB to the block and tighten down the core support.

*edit* You beat me to it, it looks like, but here's a quick&dirty video anyway:


----------



## Bobbydo

Temps seem to have improved a bit. I'm not sure about liquid metal. I saw videos; it's very risky. What are your temps now?


----------



## J7SC

CS9K said:


> Excellent. If you can see between the GPU and the Water Block, before you re-paste, take a _really_ good look between the block and the PCB. Are there any areas where the card "bulges" at all (at the Inductors, for example) due to the thermal pads being too thick. Are the pads on the memory too thick? Check every angle you can to see if you can find a reason why the block and the core may not be making good contact with the block.
> 
> Here's a SUPER quick and dirty video of how I do paste jobs with Gelid GC-Extreme. Slow and steady wins the race. Use the pressure of the spreader to let the paste flow. It will push excess paste to the side _and_ create a little "roll" of paste on the spreader, which you can then put elsewhere to spread.
> 
> You want enough paste so that you can't see the core/die through it, but not much more than that, and the layer NEEDS to be as even as possible. You will get a feel of how the paste spreads once you begin; seeing it can only explain so much.
> 
> But! Do the paste last! Once you finish spreading the paste, re-attach the PCB to the block and tighten down the core support.
> 
> (Video soon, I am an idiot sandwich)


...not entirely sure what an 'idiot sandwich' is (as I look in the mirror)... Anyhow, here's a quick comparison between stock PL on air-cooling, MPT PL on air (fans maxed), and then watercooled results with MPT PL (with caveats on software monitoring vs true peak consumption). GPU clocks weren't fully maxed in all runs as the loop is still bleeding air a bit, but close enough. The key with w-cooling is that you can run a much higher PL and sustain high effective clocks... Also note the posted differences in ambient temp. I'm trying to arrive at a table of sorts that shows the marginal cost in temps for a marginal increase in the PL W envelope... I need much more data of course, but this is a start.
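As a first entry for that kind of table, the physics side is easy to sketch: the single-pass coolant temperature rise across the block follows directly from water's heat capacity. A minimal Python sketch; the 1.0 L/min flow rate is an assumed example figure, not a measurement from this thread:

```python
# Single-pass coolant temperature rise across a block: dT = P / (m_dot * c).
# Water's specific heat is ~4.186 J/(g*K); 1 L/min of water is ~16.7 g/s.
# The 1.0 L/min flow rate below is an assumed example, not a measured value.

WATER_C_J_PER_G_K = 4.186

def coolant_delta_t(power_w: float, flow_lpm: float) -> float:
    """Coolant temperature rise (C) across a block dumping power_w watts."""
    grams_per_second = flow_lpm / 60.0 * 1000.0
    return power_w / (grams_per_second * WATER_C_J_PER_G_K)

for watts in (300, 360, 420):
    print(f"{watts} W @ 1.0 L/min -> +{coolant_delta_t(watts, 1.0):.1f} C per pass")
```

The takeaway: even 420W only adds about 6C per pass at a modest 1 L/min, so coolant sitting 14C over ambient points at radiator dissipation or loop flow, not at the block's heat input.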


----------



## cfranko

Bobbydo said:


> View attachment 2534201
> 
> View attachment 2534204
> 
> View attachment 2534203
> 
> View attachment 2534202
> 
> View attachment 2534200
> 
> 
> Temps seem to improve a bit. I'm not sure about liquid metal. I saw videos it very risky. What is your temps now?


After liquid metal my current gpu temperature rarely exceeds 50c and hotspot is around 63. Water temperature is about 40c


----------



## CS9K

Bobbydo said:


> View attachment 2534201
> 
> View attachment 2534204
> 
> View attachment 2534203
> 
> View attachment 2534202
> 
> View attachment 2534200
> 
> 
> Temps seem to improve a bit. I'm not sure about liquid metal. I saw videos it very risky. What is your temps now?


Check out my video above. I tried to catch you before you re-pasted. You can't just apply GC-Extreme like you would thermal paste onto a CPU. GC-Extreme is very picky with how it is applied due to how viscous and dense it is, and I _highly_ suggest applying it as I did in the video. 

As for temperatures, this is what my reference GPU looks like during Time Spy (GT1 just finished, GT2 is playing on-screen); pump is full speed, 2700MHz clock speed setting, 420W limit. This image is about 8 months after I last applied GC-Extreme, so my temperatures are a few degrees warmer than 8 months ago.


----------



## Bobbydo

CS9K said:


> Check out my video above. I tried to catch you before you re-pasted. You can't just apply GC-Extreme like you would thermal paste onto a CPU. GC-Extreme is very picky with how it is applied due to how viscous and dense it is, and I _highly_ suggest applying it as I did in the video.
> 
> As for temperatures, this is what my reference GPU looks like during Time Spy (GT1 just finished, GT2 is playing on-screen); pump is full speed, 2700MHz clock speed setting, 420W limit. This image is about 8 months after I last applied GC-Extreme, so my temperatures are a few degrees warmer than 8 months ago.
> View attachment 2534208


Mine is about the same now. With fans at full speed.


----------



## CS9K

Bobbydo said:


> Mine is about the same now. With fans at full speed.


How many watts is your card pulling at those temperatures? My card was bouncing between 410-420W when my picture was taken.


----------



## Bobbydo

cfranko said:


> After liquid metal my current gpu temperature rarely exceeds 50c and hotspot is around 63. Water temperature is about 40c


Do you think I should apply it? Do you have any video showing how? And it shouldn't touch anything besides the die, right?



CS9K said:


> How many watts is your card pulling at those temperatures? My card was bouncing between 410-420W when my picture was taken.


Around 360w.


----------



## Bobbydo

Bobbydo said:


> Do you think I should apply it? Do you have any video showing how. And it shouldn't touch anything despite Die, right?
> 
> 
> 
> Around 360w.


----------



## CS9K

Bobbydo said:


> Do you think I should apply it? Do you have any video showing how. And it shouldn't touch anything despite Die, right?
> 
> 
> 
> Around 360w.


I would say your card and its paste job is sufficient, then. The rest is now up to your loop. Remember that big bubble that would not work its way out? That could be a sign of flow issues.

Let's assume your pump is brand new. Is it capable of supplying enough head pressure to run your loop by itself? I recall you saying that it wasn't D5 nor DDC style, so I would start there to see if your pump is up to spec to begin with.


----------



## cfranko

Bobbydo said:


> Do you think I should apply it? Do you have any video showing how. And it shouldn't touch anything despite Die, right?
> 
> 
> 
> Around 360w.


----------



## Bobbydo

Bobbydo said:


> View attachment 2534211


I had a 6900 XT Asus Strix LC Top and a Sapphire Extreme before. They crashed at 23.6k each, at 55c. I thought this card was better.


----------



## jonRock1992

cfranko said:


> After liquid metal my current gpu temperature rarely exceeds 50c and hotspot is around 63. Water temperature is about 40c


Same. These temps seem to be par for a good liquid metal application with the 6900XT.


----------



## Bobbydo

jonRock1992 said:


> Same. These temps seem to be par for a good liquid metal application with the 6900XT.


Tomorrow I will use liquid metal. I hope I don't screw it up!


----------



## cfranko

Bobbydo said:


> Tomorrow I will use liquid metal. I hope I don't screw it up!


Make sure to insulate around the die with nail polish or liquid electrical tape


----------



## Bobbydo

Bobbydo said:


> Tomorrow I will use liquid metal. I hope I don't screw it up!


Any nail polish? And which is better, nail polish or liquid electrical tape?


cfranko said:


> Make sure to isolate the die with nail polish or liquid electric tape


----------



## cfranko

Bobbydo said:


> Any nail polish? And wish is better nail polish or liquid electrical tape?


I only used nail polish but I think liquid electrical tape is safer


----------



## jonRock1992

Bobbydo said:


> Any nail polish? And wish is better nail polish or liquid electrical tape?


I always use liquid electrical tape. Never had an issue. It's personal preference, but I personally think liquid electrical tape is the better option. It dries faster and it's way easier to remove.


----------



## J7SC

jonRock1992 said:


> I always use liquid electrical tape. Never had an issue. It's personal preference, but I personally think liquid electrical tape is the better option. It dries faster and it's way easier to remove.


^^ This


----------



## tolis626

Hey guys, quick question. Any of you peeps playing Warzone at 1440p with these cards? I'm asking because I think I may have some sort of issue. See, I mostly get around 180-220 fps, but I sometimes get dips to below 144 fps causing stuttering (bouncing between over and under the Freesync range of my monitor). What I've noticed is that in Warzone, my GPU usage is usually rather low and sporadic (60% one moment, 90% the next, then 50, then 80 etc), and so are my clocks, bouncing between full boost (about 2560MHz) and like 2100MHz, resulting in some jarring interruptions to the buttery smoothness I expect. Weirdly, Modern Warfare multiplayer shows no such issues. FPS is constantly over 200, clocks are practically locked to 2560MHz and GPU usage is over 90% at all times.

Any ideas?


----------



## CS9K

Bobbydo said:


> Any nail polish? And wish is better nail polish or liquid electrical tape?


I will play devil's advocate and caution you on using liquid metal on your GPU. In my opinion, if your loop was functioning properly, you would not need it to run a modest tune at a 400W card power limit. You haven't told us much about your pump's health, nor your loop flow, so I don't know what other options to suggest, but I can't help but feel like you would be better off slowing down, taking a day to go over every part in your loop to make sure nothing is impeding flow (including a drain, flow test, and refill), and taking another look at your GPU block to make absolutely certain that thermal paste covers the entire die and has squeezed out around every side of it (as it should, to a small degree, when mounting pressure is adequate).

Liquid metal is a big step, a big mess, and a pain to clean up if you ever switch back to paste. Likewise, any warranty you might have had would be gone for good after LM.

My reference RX 6900 XT with an EK block could do north of 450W and still be good on thermals, with nothing but the potato EK pads and Gelid GC-Extreme in a 2-radiator loop. Now, I wouldn't run a reference card like my 6900 XT that hard (400W is my daily-driver limit), but my point stands: I feel like _something_ in your loop isn't operating to its full potential, and that taking the step to go liquid metal isn't necessary.

"But that's just like, my opinion, man..."


----------



## J7SC

CS9K said:


> I will play devil's advocate and caution you on using liquid metal on your GPU. In my opinion (...) Now, I wouldn't run a reference card like my 6900 XT that hard (400W is my daily-driver limit), but my point stands: I feel like _something_ in your loop isn't operating to its full potential, and that taking the step to go liquid metal isn't necessary.
> 
> "But that's just like, my opinion, man..."


..and my opinion (at least on cooling pumps):



Spoiler


----------



## Oversemper

Bobbydo said:


> Any nail polish? And wish is better nail polish or liquid electrical tape?


I did two successful LM repastings (on a VII and on a 6900XT). For insulation, both times I used a special silicone compound designed for insulating electronic radio equipment. It consists of two components which you mix right before application. It is absolutely transparent. And the most important thing is that it withstands 150+ degrees Celsius, which guarantees that it will not deteriorate over time. Nail polish may dry and crack; electrical tape may also dry and peel off. My VII is in its third year under LM without any problems, though I don't game on it.


Spoiler: Photos


----------



## Bobbydo

CS9K said:


> I will play devil's advocate and caution you on using liquid metal on your GPU. In my opinion, if your loop was functioning properly, you would not need it to run a modest tune at 400W card power limit. You haven't told us much about your pump's health, nor your loop flow, so I don't know what other options to suggest, but I can't help but feel like you would be better off slowing down, take a day to go over every part in your loop to make sure nothing is impeding flow (including a drain, flowtest, and refill), and another look at your GPU block to make absolutely certain that thermal paste is covering the entire die, and that thermal paste has been squeezed out around every side of the die (as it should to a small degree when mounting pressure is adequate).
> 
> Liquid metal is a big step, a big mess, and a pain to clean up if you ever switch back to paste. Likewise, any warranty you might have had would be gone for good after LM.
> 
> My reference RX 6900 XT with an EK block could do north of 450W and still be good on thermals, with nothing but the potato EK pads and Gelid GC-Extreme in a 2-radiator loop. Now, I wouldn't run a reference card like my 6900 XT that hard (400W is my daily-driver limit), but my point stands: I feel like _something_ in your loop isn't operating to its full potential, and that taking the step to go liquid metal isn't necessary.
> 
> "But that's just like, my opinion, man..."


I respect that. And it really does make a lot of sense


----------



## Bobbydo

Could this be the problem? The pump not being under the reservoir?

The tank is empty because I'm just filling it up.


----------



## CS9K

Bobbydo said:


> Could this be the problem? The pump not under the reservoir?
> 
> The tanks is empy because I'm just filling it up.
> 
> View attachment 2534268
> 
> View attachment 2534267
> 
> View attachment 2534265
> 
> View attachment 2534266


That shouldn't be a problem so long as the reservoir (when in normal operation) is filled at a level higher than the pump itself. This is a good time to flow-test the pump though! At full speed, it should quite quickly (almost... vigorously) empty the water out of the reservoir and send it into the loop. 

It may not go _this_ fast, but it should be quick. 

Link goes to timestamp in video. Video is kind of loud, unfortunately.


----------



## Bobbydo

CS9K said:


> That shouldn't be a problem so long as the reservoir (when in normal operation) is filled at a level higher than the pump itself. This is a good time to flow-test the pump though! At full speed, it should quite quickly (almost... vigorously) empty the water out of the reservoir and send it into the loop.
> 
> It may not go _this_ fast, but it should be quick.
> 
> Link goes to timestamp in video. Video is kind of loud, unfortunately.


Yes, I know. It's pretty fast. Then what could be the problem? I could replace the whole water block and buy a new one if that would help.


----------



## CS9K

Bobbydo said:


> I respect that. And it really does make a lot of sense


I can not tell you how much it means that you took my advice. Thank you so much <3 I only want to see you enjoy your GPU problem-free, and it does mean a lot that you were willing to try different things 💗

I do tech support for a living, and my account age should give you an idea of how long I've been playing with computers. PC gaming is my jam, and I hope we can get your issue figured out 💗



Bobbydo said:


> Yes I know. It's pretty fast. Then what could be the problem? I can change the whole waterblock and buy a new one. If that can help.


Does it continue to be fast once the loop is full of water, though? (Thinking back to the big bubble that wouldn't move.) A restriction may not slow air as much as it slows water.

Aside from a possible flow restriction somewhere, I don't have any other thoughts other than to try _one_ more thing:

Pull the thermal pads off of the GPU, and _carefully_ re-mount the PCB to the block (you can re-use paste that is on there for this step, as this test is temporary). Leave the power target limit and clock speed at stock, aim a fan at the GPU so air is (hopefully) blowing between the block and card, and start Heaven or Superposition, and see if your temperatures behave differently. You only need to run the benchmark for a few seconds or tens of seconds. Doing this will rule out a mounting issue.

If your core temperatures are suddenly better during that test, then we know it is the thermal pads causing core-to-block mounting issues.


----------



## Bobbydo

Bobbydo said:


> Yes I know. It's pretty fast.





CS9K said:


> I can not tell you how much it means that you took my advice. Thank you so much <3 I only want to see you enjoy your GPU problem-free, and it does mean a lot that you were willing to try different things 💗
> 
> I do tech support for a living, and my account age should give you an idea of how long I've been playing with computers. PC gaming is my jam, and I hope we can get your issue figured out 💗
> 
> 
> Does it continue to be fast once the loop is full of water, though? (thinking back to the big bubble that wouldn't move) A restriction that slows water may not slow air as much as it would water.
> 
> Aside from a possible flow restriction somewhere, I don't have any other thoughts other than to try _one_ more thing:
> 
> Pull the thermal pads off of the GPU, and _carefully_ re-mount the PCB to the block (you can re-use paste that is on there for this step, as this test is temporary). Leave the power target limit and clock speed at stock, aim a fan at the GPU so air is (hopefully) blowing between the block and card, and start Heaven or Superposition, and see if your temperatures behave differently. You only need to run the benchmark for a few seconds or tens of seconds. Doing this will rule out a mounting issue.
> 
> If your core temperatures are suddenly better during that test, then we know it is the thermal pads causing core-to-block mounting issues.


Thanks really for all the follow up and help. 

But now I'm using 0.5mm. I was supposed to use 1mm.


----------



## CS9K

Bobbydo said:


> Thanks really for all the follow up and help.
> 
> But now I'm using 0.5mm. I was supposed to use 1mm.
> View attachment 2534281
> 
> View attachment 2534280


Looking at EK's website, the blocks (reference vs powercolor) look _about_ the same height off of the die/PCB. Look between the block and PCB to see if the 0.5mm pads are compressed by your memory at all.


----------



## SoloCamo

So I've got my uv oc dialed decently I think. I'd imagine this is pretty average right?

Min Freq set to 500 (stock) max set to 2479 (stock) however I set voltage to 1075 (which seems to report 1060 while benching/ingame) - Junction just hovers above 90* at 1100rpm, core in the 60's-70's.

So long story short: holding 2400mhz core / 2100mem w/ fast timings / 1075mV set / PL +15 and rarely going over 270w, while temps are ok and fan speed is only 1100rpm. Can't complain I suppose? Core is usually sitting at 2425mhz.
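As an aside, here's a back-of-envelope sketch of why a small undervolt saves that much power, assuming the usual dynamic-power approximation P ∝ f·V². Real GPUs also have static leakage and memory/SOC power that don't scale this way, so treat it as a rough bound; the figures are illustrative, not measured:

```python
# Rough dynamic-power scaling estimate, assuming P ~ f * V^2.
# All numbers are illustrative; leakage and non-core rails are ignored.
def scaled_power(p_stock, f_stock, v_stock, f_new, v_new):
    """Scale a stock power figure by the ratio of f * V^2 terms."""
    return p_stock * (f_new / f_stock) * (v_new / v_stock) ** 2

# ~300 W at 2425 MHz / 1.150 V, undervolted to 1.060 V at the same clock:
print(round(scaled_power(300, 2425, 1.150, 2425, 1.060)))  # -> 255
```

Ballpark only, since board sensors report total board power, which includes rails that don't follow core voltage.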


----------



## CS9K

SoloCamo said:


> So I've got my uv oc dialed decently I think. I'd imagine this is pretty average right?
> 
> Min Freq set to 500 (stock) max set to 2479 (stock) however I set voltage to 1075 (which seems to report 1060 while benching/ingame) - Junction just hovers above 90* at 1100rpm, core in the 60's-70's.
> 
> So long story short: holding 2400mhz core / 2100mem w/ fast timings / 1075mV set / PL +15 and rarely going over 270w, while temps are ok and fan speed is only 1100rpm. Can't complain I suppose? Core is usually sitting at 2425mhz.


If it can survive Time Spy and Port Royal "benchmark" runs for the entire run each at 1075mV, then you have a fantastic bin, and not a thing to complain about! 💗 

My reference RX 6900 XT isn't a particularly blessed sample... Lowest undervolt I could get stable at 2000MHz and 2400MHz was 1145mV, but core clock goes up to 2700MHz at the top end on water, so I guess I can't complain either.


----------



## Bobbydo

Are they?


----------



## CS9K

Bobbydo said:


> Are they?
> View attachment 2534286


That pad does look like it has deformed from being compressed, yes. A good thing.

_If_ the PC is in an operable state, try this:
Load up a benchmark, start it, let your temperatures come up to near their max (20-30 seconds), then with your fingers placed to spread the pressure around the entire GPU die, press against the front and back of the GPU to try and make the die contact the block better. If your hotspot temperature drops, then it is a mounting issue.


----------



## Thanh Nguyen

What kinda fps are you guys hitting in bf2042 at 1440p ultra settings?


----------



## SoloCamo

CS9K said:


> If it can survive Time Spy and Port Royal "benchmark" runs for the entire run each at 1075mV, then you have a fantastic bin, and not a thing to complain about! 💗
> 
> My reference RX 6900 XT isn't a particularly blessed sample... Lowest undervolt I could get stable at 2000MHz and 2400MHz was 1145mV, but core clock goes up to 2700MHz at the top end on water, so I guess I can't complain either.


Don't have Port Royal, but I will say Time Spy seems to hit way harder than every other benchmark I've tried. It seems happy at 1105mV there, yet in everything else the card was stable at 1075mV (Unigine 4K w/ 8x MSAA, Fire Strike Ultra, hours of gaming at 4K on hard-hitting titles, etc).

I got a few completed benchmark runs at 1100mV in Time Spy, but a few crashed back to back too; stepping up to 1105mV seemed solid after a few full benchmark loops.
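The stepping approach above — bump the voltage a notch whenever a loop crashes, and only trust a setting after several clean back-to-back loops — can be sketched like this (`run_benchmark` is a hypothetical callback, not a real API):

```python
# Sketch of the voltage-stepping approach: start at an aggressive
# undervolt and raise the voltage in 5 mV steps until several
# back-to-back benchmark loops all pass. `run_benchmark` stands in for
# a real Time Spy run: True on a clean pass, False on a crash.
def find_stable_mv(start_mv, run_benchmark, step_mv=5, loops=3):
    mv = start_mv
    while True:
        # Only trust a setting if every back-to-back loop passes.
        if all(run_benchmark(mv) for _ in range(loops)):
            return mv
        mv += step_mv  # any crash -> bump the voltage and retry

# Toy example: pretend the card crashes below 1105 mV.
print(find_stable_mv(1075, lambda mv: mv >= 1105))  # -> 1105
```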


----------



## SoloCamo

Thanh Nguyen said:


> What kinda fps are you guys hitting in bf2042 at 1440p ultra settings?


Likely going to be CPU bound at 1440p.


----------



## ZealotKi11er

Thanh Nguyen said:


> What kinda fps are you guys hitting in bf2042 at 1440p ultra settings?


120-130. There are dips, but those are partly the CPU and partly BF MP.


----------



## SoloCamo

ZealotKi11er said:


> 120-130. There are dips, but those are partly the CPU and partly BF MP.


Just tested this now; I can report about the same averages on a 128-player map. 10900 (non-K) w/ 32GB DDR4 C18 3600MHz. Stock clocks on the 6900XT for the test. Dropping down to 1080p doesn't yield me any more fps.


----------



## Bobbydo

CS9K said:


> That pad does look like it has deformed from being compressed, yes. A good thing.
> 
> _If_ the PC is in an operable state, try this:
> Load up a benchmark, start it, let your temperatures come up to near their max (20-30 seconds), then with your fingers placed to spread the pressure around the entire GPU die, press against the front and back of the GPU to try and make the die contact the block better. If your hotspot temperature drops, then it is a mounting issue.


I tried; nothing changed.


Thanh Nguyen said:


> What kinda fps are you guys hitting in bf2042 at 1440p ultra settings?


110-140fps on stock. Sometimes it goes down to 80 and then back to 115 or so.


----------



## CS9K

Bobbydo said:


> I tried; nothing changed.
> 
> 110-140fps on stock. Sometimes it goes down to 80 and then back to 115 or so.


Thank you for trying, at least. I am sorry that nothing we have tried so far, other than paste choice, has made any difference. I wouldn't blame you if you wanted to try liquid metal. I am not a fan, but that is just me. 

I unfortunately can not think of anything else to try.


----------



## Bobbydo

CS9K said:


> Thank you for trying, at least. I am sorry that nothing we have tried so far, other than paste choice, has made any difference. I wouldn't blame you if you wanted to try liquid metal. I am not a fan, but that is just me.
> 
> I unfortunately can not think of anything else to try.


I think I will change the tubes and use soft ones. And do you think it could be the pump? It's running at 5700rpm max. Should I maybe lower it?

My friend bought the same card. He's trying it now, and I will see if the temperatures are the same or not.


----------



## CS9K

Bobbydo said:


> I think I will change the tubes and use soft ones. And do you think it could be the pump? It's running at 5700rpm max. Should I maybe lower it?
> 
> My friend bought the same card. He's trying it now, and I will see if the temperatures are the same or not.


I would see what his results are before changing parts, if that is possible, just to get another data point. If you can prove to yourself that your pump _continues_ to pump water fast once the loop is full of water, then I think your pump is probably okay.


----------



## cfranko

My custom loop leaked onto the GPU PCB, and now as soon as my 6900 XT goes under load the whole PC restarts. I left it turned off so that it can dry for a few days. I am pretty sad, to be honest. After a few days, if it still doesn't work, I am gonna check the PCB, but I am no technician or electrician, so I probably have no luck. I would appreciate any suggestions you guys have; worst case scenario I'll buy a new 3090.


----------



## CS9K

cfranko said:


> My custom loop leaked onto the GPU PCB, and now as soon as my 6900 XT goes under load the whole PC restarts. I left it turned off so that it can dry for a few days. I am pretty sad, to be honest. After a few days, if it still doesn't work, I am gonna check the PCB, but I am no technician or electrician, so I probably have no luck. I would appreciate any suggestions you guys have; worst case scenario I'll buy a new 3090.


If, in the end, your GPU does not function, check out the youtube channel for Tech Cemetery, and check out their discord. You may be able to find a technician to send your GPU to, and if so, they could hopefully fix your issue.


----------



## cfranko

CS9K said:


> If, in the end, your GPU does not function, check out the youtube channel for Tech Cemetery, and check out their discord. You may be able to find a technician to send your GPU to, and if so, they could hopefully fix your issue.


I live in Turkey; it's gonna cost a lot to ship my card to the United States. If nothing works out I may try to contact the owner of this channel. This was all caused by my greed; my GPU worked totally fine at 100c hotspot with its air cooler, but I went watercooling anyway.


----------



## lestatdk

cfranko said:


> I live in Turkey; it's gonna cost a lot to ship my card to the United States. If nothing works out I may try to contact the owner of this channel. This was all caused by my greed; my GPU worked totally fine at 100c hotspot with its air cooler, but I went watercooling anyway.


Let it dry and remount the air cooler and see if it works. It might not be dead, there's still hope


----------



## cfranko

lestatdk said:


> Let it dry and remount the air cooler and see if it works. It might not be dead, there's still hope


Do you think I should remove the waterblock right now to check the PCB and dry it if it's wet, or should I let it sit for a few days so it can dry by itself?


----------



## SoloCamo

cfranko said:


> Do you think I should remove the waterblock right now to check the PCB and dry it if it's wet, or should I let it sit for a few days so it can dry by itself?


I'd remove all coolers and let it dry for a few days. The less time spent with moisture the better.


----------



## lestatdk

cfranko said:


> Do you think I should remove the waterblock right now to check the PCB and dry it if it's wet, or should I let it sit for a few days so it can dry by itself?


I'd remove it asap and let the PCB dry out. Then remount the air cooler.
Also, I'd just use regular paste, not LM, just to be on the safe side.


----------



## lestatdk

SoloCamo said:


> I'd remove all coolers and let it dry for a few days. The less time spent with moisture the better.


Exactly


----------



## cfranko

I don’t understand why there is an opening between the coldplate and the acrylic on the waterblock. The GPU is vertically mounted, so whenever the fitting leaks, all the leaked water runs onto the PCB because the waterblock has a huge gap. The person who designed it like this is pretty stupid, I guess.


----------



## J7SC

cfranko said:


> I don’t understand why there is an opening between the coldplate and the acrylic on the waterblock. The GPU is vertically mounted, so whenever the fitting leaks, all the leaked water runs onto the PCB because the waterblock has a huge gap. The person who designed it like this is pretty stupid, I guess.


...as others also posted, the key is not to start it up, but instead disconnect it. I recently had a leak on a vertically mounted GPU when I was klutzy and spilled liquids on it (duh); fortunately the system wasn't running at the time. I took it apart and doused it in 99.9% isopropyl alcohol, including letting that run into/under the die, all the phases, and the VRAM chips. The isopropyl displaces/evaporates any moisture that might be trapped under components from the spill. I then let it dry out on a baseboard heater for 12 hrs (warm, not hot), though a heat gun, used carefully, also works. The key is not to start a potentially compromised GPU up w/ power connections, even if most liquids these days are based on non-conductive additives and deionized water. Others already suggested skipping the conductive LM for now.



CS9K said:


> I would see what his results are before changing parts, if that is possible, just to get another data point. If you can prove to yourself that your pump _continues_ to pump water fast once the loop is full of water, then I think your pump is probably okay.


I tend to look at every GPU (and CPU) block before mounting it to the PCB and make sure all the screws on it are tight (never had one where that was already the case, just as I never had a new rad that didn't need cleaning out re. crud from manufacturing). Once everything is tight on the blocks, I mount them in a mock-up temp loop (w/o PCB) and check flow pressure and speed. Per the spoiler, some crude YT vids of a loop (w/ two blocks, a reservoir and a 480/64 testbench rad). As I had just filled it, lots of little air bubbles, which nevertheless are handy to show the flow. This was powered by an EKWB XTOP Revo dual D5 / single housing. FYI, the sound was from rad fans, not the pump.


Spoiler: loop flow


----------



## cfranko

J7SC said:


> ...as others also posted, the key is not to start it up, but instead disconnect it. I recently had a leak on a vertically mounted GPU when I was klutzy and spilled liquids on it (duh); fortunately the system wasn't running at the time. I took it apart, doused it in 99.9% isopropyl alcohol - including letting that run into /under the die, all the phases, the VRAM chips. The isopropyl displaces /evaporates any moisture that might be trapped under components from the spill. I then let it dry out on a base board heater for 12 hrs (warm, not hot) though a heat gun if used carefully can also be used. The key is not to start a potentially compromised GPU up w/ power connections, even if most liquids these days are based on non-conductive additives and deionized water. Others already suggested to skip the conductive LM for now.
> 
> 
> 
> I tend to look at every GPU (and CPU) block before mounting it to the PCB and make sure all the screws on it are tight (never had one where that was the case, just as I never had any new rad that still didn't need cleaning out re. crud from manufacturing). Once everything is tight on the blocks, I mount them in a mock-up temp loop (w/o PCB) and check flow pressure and speed. Per spoiler, some crude YT vids of a loop w/ two blocks, a reservoir and a 360/60 testbench rad). As I just filled it, lots of little air bubbles which nevertheless are handy to show the flow. This was powered by an EKWB XTOP Revo dual D5 / single housing. FYI, the sound was from rad fans, not the pump.
> 
> 
> Spoiler: loop flow


Will it dry by itself if I let it sit for a few days? I really don’t want to open it up again.


----------



## J7SC

cfranko said:


> Will it dry by itself if I let it sit for a few days? I really don’t want to open it up again.


...a bit of a risk, as you don't know if liquids got trapped underneath components, perhaps even washing a bit of conductive LM with them. BUT: generally 'yes'... letting it sit for a day or two in a warm and dry place (near a house radiator, for example) is usually good enough, though still riskier than the process I outlined above.


----------



## cfranko

J7SC said:


> ...a bit of a risk, as you don't know if liquids got trapped underneath components, perhaps even washing a bit of conductive LM with them. BUT: generally 'yes'... letting it sit for a day or two in a warm and dry place (near a house radiator, for example) is usually good enough, though still riskier than the process I outlined above.


Thank you. I don’t think the water washed out the LM, because the LM is squeezed very tightly between the die and the waterblock, so I don’t think water went all the way down there. I guess I’ll just wait and see. Can’t stop thinking about it.


----------



## CS9K

cfranko said:


> Thank you. I don’t think the water washed out the LM, because the LM is squeezed very tightly between the die and the waterblock, so I don’t think water went all the way down there. I guess I’ll just wait and see. Can’t stop thinking about it.


@J7SC covered the important points, but I will put an 'I second this' to "better safe than sorry" when dealing with _very_ expensive GPU's 💗


----------



## cfranko

CS9K said:


> @J7SC covered the important points, but I will put an 'I second this' to "better safe than sorry" when dealing with _very_ expensive GPU's 💗


The GPU works completely fine while idle, and there are no green lines or other artifacts on the screen. When I put it under load it works for 30 seconds, then the PC restarts. I believe this can be solved if I clean the PCB, because the GPU isn’t completely dead.


----------



## Bobbydo

Bobbydo said:


> I think I will change tubes and use soft one. And do you think it could be the pump? It's moving 5700rpm. Max. Should I lower maybe?





CS9K said:


> Thank you for trying, at least. I am sorry that nothing we have tried so far, other than paste choice, has made any difference. I wouldn't blame you if you wanted to try liquid metal. I am not a fan, but that is just me.
> 
> I unfortunately can not think of anything else to try.






This is my friend's new card, with 2 rads, 12 fans, and a pro setup and pump.

Temps reached 85c hotspot. I have only 6 fans in total. So maybe it's the waterblock?

His card also has loud coil whine.


----------



## CS9K

Bobbydo said:


> This is my friend's new card, with 2 rads, 12 fans, and a pro setup and pump.
> 
> Temps reached 85c hotspot. I have only 6 fans in total. So maybe it's the waterblock?
> 
> His card also has loud coil whine.


His temperatures are normal, and unfortunately, with the EK block and backplate, the coil whine is normal too; it is just as bad on reference RX 6900 XT's with the EK blocks :<


----------



## Bobbydo

CS9K said:


> His temperatures are normal, and unfortunately, with the EK block and backplate, the coil whine is normal too; it is just as bad on reference RX 6900 XT's with the EK blocks :<


Do you think it could be because I tightened the waterblock to the PCB so much? That's why temps are high? See, it's like all the thermal paste got pushed out to the side. Should I loosen it a little?


----------



## CS9K

Bobbydo said:


> Do you think it could be because I tightened the waterblock to the PCB so much? That's why temps are high? See, it's like all the thermal paste got pushed out to the side. Should I loosen it a little?
> View attachment 2534363


I assume the missing thermal pads are on the GPU still? Here is what my RX 6800 XT looked like when I pulled my EK block off of it, to swap the block onto my current RX 6900 XT. Most of the paste was on the block from when I separated it from my RX 6800 XT.











It looks like you have enough mounting pressure between the core and the block. So much so that the paste oozed out and there is no longer enough between the block and the GPU. It may be worth your time to CAREFULLY mount the GPU to the block to see how much room is between the block and memory/VRM's, to make sure my suggestion of 0.5mm pads was not wrong.

Goddamnit. EK sends 1.0mm pads with their reference RX 6900 XT block, and I am a goddamn idiot for not looking that up. I am SO sorry. Use 1.0mm pads, or 0.5mm pads and the Gelid GC-Extreme, and just don't tighten down the four core-mounting screws.

I'm going to stop talking now, because I messed up on the thermal pad size and I just... sigh :<


----------



## Bobbydo

CS9K said:


> I assume the missing thermal pads are on the GPU still? Here is what my RX 6800 XT looked like when I pulled my EK block off of it, to swap the block onto my current RX 6900 XT. Most of the paste was on the block from when I separated it from my RX 6800 XT.
> 
> View attachment 2534370
> 
> 
> 
> It looks like you have enough mounting pressure between the core and the block. So much so that the paste oozed out and there is no longer enough between the block and the GPU. It may be worth your time to CAREFULLY mount the GPU to the block to see how much room is between the block and memory/VRM's, to make sure my suggestion of 0.5mm pads was not wrong.
> 
> Goddamnit. EK sends 1.0mm pads with their reference RX 6900 XT block, and I am a goddamn idiot for not looking that up. I am SO sorry. Use 1.0mm pads, or 0.5mm pads and the Gelid GC-Extreme, and just don't tighten down the four core-mounting screws.
> 
> I'm going to stop talking now, because I messed up on the thermal pad size and I just... sigh :<


🤣🤣 It's cool man. I will reuse 1mm and see.


----------



## J7SC

CS9K said:


> (...)
> I'm going to stop talking now, because I messed up on the thermal pad size and I just... sigh :<


Another reason to 💗 thermal putty !


Spoiler: ...waiting for "other related duties"


----------



## SoloCamo

cfranko said:


> The GPU works completely fine while idle, and there are no green lines or other artifacts on the screen. When I put it under load it works for 30 seconds, then the PC restarts. I believe this can be solved if I clean the PCB, because the GPU isn’t completely dead.


Your gamble to take. I don't see the harm in playing it safe and letting it dry completely (ideally taking the cooler off and inspecting it). It may not be dead now, but keep running it like that and it very well could die.


----------



## cfranko

SoloCamo said:


> Your gamble to take. I don't see the harm in playing it safe and letting it dry completely (ideally taking the cooler off and inspecting it). It may not be dead now, but keep running it like that and it very well could die.


The thing is, the water leaked while the GPU was running, and it shut down while I was playing a game. I didn’t see the leaking water, so I thought it shut down because of a driver issue or something; then I turned it back on and it shut down again. After that I realized it was leaking. My second attempt at turning it on may very well have killed it permanently, but I am gonna take a look at the PCB today before trying to turn it on again.


----------



## Godhand007

So I just noticed that we can increase the voltage on reference RX 6000 cards. I have a reference RX 6900 XT and I am able to keep it cool enough. Has anyone tested some stable settings with the new voltages? I am thinking of putting it at 1.2V (1200mV) to get close to at least a 2700MHz actual clock.

Also, does a change to the voltage through MPT impact the boost behavior or cause extra power consumption (apart from the expected increase due to voltage)?


----------



## The EX1

cfranko said:


> My custom loop leaked onto the GPU PCB, and now as soon as my 6900 XT goes under load the whole PC restarts. I left it turned off so that it can dry for a few days. I am pretty sad, to be honest. After a few days, if it still doesn't work, I am gonna check the PCB, but I am no technician or electrician, so I probably have no luck. I would appreciate any suggestions you guys have; worst case scenario I'll buy a new 3090.


Coolants have additives that can make them not dry out. Not sure what coolant you are using, but you may need to take it apart and clean the affected areas with alcohol.

Did your PSU get wet?


----------



## lawson67

@LtMatt is this the card you had?


----------



## LtMatt

lawson67 said:


> @LtMatt is this the card you had?


Yes. I've since bought another one as I regretted selling it. 

New sample has a slightly better core clock, but slightly worse memory.


----------



## cfranko

The EX1 said:


> Coolants have additives that can make them not dry out. Not sure what coolant you are using, but you may need to take it apart and clean the affected areas with alcohol.
> 
> Did your PSU get wet?


Ok, so I took the GPU apart today and checked the PCB. I couldn’t see any water on the PCB; it probably dried by itself. So I cleaned the PCB with some alcohol just to be safe, even though I couldn’t see any water. I reapplied the liquid metal and powered it up. Now the GPU has decided to work normally, and my temperatures are even better than before, probably because I added more liquid metal and tightened the spring screws as hard as I could. I get 40c edge and 50c junction at 300 watts. Anyway, overall it was a very stressful and tiring process, but I am glad that the GPU works.


----------



## Godhand007

Godhand007 said:


> So I just noticed that we can increase the voltage on reference RX 6000 cards. I have a reference RX 6900 XT and I am able to keep it cool enough. Has anyone tested some stable settings with the new voltages? I am thinking of putting it at 1.2V (1200mV) to get close to at least a 2700MHz actual clock.
> 
> Also, does a change to the voltage through MPT impact the boost behavior or cause extra power consumption (apart from the expected increase due to voltage)?


Anyone?


----------



## Bobbydo

J7SC said:


> Another reason to 💗 thermal putty !
> 
> 
> Spoiler: ...waiting for "other related duties"
> 
> 
> 
> 
> View attachment 2534376


I hate this damn card. 😐🤦🏾‍♂️ I did everything I could think of.


----------



## CS9K

Godhand007 said:


> So I just noticed that we can increase the voltage on reference RX 6000 cards. I have a reference RX 6900 XT and I am able to keep it cool enough. Has anyone tested some stable settings with the new voltages? I am thinking of putting it at 1.2V (1200mV) to get close to at least a 2700MHz actual clock.
> 
> Also, does a change to the voltage through MPT impact the boost behavior or cause extra power consumption (apart from the expected increase due to voltage)?


Yes, it completely takes over the stock boost behavior for voltage. When one sets a voltage via the "temp dependent vmin" trick, _any_ time the card boosts up near your max clock, it feeds the card the voltage that you set, which is great for bench runs, but something I would absolutely never daily-drive.

We do not yet have BIOS flashing/modding capability with Radeon 6000 series GPUs.


----------



## lawson67

LtMatt said:


> Yes. I've since bought another one as I regretted selling it.
> 
> New sample has a slightly better core clock, but slightly worse memory.


I've got one coming on Monday and I've bought a Bykski water block for it


----------



## LtMatt

Interested to get your thoughts folks, I have never seen anything like this before. 

I decided to take my new Toxic apart again to see if I could improve the delta. It was really bad when I first got it, 12-14c at idle, but surprisingly the load delta was not much worse at 20c peaks at 350-375W load.

I managed to improve my idle delta with a liquid metal re-paste. Got it down to 5-8c, which seemed decent considering where I was initially.

I decided to have another crack at the whip and take it apart again, remembering how I had my first Toxic operating at 1-2c idle delta difference.

I checked my initial liquid metal application and it was solid, evenly spread. Not too much, not too little, for both GPU die and AIO block. Perfect. No indication that it had all pooled to one side of the die either.

Seeing that application warmed the cockles of my heart, although my optimism was about to receive a swift knee to the groin. 

I re-attached the retention bracket to the AIO block and powered up my system. Opened Radeon Software tuning menu, whispering prayers to the holy lord for a better delta.

The lord thanked me with a 10-11c delta.  It was 12-14c when I first bought it, but I'd got it down to 6-8c with my first re-paste and re-mount job. I saw deltas of 15-20c at 350-375W with my first re-paste job which was okay. Crap! 

Idle Delta









Decided to run Cod Vanguard at 4K max settings as it draws up to 375W at full whack using these clock settings just to see how bad the delta would be.









Was astonished to see a delta of just 7c, and I was studying the overlay like a curtain twitcher on crack and generally it was 5-6c delta in game, at 350W-384W+.









What the hell. How is that possible? How can the idle delta be so large, but the load delta so unbelievably fantastic and 10c better than my old Toxic at the same power settings, albeit that could not clock so high. This is using the stock 360 AIO. 

Answers on a postcard please because I'm truly stumped.  



lawson67 said:


> I've got one coming on Monday and I've bought a Bykski water block for it


Why dude? Your Powercolor was a great sample. I'd honestly be surprised if you get a better sample than what you already had!

Where did you get it from and how much?


----------



## J7SC

LtMatt said:


> Interested to get your thoughts folks, I have never seen anything like this before.
> 
> I decided to take my new Toxic apart again to see if I could improve the delta. It was really bad when I first got it, 12-14c at idle, but surprisingly the load delta was not much worse at 20c peaks at 350-375W load.
> 
> I managed to improve my idle delta with a liquid metal re-paste. Got it down to 5-8c, which seemed decent considering where I was initially.
> 
> I decided to have another crack at the whip and take it apart again, remembering how I had my first Toxic operating at 1-2c idle delta difference.
> 
> I checked my initial liquid metal application and it was solid, evenly spread. Not too much, not too little, for both GPU die and AIO block. Perfect. No indication that it had all pooled to one side of the die either.
> 
> Seeing that application warmed the cockles of my heart, although my optimism was about to receive a swift knee to the groin.
> 
> I re-attached the retention bracket to the AIO block and powered up my system. Opened Radeon Software tuning menu, whispering prayers to the holy lord for a better delta.
> 
> The lord thanked me with a 10-11c delta.  It was 12-14c when I first bought it, but I'd got it down to 6-8c with my first re-paste and re-mount job. I saw deltas of 15-20c at 350-375W with my first re-paste job which was okay. Crap!
> 
> Idle Delta
> View attachment 2534460
> 
> 
> Decided to run Cod Vanguard at 4K max settings as it draws up to 375W at full whack using these clock settings just to see how bad the delta would be.
> View attachment 2534462
> 
> 
> Was astonished to see a delta of just 7c, and I was studying the overlay like a curtain twitcher on crack and generally it was 5-6c delta in game, at 350W-384W+.
> View attachment 2534463
> 
> 
> What the hell. How is that possible? How can the idle delta be so large, but the load delta so unbelievably fantastic and 10c better than my old Toxic at the same power settings, albeit that could not clock so high. This is using the stock 360 AIO.
> 
> Answers on a postcard please because I'm truly stumped.
> (...)


I don't have your type of card, so I can't check, but I can think of a couple of things.

First, when the card is idling instead of being under 3D load, does the built-in pump (Asetek?) slow down ? That certainly would explain at least some of it...same if the fans slow down.

Second, it could be on the chip itself. I still have 'CrazyHorse', an Intel 6C/12T 4960X ES / 'Engineering Sample' release right before production. I call it CrazyHorse as a.) it would hit 5 GHz at relatively low core-v and serious under-spec power consumption, and b.) perhaps more importantly in this context, in temp software utilities (no matter which one), it always showed one core at or even below ambient at idle (system has an AIO). That is obviously not possible, and under load, said core temp was right in line with the others...


----------



## Bobbydo

LtMatt said:


> Interested to get your thoughts folks, I have never seen anything like this before.
> 
> I decided to take my new Toxic apart again to see if I could improve the delta. It was really bad when I first got it, 12-14c at idle, but surprisingly the load delta was not much worse at 20c peaks at 350-375W load.
> 
> I managed to improve my idle delta with a liquid metal re-paste. Got it down to 5-8c, which seemed decent considering where I was initially.
> 
> I decided to have another crack at the whip and take it apart again, remembering how I had my first Toxic operating at 1-2c idle delta difference.
> 
> I checked my initial liquid metal application and it was solid, evenly spread. Not too much, not too little, for both GPU die and AIO block. Perfect. No indication that it had all pooled to one side of the die either.
> 
> Seeing that application warmed the cockles of my heart, although my optimism was about to receive a swift knee to the groin.
> 
> I re-attached the retention bracket to the AIO block and powered up my system. Opened Radeon Software tuning menu, whispering prayers to the holy lord for a better delta.
> 
> The lord thanked me with a 10-11c delta.  It was 12-14c when I first bought it, but I'd got it down to 6-8c with my first re-paste and re-mount job. I saw deltas of 15-20c at 350-375W with my first re-paste job which was okay. Crap!
> 
> Idle Delta
> View attachment 2534460
> 
> 
> Decided to run Cod Vanguard at 4K max settings as it draws up to 375W at full whack using these clock settings just to see how bad the delta would be.
> View attachment 2534462
> 
> 
> Was astonished to see a delta of just 7c, and I was studying the overlay like a curtain twitcher on crack and generally it was 5-6c delta in game, at 350W-384W+.
> View attachment 2534463
> 
> 
> What the hell. How is that possible? How can the idle delta be so large, but the load delta so unbelievably fantastic and 10c better than my old Toxic at the same power settings, albeit that could not clock so high. This is using the stock 360 AIO.
> 
> Answers on a postcard please because I'm truly stumped.
> 
> 
Why dude? Your Powercolor was a great sample. I'd honestly be surprised if you get a better sample than what you already had!

Where did you get it from and how much?


Wait, I had this card (the Extreme version). Were you able to use those settings and run Timespy? Mine always crashed above 2764MHz max. I regret selling it.


----------



## LtMatt

J7SC said:


> I don't have your type of card, so I can't check, but I can think of a couple of things.
> 
> First, when the card is idling instead of being under 3D load, does the built-in pump (Asetek?) slow down? That certainly would explain at least some of it... same if the fans slow down.
> 
> Second, it could be on the chip itself. I still have 'CrazyHorse', an Intel 6C/12T 4960X ES / 'Engineering Sample' release right before production. I call it CrazyHorse as a.) it would hit 5 GHz at relatively low core-v and serious under-spec power consumption, and b.) perhaps more importantly in this context, in temp software utilities (no matter which one), it always showed one core at or even below ambient at idle (system has an AIO). That is obviously not possible, and under load, said core temp was right in line with the others...


No, pump runs the same speed regardless. Apparently if it ran slower it could shorten its life according to Asetek. That's third party information though, so may not be true. But I can confirm the pump speed is always maxed out. 


Bobbydo said:


> Wait, I had this card (the Extreme version). Were you able to use those settings and run Timespy? Mine always crashed above 2764MHz max. I regret selling it.


No, this is game stable only. My previous Toxic could only do 2770 stable in the same games, this one can do 2850. 

Old Toxic could do Timespy stable at 2820Mhz if I had the air con feeding it cold air. 

I've not tested this one yet in the same scenario, but hopefully it can do higher than that.


----------



## cfranko

What is the ideal delta between the hotspot and coolant temperature?


----------



## Godhand007

CS9K said:


> Yes, it completely Takes over the stock boost behavior for voltage. When one sets a voltage via the "temp dependent vmin" trick, _any_ time the card boosts up near your max clock, it feeds it the voltage that you set, which is great for bench runs, but something I would absolutely never daily-drive.
> 
> We do not yet have bios flashing/modding capability with Radeon 6000 series GPU's.


Thanks for the info. I was actually able to get 2700MHz stable at 1.2V. Previously my card could only do 2550MHz stable (used ~4hrs of TimeSpy/Extreme GT2 for testing both clocks). Hopefully the guys at Igor's can work their magic and give us more control over voltage, as I do see a noticeable performance gain from the extra 150MHz.

Also, I can run games and score runs on the 3Dmark benchmarks at much higher frequencies but unless it is 4hrs stable on GT2, I don't consider it a 'proper' overclock.


----------



## Bobbydo

Guys, I decided to go with liquid metal. Is thermaltake silver any good?


----------



## cfranko

Bobbydo said:


> Guy, I decided to go with liquid metal. Is thermaltake silver any good?


Yes, the Thermalright Silver King is good.


----------



## LtMatt

Silver King was what I just used.


----------



## CS9K

LtMatt said:


> What the hell. How is that possible? How can the idle delta be so large, but the load delta so unbelievably fantastic and 10c better than my old Toxic at the same power settings, albeit that could not clock so high. This is using the stock 360 AIO.
> 
> Answers on a postcard please because I'm truly stumped.


The way "hotspot" was described to me: The GPU die has an array of thermal sensors across it. "GPU Temp" is usually the average of all of the sensors, and "Hotspot" is simply the warmest of those sensors. 

It's why I tell folks not to get _too_ wrapped around the axle about hotspot temp, unless you're looking at load deltas of 25C+

That said, at idle on the desktop, it's probably putting most/all of those GPU core bits to sleep, and only running the bare minimum of core components to run the desktop. Your water is keeping the idle/off parts of the core nice and cool (hence the low average), and only the active cores are registering as hotspot.
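That average-vs-max description is easy to model. Below is a toy Python sketch with made-up sensor readings (hypothetical values, not real telemetry) showing why an idle delta can come out larger than a load delta:

```python
# Toy model of "GPU Temp" vs "Hotspot" reporting (hypothetical values).
# "GPU Temp" is roughly the mean of the die's sensor array;
# "Hotspot" (junction) is simply the hottest single sensor.

def gpu_temp(sensors):
    return sum(sensors) / len(sensors)

def hotspot(sensors):
    return max(sensors)

# Idle: most of the die is power-gated and water-cooled down near coolant
# temperature, but the one block still driving the desktop sits warmer.
idle = [28.0] * 63 + [42.0]

# Load: the whole die is active, so every sensor reads close to the peak.
load = [58.0] * 60 + [60.0, 61.0, 62.0, 63.0]

idle_delta = hotspot(idle) - gpu_temp(idle)  # max vs. a mean of mostly-cool sensors
load_delta = hotspot(load) - gpu_temp(load)  # max vs. a mean of uniformly-warm sensors

print(f"idle delta: {idle_delta:.1f}C, load delta: {load_delta:.1f}C")
```

With most of the die asleep, the mean gets dragged down while the hottest sensor stays put, so in this toy example the idle delta lands around 14C against a load delta of about 5C: the same inversion being discussed here.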


----------



## LtMatt

CS9K said:


> The way "hotspot" was described to me: The GPU die has an array of thermal sensors across it. "GPU Temp" is usually the average of all of the sensors, and "Hotspot" is simply the warmest of those sensors.
> 
> It's why I tell folks not to get _too_ wrapped around the axle about hotspot temp, unless you're looking at load deltas of 25C+
> 
> That said, at idle on the desktop, it's probably putting most/all of those GPU core bits to sleep, and only running the bare minimum of core components to run the desktop. Your water is keeping the idle/off parts of the core nice and cool (hence the low average), and only the active cores are registering as hotspot.


Perhaps, but have you ever seen anyone have a higher delta at idle than under heavy load?


----------



## jonRock1992

Bobbydo said:


> Guy, I decided to go with liquid metal. Is thermaltake silver any good?


I'm not sure, but the best that I have used is Thermalright Silver King, if that's what you meant. I like it more than conductonaut.


----------



## The EX1

Bobbydo said:


> I hate this damn card. 😐🤦🏾‍♂️ I did everything I could think of.


You got your junction temp into the low 80s before which is a perfectly fine temp for the card at 350w+. Why are you continuing to mess with it? Especially with how little air that NZXT case is allowing your front radiator to have.


----------



## cfranko

The EX1 said:


> You got your junction temp into the low 80s before which is a perfectly fine temp for the card at 350w+. Why are you continuing to mess with it? Especially with how little air that NZXT case is allowing your front radiator to have.


80C junction is terrible at 350 watts. When you build a watercooling system, the whole point is to get good temps, not to sit around like there's nothing wrong when your junction is at 80.


----------



## CS9K

LtMatt said:


> Perhaps, but have you ever seen anyone have a higher delta at idle than under heavy load?


I haven't seen anyone pay attention to Delta-T when not under load. 

If you look at my explanation above, it makes perfect sense why Delta-T at idle is how it is; you're comparing an average (core) to a "peak" (hotspot) value.


----------



## Kawaz

Artifacting my way towards that 3100mhz. Fastest pass at 3090 in wattman.


----------



## SoloCamo

CS9K said:


> I haven't seen anyone pay attention to Delta-T when not under load.
> 
> If you look at my explanation above, it makes perfect sense why Delta-T at idle is how it is; you're comparing an average (core) to a "peak" (hotspot) value.


Correct, at least with all the Vega cards I ran that displayed the values.


----------



## J7SC

LtMatt said:


> Perhaps, but have you ever seen anyone have a higher delta at idle than under heavy load?


...it is a bit strange indeed, but your load temps and that delta are beyond excellent, and those are the more important ones. Depending on your PC case and arrangement, you could try leaving the front cover off, setting the fans to 100%, and seeing if the idle temps/delta change. Another question would be whether there's something 'light' running in Windows during what seems to be 'idle' that uses some extra GPU, but not enough to kick up voltages etc.

For the record, the recent HWInfo I posted w/ 358W + 'x' and 2700 > 2660 MHz max had a minimum (idle) delta of 3 C and a max load delta of 19 C.


----------



## LtMatt

CS9K said:


> I haven't seen anyone pay attention to Delta-T when not under load.
> 
> If you look at my explanation above, it makes perfect sense why Delta-T at idle is how it is; you're comparing an average (core) to a "peak" (hotspot) value.


Well, yours is the only logical answer I've seen so far, tbf. But I've never seen this behaviour before. Hotspot has been a thing since the Vega 64 Liquid, and I've had around 2-8 of each top-tier card per generation, from Vega to Radeon VII to 5700 XT (had 8 of those, lol) and onwards. The delta has always been bigger under load, not the other way around, so this is definitely a strange anomaly. 


J7SC said:


> ...it is a bit strange indeed, but your load temps and that delta are beyond excellent, and those are the more important ones. Depending on your PC case and arrangement, you could try leaving the front cover off, setting the fans to 100%, and seeing if the idle temps/delta change. Another question would be whether there's something 'light' running in Windows during what seems to be 'idle' that uses some extra GPU, but not enough to kick up voltages etc.
> 
> For the record, the recent HWInfo I posted w/ 358W + 'x' and 2700 > 2660 MHz max had a minimum (idle) delta of 3 C and a max load delta of 19 C.


I agree completely. I'm delighted and I don't plan on touching it now, since I'd likely only make it worse if I tried again. Hopefully I don't need to re-paste down the line.

Your delta is what I would expect, but it seems my values are almost completely reversed. It's lucky I decided to test it under heavy load first, as seeing an 11C idle delta had me initially thinking it was a really bad mount for whatever reason.


----------



## J7SC

LtMatt said:


> (...)
> 
> I agree completely. I'm delighted and I don't plan on touching it now, since I'd likely only make it worse if I tried again. Hopefully I don't need to re-paste down the line.
> 
> Your delta is what I would expect, but it seems my values are almost completely reversed. It's lucky I decided to test it under heavy load first, as seeing an 11C idle delta had me initially thinking it was a really bad mount for whatever reason.


...yeah, don't touch it as it is superb for an AIO. I won't touch mine either, unless there's some deterioration down the line w/ my custom loop setup. At stock (no MPT PL), my max Hotspot is in the high 40s / low 50s, and even with the 20%+ extra juice below, it seems under control, given max values in the vbios...









...re. Hotspot in 'general terms', that parameter must have been there for a while, even if earlier GPUs (eg. NVidia) didn't show it until recently. It does play a sizable role in the boost / throttle algorithms, especially with the more granular boost / throttle steps with certain cards.


----------



## The EX1

cfranko said:


> 80C junction is terrible at 350 watts. When you build a watercooling system, the whole point is to get good temps, not to sit around like there's nothing wrong when your junction is at 80.


That would be true if he had an abundance of radiator surface area and good airflow. Two things he does not have with a choked off front intake that then also feeds into the top radiator. So yes, 80C junction at 350W+ with around 40C fluid temps is fine.

Edit: It doesn't matter anymore, I guess, since he remounted the block again and it seems to have made temps worse. He needs to use the proper 0.5mm and 1mm pads and thick thermal paste. Once his mount is correct, he can then decide on upgrading his case if he wants better thermals. I have built in that exact case before and it is miserable for temperatures. It looks like he sandwiched the front radiator bracket between the radiator and fans. This forces the fans up against the front panel and chokes them.


----------



## Kawaz

My delta @3000mhz in wattman, 606w core load in furmark with liquid metal.
But first boot of the day, so watertemp was low, probably 25 ish.

Edit: No way im running furmark with those settings any longer


----------



## LtMatt

Kawaz said:


> My delta @3000mhz in wattman, 606w core load in furmark with liquid metal.
> But first boot of the day, so watertemp was low, probably 25 ish.
> 
> Edit: No way im running furmark with those settings any longer


I'm amazed you ran it at all.


----------



## Kawaz

LtMatt said:


> I'm amazed you ran it at all.


yeah, was a very fast run. Surprised the temps didnt shoot up more than they did.


----------



## cfranko

Kawaz said:


> My delta @3000mhz in wattman, 606w core load in furmark with liquid metal.
> But first boot of the day, so watertemp was low, probably 25 ish.
> 
> Edit: No way im running furmark with those settings any longer


Are you out of your mind


----------



## Kawaz

cfranko said:


> Are you out of your mind


At this point, pretty much yeah. Props to XFX for making a good card with only dual 8pins


----------



## Bobbydo

CS9K said:


> I can not tell you how much it means that you took my advice. Thank you so much <3 I only want to see you enjoy your GPU problem-free, and it does mean a lot that you were willing to try different things 💗
> 
> I do tech support for a living, and my account age should give you an idea of how long I've been playing with computers. PC gaming is my jam, and I hope we can get your issue figured out 💗
> 
> 
> Does it continue to be fast once the loop is full of water, though? (thinking back to the big bubble that wouldn't move) A restriction that slows water may not slow air as much as it would water.
> 
> Aside from a possible flow restriction somewhere, I don't have any other thoughts other than to try _one_ more thing:
> 
> Pull the thermal pads off of the GPU, and _carefully_ re-mount the PCB to the block (you can re-use paste that is on there for this step, as this test is temporary). Leave the power target limit and clock speed at stock, aim a fan at the GPU so air is (hopefully) blowing between the block and card, and start Heaven or Superposition, and see if your temperatures behave differently. You only need to run the benchmark for a few seconds or tens of seconds. Doing this will rule out a mounting issue.
> 
> If your core temperatures are suddenly better during that test, then we know it is the thermal pads causing core-to-block mounting issues.


I tried liquid metal. It was easy. But temps still go to 67C with fans at full speed under a stress test, hotspot 85C max. Isn't that too high for liquid metal? The backplate was a little hot.


----------



## Kawaz

Bobbydo said:


> I tried liquid metal. It was easy. But temps still go to 67C with fans at full speed under a stress test, hotspot 85C max. Isn't that too high for liquid metal? The backplate was a little hot.


What rads are you running? And how was water temp.


----------



## LtMatt

Kawaz said:


> At this point, pretty much yeah. Props to XFX for making a good card with only dual 8pins


Right, I forgot that the Merc line only had 2x8.


----------



## Bobbydo

Bobbydo said:


> I tried liquid metal. It was easy. But temps still go to 67C with fans at full speed under a stress test, hotspot 85C max. Isn't that too high for liquid metal? The backplate was a little hot.


I mounted the card normally instead of vertically and temps went down. Now max temp is 50C and 70C hotspot, with liquid metal. This stuff is amazing. 😍 And one of my rads is very thin, like 30mm max.


----------



## Bobbydo

Kawaz said:


> What rads are you running? And how was water temp.


Thanks guys. Now everything is cool.


----------



## CS9K

Bobbydo said:


> I mounted the card normally instead of vertically and temps went down. Now max temp is 50C and 70C hotspot, with liquid metal. This stuff is amazing. 😍 And one of my rads is very thin, like 30mm max.


Glad to hear that temperatures are finally under control! Nothing wrong with 30mm radiators, I dump 500W+ into my loop and the fans barely have to turn past the 45% duty cycle they all idle at, to keep water temp at like 8C Delta-T over ambient.


----------



## Bobbydo

Probably that little bubble is what caused the problem. It's not in there anymore; I removed the vertical bracket.


----------



## CS9K

LtMatt said:


> Well, yours is the only logical answer I've seen so far, tbf. But I've never seen this behaviour before. Hotspot has been a thing since the Vega 64 Liquid, and I've had around 2-8 of each top-tier card per generation, from Vega to Radeon VII to 5700 XT (had 8 of those, lol) and onwards. The delta has always been bigger under load, not the other way around, so this is definitely a strange anomaly.
> 
> I agree completely. I'm delighted and I don't plan on touching it now, since I'd likely only make it worse if I tried again. Hopefully I don't need to re-paste down the line.
> 
> Your delta is what I would expect, but it seems my values are almost completely reversed. It's lucky I decided to test it under heavy load first, as seeing an 11C idle delta had me initially thinking it was a really bad mount for whatever reason.


There may be more to it at idle than my generalized explanation, but as @J7SC said, your load Delta-T is fantastic, so in the grand scheme of things, you're good to go!


----------



## Bobbydo

Bobbydo said:


> Probably that little bubble is what caused the problem. It's not in there anymore; I removed the vertical bracket.


----------



## CS9K

Bobbydo said:


> View attachment 2534653
> 
> View attachment 2534652


THAT is more like it! Happy for you! 💗


----------



## Bobbydo

Thank you guys for all the help. Really. 
Why doesn't everyone use liquid metal?


----------



## tolis626

tolis626 said:


> Hey guys, quick question. Any of you peeps playing Warzone at 1440p with these cards? I'm asking because I think I may have some sort of issue. See, I mostly get around 180-220 fps, but I sometimes get dips to below 144 fps causing stuttering (bouncing between over and under the Freesync range of my monitor). What I've noticed is that in Warzone, my GPU usage is usually rather low and sporadic (60% one moment, 90% the next, then 50, then 80 etc), and so are my clocks, bouncing between full boost (about 2560MHz) and like 2100MHz, resulting in some jarring interruptions to the buttery smoothness I expect. Weirdly, Modern Warfare multiplayer shows no such issues. FPS is constantly over 200, clocks are practically locked to 2560MHz and GPU usage is over 90% at all times.
> 
> Any ideas?


Hey, my post got buried by admittedly much more interesting stuff (I wanna go water too, but funds are tight right now  ), but this kept bugging me so I decided to bump it again and post some screenies to show what I meant. First image is after a round of Warzone. Choppy clockspeed chart, choppy utilization chart, choppy everything. Gameplay was ok, but it does dip occasionally to like 140FPS (when usually it's in the 200 range when the card is fully utilized). Second image is after playing MW multiplayer for a while. Smooth as butter, framerates were stable and very high, no hitches, no nothing, card is almost fully utilized all of the time, just like it should be. Any thoughts?

















PS: When playing Warzone I sometimes get a black screen for like a split second and then it continues playing normally. At first I thought it was my screen (my old Samsung CHG70) that's probably dying and doing a lot of weird stuff, but I got a new one yesterday (Samsung G7 32", it's damn glorious) and it's doing the same thing. I never crash in Warzone (or anything else for that matter) with the settings I have now (500MHz min clock, 2600MHz max, 1150mV in RS, 330W MPT PL with +6% in RS so like 350W, 2150MHz FT RAM). I'm kind of thinking it may be the SoC clock or FCLK, since I've bumped those slightly to 1267MHz and 2000MHz respectively. Ideas?

PS 2 : @Kawaz Hey man, nice clocks you got there. What is your PSU using for its power generation? Uranium or Thorium?


----------



## CS9K

tolis626 said:


> Hey, my post got buried by admittedly much more interesting stuff (I wanna go water too, but funds are tight right now  ), but this kept bugging me so I decided to bump it again and post some screenies to show what I meant. First image is after a round of Warzone. Choppy clockspeed chart, choppy utilization chart, choppy everything. Gameplay was ok, but it does dip occasionally to like 140FPS (when usually it's in the 200 range when the card is fully utilized). Second image is after playing MW multiplayer for a while. Smooth as butter, framerates were stable and very high, no hitches, no nothing, card is almost fully utilized all of the time, just like it should be. Any thoughts?
> View attachment 2534667
> 
> View attachment 2534668
> 
> 
> PS: When playing Warzone I sometimes get a black screen for like a split second and then it continues playing normally. At first I thought it was my screen (my old Samsung CHG70) that's probably dying and doing a lot of weird stuff, but I got a new one yesterday (Samsung G7 32", it's damn glorious) and it's doing the same thing. I never crash in Warzone (or anything else for that matter) with the settings I have now (500MHz min clock, 2600MHz max, 1150mV in RS, 330W MPT PL with +6% in RS so like 350W, 2150MHz FT RAM). I'm kind of thinking it may be the SoC clock or FCLK, since I've bumped those slightly to 1267MHz and 2000MHz respectively. Ideas?
> 
> PS 2 : @Kawaz Hey man, nice clocks you got there. What is your PSU using for its power generation? Uranium or Thorium?


TL;DR: What I think you're seeing is perfectly normal behavior.

Rationale: Today's battle royale shooters are notoriously CPU-limited in a lot of cases, and the behaviour of your GPU is about what I would expect as it sits and waits on your CPU to feed it data before it can pump out the next frame.

There _are_ some edge cases where the super-aggressive power-savings built into modern AMD drivers can cause stuttering in CPU-limited scenarios, as some of the CU's in the GPU will be put to sleep while waiting for work, then have to be woken back up when work comes in. This will manifest as stuttering during light loads, where there may not or should not be stuttering otherwise.

I don't know if the above issue is adding to the CPU-limited stuttering issue that I think you have, but going from 240fps to 140fps sounds completely normal for a game that can push both your CPU and GPU to their limits depending on what's going on. Keep in mind quadratic growth: for every additional person in the server and on screen, the game has to send that person's data to everyone else, and vice versa. Now add that up to 120 players or whatever it caps out at nowadays, and that's a LOT of crunching for your CPU to do.

*Edited for clarity.
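That player-count cost is quadratic in practice; a naive pairwise-broadcast sketch in Python (hypothetical player counts, not Warzone's actual netcode) shows how quickly it piles up:

```python
# Naive model: every player's state update must reach every other player,
# so one server tick costs n * (n - 1) point-to-point updates.

def updates_per_tick(n_players: int) -> int:
    return n_players * (n_players - 1)

for n in (10, 60, 120, 150):
    print(f"{n:3d} players -> {updates_per_tick(n):6d} updates/tick")

# Doubling the lobby from 60 to 120 players roughly quadruples the work,
# which is one reason big battle-royale lobbies lean so hard on the CPU.
```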


----------



## tolis626

CS9K said:


> TL;DR: What I think you're seeing is perfectly normal behavior.
> 
> Rationale: Today's battle royale shooters are notoriously CPU-limited in a lot of cases, and the behaviour of your GPU is about what I would expect as it sits and waits on your CPU to feed it data before it can pump out the next frame.
> 
> There _are_ some edge cases where the super-aggressive power-savings built into modern AMD drivers can cause stuttering in CPU-limited scenarios, as some of the CU's in the GPU will be put to sleep while waiting for work, then have to be woken back up when work comes in. This will manifest as stuttering during light loads, where there may not be stuttering.
> 
> I don't know if this is adding to the issue, but going from 240fps to 140fps sounds completely normal for a game that can push both your CPU and GPU to its limits depending on what's going on. Keep in mind exponential growth: for every additional person in the server and on screen, it has to send that person's data to everyone else, and vice versa. Now add that up to 120 players or whatever it caps out at nowadays, and that's a LOT of crunching for your CPU to do.


I was quite sure that that would be the answer, I just wanted to check with you guys since many of you here have (way) more experience than I do.

I'm going on a little rant, sorry, but here I go. Performance wise, the game used to run fine at the beginning. I had a more modest setup then, with the 3800X and 5700XT, and I was consistently above 110FPS, usually around 130-140. The game had other issues, but performance was not one of them. Then came the Cold War integration, Raven took the reins and the "new" map was introduced. It seemed that I was getting less and less performance with each update. My 5700XT that was once killing it was only getting like 90FPS average with dips in the high 70s-low 80s. And I had changed nothing in my system hardware wise at that point. Upgrading to the 5900X was a definite improvement by itself, but not what I would expect if I was just CPU bound. Then came the 6900XT, and sure enough, it destroys the 5700XT in performance, but still I feel like the game isn't behaving like it's supposed to. I mean, ok, there's the 5950X for the CPU (and I guess Alder Lake now) and there are higher-clocking 6900XTs or 3090s, but come on, my system is basically the best you can get for gaming right now (outside of RT). It's not like with my previous hardware: that was great, but if something ran poorly, I knew I was basically mid-range and there was the 3900X/3950X/10900K and the 2080ti, so someone was probably getting the performance I wanted. But now, if I'm not getting it, probably no one is. I get what you're saying, and I wouldn't even be talking about it if the game didn't use to run much better.

Anyway. The TL;DR is that Warzone sucks balls, they definitely did something that decreased performance and I'm sitting here like an idiot splitting hairs because I feel like I should be getting better performance when I'm using a 240Hz monitor and I'm practically maxing it out most of the time. But damn it I'm an idiot with a right to know! 

Thanks for taking the time to reply man, appreciate it!


----------



## CS9K

tolis626 said:


> Anyway. The TL;DR is that Warzone sucks balls,* they definitely did something that decreased performance* and I'm sitting here like an idiot splitting hairs because I feel like I should be getting better performance when I'm using a 240Hz monitor and I'm practically maxing it out most of the time. But damn it I'm an idiot with a right to know!
> 
> Thanks for taking the time to reply man, appreciate it!


For sure! PC gaming is my jam, so helping folks out is allll good.

Bold text basically nails it. You do indeed have what amounts to the most badass gaming rig one can own, and yet, every game is only going to be _as_ efficient as its coding and threading distribution allow. 

When Warzone came out, it was an unoptimized pile of garbage. They _did_ improve the game significantly, but the game is what it is, and well, you've done _your_ due diligence to make sure your system wasn't holding you back. That's all you can do, yanno?


----------



## Skinnered

Also struggling with very high hotspot temps.
Running a reference RX 6900 XT with an Alphacool Eiswolf 2. Although the GPU is capable of 2700MHz+ core and 2150MHz mem speeds with MPT at 350W PL GPU, 360W TDC, fclk 2200MHz, on 1100mV, the hotspot keeps rising to 100C. I even added two extra fans on the inside of the radiator sucking air in, plus the 3 pushing on the outside.
I guess the radiator doesn't cut it? I need more surface area. I could try repasting with better thermal paste; I just used the standard paste from the Eiswolf AIO.


----------



## CS9K

Skinnered said:


> Also struggling with very high hotspot temps.
> Running a reference RX 6900 XT with an Alphacool Eiswolf 2. Although the GPU is capable of 2700MHz+ core and 2150MHz mem speeds with MPT at 350W PL GPU, 360W TDC, fclk 2200MHz, on 1100mV, the hotspot keeps rising to 100C. I even added two extra fans on the inside of the radiator sucking air in, plus the 3 pushing on the outside.
> I guess the radiator doesn't cut it? I need more surface area. I could try repasting with better thermal paste; I just used the standard paste from the Eiswolf AIO.


So long as air _and_ water are moving through your loop, your water system is _probably_ fine. Thermal paste is sometimes what can cause high Delta-T between "core" temp and hotspot temp. A few pages back I put a video up of how to paste using Gelid GC-Extreme; the paste myself and many others here use.


----------



## Skinnered

/\ I will get this asap and repaste.
My delta is 30+, sometimes 40, especially when gaming with ReShade (RTGI included).
Thanks for the heads up.

This GPU really shines when OC'ed, sometimes beating the 3090.


----------



## Godhand007

Skinnered said:


> Also struggling with very high hotspot temps.
> Running a reference RX 6900 XT with an Alphacool Eiswolf 2. Although the GPU is capable of 2700MHz+ core and 2150MHz mem speeds with MPT at 350W PL GPU, 360W TDC, fclk 2200MHz, on 1100mV, the hotspot keeps rising to 100C. I even added two extra fans on the inside of the radiator sucking air in, plus the 3 pushing on the outside.
> I guess the radiator doesn't cut it? I need more surface area. I could try repasting with better thermal paste; I just used the standard paste from the Eiswolf AIO.


Now, this might sound a bit blasphemous: I am using a reference air-cooled card at 1.2V (MPT) and 2700MHz actual clocks, and my hotspot temp sometimes goes over 110C in apps like GT2. Even before the 1.2V MPT trick, my card could go up to 110C hotspot. I have been using it with these settings for about a year and have had no problems with it. One time the hotspot temp even went over 118C and my PC shut down; that was due to extreme dirt and grime on the cooler, but after cleaning it up everything worked as expected. This is just an FYI. We know it's best to keep your PC components as cool as possible (generally), but you're not going to break the card with a 110C hotspot temp.


----------



## Skinnered

Godhand007 said:


> Now, this might sound a bit blasphemous, but I am using a reference air-cooled card at 1.2 V (via MPT) with 2700 MHz actual clocks, and my hotspot temp sometimes goes over 110 C in runs like Time Spy GT2. Even before the 1.2 V MPT trick, my card could go up to 110 C hotspot. I have been using it with these settings for about a year and have had no problems. One time the hotspot temp even went over 118 C and my PC shut down; that was due to extreme dirt and grime on the cooler, but after cleaning it up everything is working as expected. This is just an FYI. It is best to keep your PC components as cool as possible (generally), but you're not going to break the card with a 110 C hotspot temp.


I know it can handle it, but sometimes (and I don't know why only sometimes) it throttles, and it also shuts down and crashes.
Sometimes it runs fine at 104 and sometimes I get crashes at 97.
Probably different reasons; heat vs power or voltage?

I think I can hit 2800 once I get those temps in check; maybe 1.2 V will be needed then too.


----------



## J7SC

CS9K said:


> So long as air _and_ water are moving through your loop, your water system is _probably_ fine. A poor thermal paste application is often what causes a high delta-T between "core" temp and hotspot temp. A few pages back I put up a video of how to paste using Gelid GC-Extreme; the paste myself and many others here use.


Gelid GC Extreme (and, in my applications, thermal putty for the VRAM) really helped to get low temps even with a 500W+ power limit (460W indicated per HWInfo). But I like to focus on the sometimes-ignored back of the 6900 XT instead:





Even a fan and/or heatsink on the _backplate_ can help with overall temps. When I got my Gigabyte 3x8pin 6900 XT, its stock backplate would get searing hot, much hotter than on any other GPU I have. That isn't necessarily a bad thing, as it shows that heat transfer is actually taking place, but still. When I took the card apart to mount the Bykski block, I noticed a couple of things. As already mentioned, the die had a thumb / palm print on it with partially exposed die...

The stock backplate mount was also interesting. On the one hand, it had a 1.5mm or so thick, white and very soft thermal pad sitting between the _back_ of the GPU die (on the caps and all) and the backplate. On the other, there were no other thermal pads on the back, even where there were outlines for them...nothing...nada. When I substituted the Bykski backplate that came with the waterblock, I added thermal putty behind the VRAM on the PCB, reused that white soft pad for the back of the GPU (carefully, as it sits on little caps etc.), and also added Thermalright thermal pads on the back for the power delivery. I then mounted a big aluminum heatsink (per below, from Amazon) on the backplate and even added an Arctic fan.

I was so impressed with the temp improvements on the 6900 XT that I applied the same steps to the 3090 on the dual mobo in the same case of that work-play build. In percentage terms, the 3090 showed even bigger improvements, probably because it has more VRAM, which is also double-sided, plus the hotter-running GDDR6X.


So adding additional cooling to the back of a high-PL, small-nm GPU, no matter the brand, can bring further improvements if you have already done what you can on the business end of the card.


----------



## LtMatt

Not gonna lie, this kind of delta could easily cause an erection. 
Call of Duty: Vanguard Online | 6900 XT Toxic Extreme | 4K Ultra Settings - YouTube


----------



## Godhand007

Skinnered said:


> I know it can handle it, but sometimes (and I don't know why only sometimes) it throttles, and it also shuts down and crashes.
> Sometimes it runs fine at 104 and sometimes I get crashes at 97.
> Probably different reasons; heat vs power or voltage?
> 
> I think I can hit 2800 once I get those temps in check; maybe 1.2 V will be needed then too.


Are you sure the issue is with the GPU and not with your CPU overclock or RAM configuration? I had an issue with random shutdowns with my card; I thought it was PSU-related, but it turned out I had undervolted my CPU and that was causing it. The same can happen if your memory OC is not stable.

FYI, my card can probably do 2800 MHz at 1.2 V in games, but if you want to be sure about your OC, Time Spy GT2 in a loop is the best way to go about it. I had 2700 MHz working fine in Cyberpunk 2077 at default voltage for a long time, but then one day a driver timeout suddenly occurred. So for OC stability you are going to have to sacrifice ~100 MHz on average.


----------



## Skinnered

Godhand007 said:


> Are you sure the issue is with the GPU and not with your CPU overclock or RAM configuration? I had an issue with random shutdowns with my card; I thought it was PSU-related, but it turned out I had undervolted my CPU and that was causing it. The same can happen if your memory OC is not stable.
> 
> FYI, my card can probably do 2800 MHz at 1.2 V in games, but if you want to be sure about your OC, Time Spy GT2 in a loop is the best way to go about it. I had 2700 MHz working fine in Cyberpunk 2077 at default voltage for a long time, but then one day a driver timeout suddenly occurred. So for OC stability you are going to have to sacrifice ~100 MHz on average.


That's a good question, as my Z690 mobo is still in the early raw-BIOS phase and not 100% stable yet.
I only have multicore enhancement enabled, i.e. turbo on all cores, plus some memory enhancements, but nothing too exotic yet.
And I have tested with the GPU at default settings and that was mostly fine.
COD Vanguard keeps freezing during checkpoint saving though.
I followed the scan-and-repair message.

I also find ReShade puts a lot of strain on the GPU, so that's a good test too.
But I will do a Time Spy check.


----------



## Godhand007

Skinnered said:


> That's a good question, as my Z690 mobo is still in the early raw-BIOS phase and not 100% stable yet.
> I only have multicore enhancement enabled, i.e. turbo on all cores, plus some memory enhancements, but nothing too exotic yet.
> And I have tested with the GPU at default settings and that was *mostly fine.*
> COD Vanguard keeps freezing during checkpoint saving though.
> I followed the scan-and-repair message.
> 
> I also find ReShade puts a lot of strain on the GPU, so that's a good test too.
> But I will do a Time Spy check.


At default, the GPU should be completely fine, not just mostly fine. Please check that the CPU voltage being supplied is per Intel specs, and stick to the default XMP profile for memory during testing.
Multiplayer games are the best test of system stability, but we cannot standardize them to produce consistent results. One thing I have learned is to always keep the other components at default and focus on only one component for your OC. Simultaneous OCs on multiple components can cause a lot of confusion and misdirection during diagnosis.


----------



## Skinnered

I had a rough start with the Gigabyte Z690 Aorus Master.
It wasn't until the second or third BIOS that my Samsung 961 SSD was finally supported, for example, and the board is still not 100% stable; I don't know what causes it.

Memory is on XMP 3.0.

As far as I can see there is nothing unusual with the CPU voltages, but I haven't monitored them during workloads.
Can't explain the COD Vanguard freezes during checkpoints.

With every BIOS update I can enable a thing or two extra without immediate crashes though, so hopefully it gets to 100% soon.


----------



## SAN-NAS

Idk but I couldn't help myself and bought an ASRock 6900 XT PHANTOM D on black Friday... It was bundled with ASRock G10 router, Farcry 6, Resident evil 8, and Xbox game pass... Plus was $100 off for $1500 with it all... It's the xtx core but I don't care that much with all the free stuff...

Super excited to try it out, as I've been using Nvidia for the last few years. I do like RT and DLSS, but there aren't many games where I want to use RT now that I'm done with 2077..

Hoping I can get some good clocks out of the core. It has 3x8pin power, but I'm not sure that helps with overclocking on the XTX cores? Are people mainly using Afterburner, and are there any limitations to using it over AMD's?


----------



## J7SC

SAN-NAS said:


> Idk but I couldn't help myself and bought an ASRock 6900 XT PHANTOM D on black Friday... It was bundled with ASRock G10 router, Farcry 6, Resident evil 8, and Xbox game pass... Plus was $100 off for $1500 with it all... It's the xtx core but I don't care that much with all the free stuff...
> 
> Super excited to try it out, as I've been using Nvidia for the last few years. I do like RT and DLSS, but there aren't many games where I want to use RT now that I'm done with 2077..
> 
> Hoping I can get some good clocks out of the core. It has 3x8pin power, but I'm not sure that helps with overclocking on the XTX cores? Are people mainly using Afterburner, and are there any limitations to using it over AMD's?


'grats on your package deal. For OC'ing, you can use Afterburner, but you get more options and better control with the built-in Radeon tuning software. Also download MPT > here ...making sure to read the descriptions.

In a nutshell, MPT allows you to easily raise the power limit w/o vBIOS flashing, but it is best to start in small steps (it can really raise your temps) after you establish a baseline with the stock PL. I also have a 3x8pin 'regular' XTX which can reach as high as 2740 MHz in some benches, but typically settles around 2660 +/- for trouble-free gaming. I did water-cool the card though.


----------



## CS9K

SAN-NAS said:


> Idk but I couldn't help myself and bought an ASRock 6900 XT PHANTOM D on black Friday... It was bundled with ASRock G10 router, Farcry 6, Resident evil 8, and Xbox game pass... Plus was $100 off for $1500 with it all... It's the xtx core but I don't care that much with all the free stuff...
> 
> Super excited to try it out, as I've been using Nvidia for the last few years. I do like RT and DLSS, but there aren't many games where I want to use RT now that I'm done with 2077..
> 
> Hoping I can get some good clocks out of the core. It has 3x8pin power, but I'm not sure that helps with overclocking on the XTX cores? Are people mainly using Afterburner, and are there any limitations to using it over AMD's?


Howdy! There's actually no benefit to using Afterburner if you use the full install of the driver package. The only other program you'll need is "MorePowerTool" to raise the power limit; instructions for it can be found elsewhere in this thread.

I highly recommend setting Afterburner back to all-default settings, then closing and uninstalling it, then running DDU in safe mode to clear all of the Nvidia software off your PC.


----------



## jonRock1992

SAN-NAS said:


> Idk but I couldn't help myself and bought an ASRock 6900 XT PHANTOM D on black Friday... It was bundled with ASRock G10 router, Farcry 6, Resident evil 8, and Xbox game pass... Plus was $100 off for $1500 with it all... It's the xtx core but I don't care that much with all the free stuff...
> 
> Super excited to try it out, as I've been using Nvidia for the last few years. I do like RT and DLSS, but there aren't many games where I want to use RT now that I'm done with 2077..
> 
> Hoping I can get some good clocks out of the core. It has 3x8pin power, but I'm not sure that helps with overclocking on the XTX cores? Are people mainly using Afterburner, and are there any limitations to using it over AMD's?


In Afterburner you can't change memory timing settings.


----------



## SoloCamo

J7SC said:


> Even a fan and/or heatsink on the_ backplate_ can help with overall temps. When I got my Gigabyte 3x8pin 6900XT, its stock backplate would get searing hot, much hotter than any other GPU I have. That isn't necessarily a bad thing as it shows that heat transfer is actually taking place, but still. When I took the card apart to mount the Bykski block, I noticed a couple of things. As already mentioned, the die had a thumb / palm print on it with partially exposed die...


Funny you say that; my 6900 XT backplate is near enough touching my Noctua NH-D15 (with an i9 10900 under it). It must help a bit, because the same setup on my Radeon VII actually seemed to drop the card a few degrees compared to my 4790K with a Noctua NH-U12S.


----------



## tolis626

CS9K said:


> For sure! PC gaming is my jam, so helping folks out is allll good.
> 
> Bold text basically nails it. You do indeed have what amounts to the most badass gaming rig one can own, and yet, every game is only going to be _as_ efficient as its coding and threading distribution allow.
> 
> When Warzone came out, it was an unoptimized pile of garbage. They _did_ improve the game significantly, but the game is what it is, and well, you've done _your_ due diligence to make sure your system wasn't holding you back. That's all you can do, yanno?


A man after my own heart! I love the fact that guys like me (and maybe you too) who were the nerds of old have grown up, become businessmen/engineers/doctors/whatever, but are still the same nerds deep down. My job has nothing to do with computers or gaming, but I love this stuff too much, and I love talking about it with other people who share my sentiments and are equally or more knowledgeable (i.e. not like my friends and family, who would get a stroke and die if I told them I threw 1300€ at a graphics card  ).

Warzone is indeed a steaming pile of garbage. The devs can't seem to catch a break. Fix one thing, another two things break. Ban one hacker, ten take his place. And then there's the trash playerbase who do everything for an extra kill or a win (I swear, if these guys could get one win more by selling their mothers, they'd do it). I've grown to resent the game, but I play with a group of very good friends who we can't hang out IRL easily as we live in different cities.

Anyway... I decided to try out one last thing. I remembered a while ago, I was researching tips to fix the game's performance for a friend who was struggling, and I ran across a post suggesting users of high core count CPUs go into the game's config files (Documents/CODMW/players/advanced.ini) and change the "renderer worker thread count" to half of their CPU cores. So I went there, changed 12 to 6 and performance is quite a bit smoother. No huge sudden dips and more stable framerates and clocks. Quite a bit so too. Take a look at the graphs.
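The tweak above boils down to one line in the config file. A hedged sketch of automating it -- the setting name "RendererWorkerCount" and file layout are assumptions based on forum reports, so check your own Documents/CODMW/players/advanced.ini before editing:

```python
# Hedged sketch of the advanced.ini tweak described above: set the renderer
# worker thread count to half the physical core count. Setting name and
# file format are assumed from forum reports, not official documentation.
import re

def set_worker_count(ini_text, physical_cores):
    """Rewrite the RendererWorkerCount value to half the core count."""
    workers = max(1, physical_cores // 2)
    return re.sub(r"(RendererWorkerCount\s*=\s*)\d+",
                  lambda m: m.group(1) + str(workers), ini_text)

# Toy example mirroring the 12 -> 6 change on a 12-core CPU.
sample = "VideoMemoryScale = 0.85\nRendererWorkerCount = 12\n"
print(set_worker_count(sample, 12))
```

Back the file up first; the game's scan-and-repair will regenerate it if something goes wrong.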








I am quite happy with the result. BUT! (And there's always a but.) I got some weirdness. I launched the game and, as I was waiting for a friend to join me, I joined a match of Shipment 24/7 and it was playing at 20-30 fps. I alt+tab'ed out of the game, went back in and it worked for a bit, but I was getting huge stutters and pauses of up to a whole second. It was unplayable. I quit, went to play Warzone with my buddy, same story, 20-30 FPS. I quit the game, did a quick restart, went back in, and it seemed to be playing fine; then after a few minutes the system suddenly blanked out and rebooted. It took a while for the screen to come back on, and it was black for some reason when it booted back into Windows (it did show the POST screen). I opened MPT, turned the SOC clock down to 1200MHz and FCLK to 1940MHz, rebooted, and then played normally with the above results. No issues whatsoever after that. Weird. I'd been playing with the same MPT settings for a good couple of months without a single crash or problem. Can increasing SOC and FCLK cause such issues? I know I don't need them, as they don't really affect gaming performance, but I thought that, as my card is a dud overclocker, I could at least get something out of it. I was also getting some random black screens lately, for like a split second, as if my monitor turned off and on rapidly, and I'm thinking it may be related.

Anyway, only downside to this WZ tweak is that CCD1 runs hotter. Before it used to peak between 75-80C, now it went to 82C. Oh well.


----------



## LtMatt

tolis626 said:


> A man after my own heart! I love the fact that guys like me (and maybe you too) who were the nerds of old have grown up, become businessmen/engineers/doctors/whatever but are still the same nerds deep down. My job has nothing to do with computers or gaming, but I love this stuff too much and I love talking about it with other people who share my sentiments about it and are equally or more knowledgeable (i.e not like my friends and family who would get a stroke and die if I told them I threw 1300€ on a graphics card  ).
> 
> Warzone is indeed a steaming pile of garbage. The devs can't seem to catch a break. Fix one thing, another two things break. Ban one hacker, ten take his place. And then there's the trash playerbase who do everything for an extra kill or a win (I swear, if these guys could get one win more by selling their mothers, they'd do it). I've grown to resent the game, but I play with a group of very good friends who we can't hang out IRL easily as we live in different cities.
> 
> Anyway... I decided to try out one last thing. I remembered a while ago, I was researching tips to fix the game's performance for a friend who was struggling, and I ran across a post suggesting users of high core count CPUs go into the game's config files (Documents/CODMW/players/advanced.ini) and change the "renderer worker thread count" to half of their CPU cores. So I went there, changed 12 to 6 and performance is quite a bit smoother. No huge sudden dips and more stable framerates and clocks. Quite a bit so too. Take a look at the graphs.
> View attachment 2534736
> 
> I am quite happy by the result. BUT! (And there's always a but) I got some weirdness. I launched the game and, as I was waiting for a friend to join me, I joined a match of Shipment 24/7 and it was playing with 20-30 fps. I alt+tab'ed out of the game, went back in and it was working for a bit, but I was getting huge stutters and pauses for like a whole second. It was unplayable. I quit, went to play Warzone with my buddy, same story, 20-30 FPS. I quit the game, did a quick restart, went back in, it seemed to be playing fine, then after a few minutes the system suddenly blanked out and rebooted. It took a while for the screen to come back on, it was black for some reason when it booted back into Windows (it did show the POST screen). I opened MPT, turned down SOC clock to 1200MHz and FCLK to 1940MHz, rebooted and then played normally with the above results. No issues whatsoever after that. Weird. I'd been playing with the same MPT settings for a good couple of months without a single crash or problem. Can increasing SOC and FCLK cause such issues? I know I don't need them as they don't really affect gaming performance, but I thought that, as my card is a dud overclocker, I could at least get something out of it. I was also getting some random black screens lately for like a split second, like my monitor turned off and on rapidly, and I'm thinking it may be related.
> 
> Anyway, only downside to this WZ tweak is that CCD1 runs hotter. Before it used to peak between 75-80C, now it went to 82C. Oh well.


Yes. I just discovered FCLK was causing my system to reboot while playing Vanguard. I had to drop it from 2200 MHz to 2000 MHz to stop it happening.

It's odd, as it was working fine in other games like Far Cry 6, but Vanguard draws higher wattage and gets close to 400W at times.

Thankfully it seems to make very little difference in games, so I've dialled it back and get no random restarts now.





Warzone is a funny one. So many people have stuttering issues with this game, but oddly I have none. I did not even have to adjust the CPU cores used like you did to get it smooth.

People doubted me, so I uploaded a couple of videos to the tube to prove there was no stuttering at 4K, whether using Highest or Lowest settings.

I had even dialled my Toxic (the first one I owned) back to default stock clocks and power limits, lowered my DRAM frequency to 3600 MHz, removed PBO from the CPU, and it still ran well.
Call of Duty Warzone - Testing for Stuttering with a 6900 XT + 5950X - YouTube
Call of Duty Warzone - Testing for Stuttering with Low Settings + 6900 XT + 5950X - YouTube


----------



## Godhand007

CS9K said:


> Yes, it completely takes over the stock boost behavior for voltage. When one sets a voltage via the "temp dependent vmin" trick, _any_ time the card boosts up near your max clock, it feeds it the voltage that you set, which is great for bench runs, but something I would absolutely never daily-drive.
> 
> We do not yet have bios flashing/modding capability with Radeon 6000 series GPU's.


So I have a question: *Based on your experience*, what are the chances that I brick my card if I run it at 1.2 V (through MPT) as a daily driver? I know it may not be worth it for others, but I do want that 150-200 MHz badly. Also, many of the new 6900 XTs have 1.2 V as default (with proper boost behavior), so it might be okay from that perspective. I have run my card for long periods (8 hrs+) in looped 3DMark benchmarks and the voltage remained at 1175 mV for almost all of that time.


Others can chime in as well.


----------



## LtMatt

Godhand007 said:


> So I have a question: *Based on your experience*, what are the chances that I brick my card if I run it at 1.2 V (through MPT) as a daily driver? I know it may not be worth it for others, but I do want that 150-200 MHz badly. Also, many of the new 6900 XTs have 1.2 V as default (with proper boost behavior), so it might be okay from that perspective. I have run my card for long periods (8 hrs+) in looped 3DMark benchmarks and the voltage remained at 1175 mV for almost all of that time.
> 
> 
> Others can chime in as well.


As long as temps are kept in check, along with the long-term power limits, it will probably be fine. No guarantee though; any time you push things past the factory default settings you take a risk.

The problem with using MPT to increase voltage for 24/7 use, though, is that I believe you basically lock the voltage at 1.2 V constantly once you use the VMIN feature in MPT.

I believe the above is right, but others will correct me if I am wrong.


----------



## tolis626

LtMatt said:


> Yes. I just discovered FCLK was causing my system to reboot while playing Vanguard. Had to drop it down from 2200Mhz, to 2000 to stop it happening.
> 
> It's odd as it was working fine in other games like Far Cry 6, but Vanguard sees higher wattage and gets close to 400W at times.
> 
> It seems to make very little difference in games thankfully so I've dialled it back and no random restarts now.
> 
> 
> Warzone is a funny one. So many people have stuttering issues with this game, but oddly I have none. I did not even have to adjust the cpu cores used like you you did to get it smooth.
> 
> People doubted me, so I uploaded a couple of videos to the tube to prove there was no stuttering at 4K whether using Highest or Lowest settings.
> 
> I had even dialled my Toxic (first one I owned) back to default stock clocks and power limits, lowered my DRAM frequency to 3600Mhz, remove PBO from the CPU and it still ran well.
> Call of Duty Warzone - Testing for Stuttering with a 6900 XT + 5950X - YouTube
> Call of Duty Warzone - Testing for Stuttering with Low Settings + 6900 XT + 5950X - YouTube


Damn, that third guy you killed in the high settings video was oblivious. 

Well, yes, it seems to be running fine on your rig. Although I have to say, there have been times when it ran just fine on mine too, but then another update would break everything for no reason. It's a mess. Having said that, it seems that at 4K you're getting much more stable framerates. I guess at that point you're pretty much 100% GPU limited.

As for FCLK, I think that that may have caused my issues, but I'm not sure. I'll have to test more when I have time. Weird thing is, I didn't even push it that hard, just 2000MHz. Maybe it was SOC clock that was causing it? Maybe it was a fluke? I mean, the weird 20fps issue and the reboot appeared for the first time yesterday, and I've been running the same settings for over 2 months now.


----------



## LtMatt

tolis626 said:


> Damn, that third guy you killed in the high settings video was oblivious.
> 
> Well, yes, it seems to be running fine on your rig. Although I have to say, there have been times when it ran fine just fine on mine too. But then another update would break everything for no reason. It's a mess. Having said that, it seems that at 4K you're getting much more stable framerates. I guess at that point you're pretty much 100% GPU limited.
> 
> As for FCLK, I think that that may have caused my issues, but I'm not sure. I'll have to test more when I have time. Weird thing is, I didn't even push it that hard, just 2000MHz. Maybe it was SOC clock that was causing it? Maybe it was a fluke? I mean, the weird 20fps issue and the reboot appeared for the first time yesterday, and I've been running the same settings for over 2 months now.


Could be. I leave the SOC clock at default in MPT because, outside of synthetics, I see no extra FPS with it set to 1266 MHz or 1300 MHz, so it seems pointless moving it above 1200 MHz.


----------



## EastCoast

Has anyone tried to inject FSR into Time Spy?


----------



## CS9K

LtMatt said:


> As long as temps are kept in check, along with the long-term power limits, it will probably be fine. No guarantee though; any time you push things past the factory default settings you take a risk.
> 
> The problem with using MPT to increase voltage for 24/7 use, though, is that I believe you basically lock the voltage at 1.2 V constantly once you use the VMIN feature in MPT.
> 
> I believe the above is right, but others will correct me if I am wrong.


I will echo LtMatt's comment, @Godhand007. I think most RX 6900 XTs would be perfectly fine with 1.2 V IF that extra 25 mV were applied to the GPU's internal curve. (My RX 6900 XT's _actual_ voltage tops out at 1115 mV via the firmware's internal curve; adding 25 mV to _that_ value gave me an extra 100 MHz core clock.)

The problem with how MPT overrides the voltage is that, as LtMatt said, it completely ignores the stock voltage curve and just feeds voltage into the core with no consideration for load or clock speed. I am not a fan of that, which is why I only left it enabled for a few benches, then reverted back to stock settings.

I think patience will reward us in the end, when we eventually get proper BIOS modification control. Benchmarks are one thing, but longevity and daily-driver settings are another.


----------



## Skinnered

I have not tried the SOC clock, but setting gfxclock from 2666 to 2719 did crash.

Btw, I have my hotspot just under control by upping the pump speed of the Eiswolf 2; I could even lower the fan speed.
Core speed is now stable at ~2700 MHz during long gaming sessions, with temps of 60-65 core and 85-90 hotspot.

I have ordered Gelid Solutions GC-Extreme paste though, to see if I can bring the hotspot temp further down and maybe reach 2800 MHz.

How can one add voltages to the curve?
An offset?


----------



## CS9K

Skinnered said:


> I have not tried the SOC clock, but setting gfxclock from 2666 to 2719 did crash.
> 
> Btw, I have my hotspot just under control by upping the pump speed of the Eiswolf 2; I could even lower the fan speed.
> Core speed is now stable at ~2700 MHz during long gaming sessions, with temps of 60-65 core and 85-90 hotspot.
> 
> I have ordered Gelid Solutions GC-Extreme paste though, to see if I can bring the hotspot temp further down and maybe reach 2800 MHz.
> 
> How can one add voltages to the curve?
> An offset?


So far, we can not edit the BIOS, nor can we flash other BIOS files to our GPUs. Until we get that ability, there is only one way to add voltage to the GPU, and that is via "temperature dependent Vmin", a very not-official method that I discussed in the post above yours. It is not something I would run as a daily-driver setting.


----------



## cfranko

SAN-NAS said:


> Idk but I couldn't help myself and bought an ASRock 6900 XT PHANTOM D on black Friday... It was bundled with ASRock G10 router, Farcry 6, Resident evil 8, and Xbox game pass... Plus was $100 off for $1500 with it all... It's the xtx core but I don't care that much with all the free stuff...
> 
> Super excited to try it out as I've been using Nvidia for the last few years. I do like RT and Dlss but there's not many games that I want to use RT now that I'm done with 2077..
> 
> Hoping I can get some good clocks out of the core. It's a 3x8pin pwr but not sure that helps with overclock on the xtx cores? Are people using Afterburner mainly, any limitations using it over amds?


I have the same card; the air cooler is quite terrible, FYI. At least my temps were pretty bad with the air cooler; not sure if it was an issue specific to my card.


----------



## ZealotKi11er

Has the 3DMark issue been fixed or people can still use the "exploit"?


----------



## LtMatt

ZealotKi11er said:


> Has the 3DMark issue been fixed or people can still use the "exploit"?


Do you mean the tess bug? If so yes scores were removed from Timespy AFAIK.


----------



## ZealotKi11er

LtMatt said:


> Do you mean the tess bug? If so yes scores were removed from Timespy AFAIK.


I know some people removed them manually, but is it still possible to upload new scores? What I mean to ask is: does 3DMark detect invalid scores now?


----------



## Godhand007

CS9K said:


> I will echo LtMatt's comment, @Godhand007. I think most RX 6900 XT's would be perfectly fine with 1.2V IF that extra 25mV was applied to the GPU's internal curve. (My RX 6900 XT's _actual_ voltage tops out at 1115mV via the firmware's internal curve, adding 25mV to _that_ value gave me an extra 100MHz core clock).
> 
> The problem with how MPT overrides the voltage, is that, as LtMatt said, it completely ignores the stock voltage curve and just feeds voltage into the core with no consideration to load or clock speed. I am not a fan of that, which is why I only left it enabled for a few benches, then reverted back to stock settings.
> 
> I think patience will reward us in the end, when we eventually get control bios modification. Benchmarks are one thing, but longevity and daily-driver settings are another.


I think the voltage does drop to 0 when idle with the 1200 mV MPT setup. If I understand you correctly, during load the voltage is going to go straight to 1200mV. So _wait & watch_ is the thing to do for now.
Just a reminder to all though: it's been about a year since these cards were released and we have yet to unlock their full potential.


----------



## J7SC

CS9K said:


> So far, we can not edit the bios nor can we flash other bios files to our GPU's. Until we get that ability, there is only one way to add voltage to the GPU, and that is via "temperature dependent Vmin", a very not-official way that I discussed in the post above yours. It is not something I would run as a daily-driver setting.


I have done a few runs w/ TempDepVmin set to 1.200v (below). For some strange reason, HWInfo (and GPUz etc.) still report only 1.175v, though that actually is an increase from the regular (no TempDepVmin mod) reported values. Is that a sensor thing, or the max allowable to be reported since this is a regular XTX card?


----------



## Bobbydo

J7SC said:


> I have done a few runs w/ TempDepVmin set to 1.200v (below). For some strange reason, HWInfo (and GPUz etc) also report 1.175v though that actually is an increase from the regular (no TempDepVmin mod) reported values. Is that a sensor thing, or the max allowable to be reported as this is a regular XTX card ?
> 
> View attachment 2534834



Guys, sorry, it's me again. My card is using 220W in FH5 and I get around 171fps at 2K Extreme settings, motion blur off. Everything is at stock, not overclocked or anything, just undervolted a little to 1175mV. I set the power limit in MPT to 372-292-55. Will I get more fps if I overclock it a bit?

Is that the reason why temps stay under 53c?

Minimum temperature is 25-30c at idle.








----------



## cfranko

Bobbydo said:


> Guys, sorry, it's me again. My card is using 220W in FH5 and I get around 171fps at 2K Extreme settings, motion blur off. Everything is at stock, not overclocked or anything, just undervolted a little to 1175mV. I set the power limit in MPT to 372-292-55. Will I get more fps if I overclock it a bit?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> View attachment 2534836


Both photos are from the built-in benchmark of Forza Horizon 5 at 1440p, with every setting fully maxed out from the graphics tab; the game's Extreme preset doesn't actually max out all the graphics settings, you have to raise them manually. The first photo is the card running overclocked with extra voltage, around 2800-2900 MHz. The second photo is at stock, around 2400-2500 MHz. As you can see there is a considerable difference, so yes, overclocking does benefit.


----------



## CS9K

Whoof, @cfranko @Bobbydo yall's pictures are killer, man. Thumbnails 4tw.


----------



## CS9K

Godhand007 said:


> - I think the voltage does drop to 0 when idle with the 1200 mV MPT setup. If I understand you correctly, during load the voltage is going to go straight to 1200mV. So _wait & watch_ is the thing to do for now.
> 
> - Just a reminder to all though: it's been about a year since these cards were released and we have yet to unlock their full potential.


- Yes

- It was a little over a year after the Radeon 5000 series came out before progress was made toward bios-unlocking that series: they released July-Aug 2019, and the RedPanda bios showed up around August-September 2020. The bad news is that the bios checks the Radeon 6000 GPUs perform are more complicated than on the 5000 series, and people have yet to figure out how to either circumvent those checks or tell the card to ignore them completely. If you know anyone who's handy with bios modding in hex, has a Radeon 6000 GPU, and has an external bios flasher, their help would be appreciated!


----------



## Bobbydo

cfranko said:


> View attachment 2534840
> 
> View attachment 2534839
> 
> 
> Both photos are from the built in benchmark of Forza Horizon 5 at 1440P. All settings are fully maxed out from the graphics tab in settings, the Extreme preset of the game doesn’t actually Max out all the graphics settings you have to manually set them higher to absolutely max out the settings. The first photo is the card running overclocked with extra voltage. Around 2800-2900 MHz. Second photo is at stock around 2400-2500 MHz. As you can see there is a considerable difference so yes overclocking does benefit.


I have the same settings, but motion blur off. I got 91fps min and 171fps max, no overclock or anything, only voltage lowered to 1175mV. 

So it's cool that the GPU uses less power but gets more fps, then?


----------



## J7SC

CS9K said:


> (...) f you know anyone that's handy with bios modding in hex, has a Radeon 6000 GPU, and has an external bios flasher, their help would be appreciated!


...while they're at it, hopefully they'll boost the XTX VRAM limits as well.


----------



## SAN-NAS

cfranko said:


> I have the same card, the air cooler is quite terrible FYI. At least my temps were pretty bad with the air cooler, not sure if it was an issue specific to my card.


What were you seeing gaming for temp and hotspot? Any luck with OC and did you block yours?


----------



## cfranko

SAN-NAS said:


> What were you seeing gaming for temp and hotspot? Any luck with OC and did you block yours?


With the air cooler I would get a 75c edge temperature and 100c hotspot temperature at 300 watts. Timespy is barely stable at 2630 MHz max frequency, but while gaming 2720 MHz is stable. The 3x8 pin is just there for looks; the card isn't a good overclocker, making the 3x8 pin pretty much useless. I am not saying you got a terrible card and should cancel your order, but I am saying not to have too high expectations. Right now I have a waterblock on it. But again, I personally think the air cooler I had had a mounting pressure issue; I have seen people get better temps than mine with the ASRock Phantom Gaming model.


----------



## snakeeyes111

Is it a bug? Nobody is doing anything about it, so I think it's not being treated as a bug.

Nobody fixed the CPU bug with Nvidia cards, so why should there be a fix for the bug with AMD GPUs?


I mean, over 22k CPU score on a stock 12900K... nobody cares...


----------



## tolis626

snakeeyes111 said:


> Is it a bug? Nobody is doing anything about it, so I think it's not being treated as a bug.
> 
> Nobody fixed the CPU bug with Nvidia cards, so why should there be a fix for the bug with AMD GPUs?
> 
> 
> I mean, over 22k CPU score on a stock 12900K... nobody cares...


Maybe 3DMark got bought by UserBench?


----------



## Bobbydo

CS9K said:


> Whoof, @cfranko @Bobbydo yall's pictures are killer, man. Thumbnails 4tw.


Thanks


----------



## Bobbydo

snakeeyes111 said:


> Is it a bug? Nobody is doing anything about it, so I think it's not being treated as a bug.
> 
> Nobody fixed the CPU bug with Nvidia cards, so why should there be a fix for the bug with AMD GPUs?
> 
> 
> I mean, over 22k CPU score on a stock 12900K... nobody cares...


22k CPU score 😮 I hadn't seen that before.

Update: Just checked 3DMark; rauf seems to reach 26k on the CPU.


----------



## Bobbydo

tolis626 said:


> Maybe 3DMark got bought by UserBench?


----------



## lawson67

LtMatt said:


> Decent score. What clocks is that with?
> 
> Be interested to see what more you can get from it.


Hey @LtMatt, with the card installed on the Toxic Extreme, which position do I need to slide the bios switch to for the fastest bios? Towards the ports on the back of the card, the middle position, or the far right?


----------



## LtMatt

lawson67 said:


> Hey @LtMatt with the card installed on the toxic Extreme which position do i need slide the bios switch for the fastest bios?, towards the ports on the back of the card or the middle position or the far right?


Default position, far left I believe.

That said, I recommend using the middle BIOS (Quiet) as you can adjust the fan speed down to 25% whereas on the performance BIOS it’s locked to 38% or higher. You can always manage power limits with MPT rather than worrying about the BIOS.

What is your stock core clock?


----------



## lawson67

LtMatt said:


> Default position, far left I believe.
> 
> That said, I recommend using the middle BIOS (Quiet) as you can adjust the fan speed down to 25% whereas on the performance BIOS it’s locked to 38% or higher. You can always manage power limits with MPT rather than worrying about the BIOS.
> 
> What is your stock core clock?


I've just got it installed; the default core clock is 2579, not as good as my PowerColor at 2634. What's your default core speed? Also, do you install that Trixx software?


----------



## LtMatt

lawson67 said:


> I've just got it installed default core clock is 2579, not as good as my powercolor at 2634, what's your default core speed?, also do install that Trixx software?


Mine is 2579 or 2584, so very similar. I can't confirm right now as my motherboard decided to die randomly this morning; I've got a replacement coming tomorrow.

The stock clock does not seem to mean too much though. Of all my 6900 XTXs and XTXHs, the ones with the lowest stock clocks have overclocked the highest.


----------



## CS9K

LtMatt said:


> The stock clock does not mean too much though it seems. All my 6900 XTXs and XTXHs, the ones with the lowest clocks have overclocked highest.


Came to say this. I compiled a list of RX 6900 XT stock clocks earlier this year, and to the extent we found any correlation, it was an _inverse_ one between stock clock and max clocks when tuned; the cards with lower stock clocks sometimes had better max overclocks.


----------



## ZealotKi11er

CS9K said:


> Came to say this. I compiled a list of RX 6900 XT stock clocks earlier this year, and if we found any correlation, there was an _inverse_ correlation between stock clock, and max clocks when tuned; The lower stock clocks sometimes had better max overclocks.


Generally speaking, a higher stock clock means a higher-leakage part, and higher-leakage parts OC better with better cooling.


----------



## lawson67

LtMatt said:


> Mine is 2579 or 2584 so very similar. can’t confirm right now as my motherboard decides to die randomly this morning, got a a replacement coming tomorrow.
> 
> The stock clock does not mean too much though it seems. All my 6900 XTXs and XTXHs, the ones with the lowest clocks have overclocked highest.


Well, I am really happy with this card. It's on par with my Red Devil: it can do 20 loops of GT2 @ 2850mhz, and the Devil could do that @ 2870mhz, so only slightly less. I am very pleased with it indeed.


----------



## LtMatt

lawson67 said:


> Well i am really happy with this card its on par with my Red Devil, it can do 20 loops of GT2 @ 2850mhz , the Devil could do that @ 2870mhz so only slightly less but i am very pleased with it indeed


Wow that is very good indeed, pretty sure that’s better than both of my Toxics!


----------



## Godhand007

So here is a question for everyone: since proper voltage control is still some way off, is there anything else I can try in MPT to get higher clocks on my reference card? Currently, with default voltage, it can maintain a 2550Mhz actual clock (GT2 stable).


----------



## CS9K

Godhand007 said:


> So here is a question to all; Since proper voltage control is still some days away, is there anything else I can try to get higher clocks on my reference cards through any other settings in MPT. Currently, with default voltage, it can maintain a 2550Mhz actual clock (GT2 stable)?


To my knowledge, the only changeable settings in MPT *that make sense for daily-driver setups are the power limits for GFX/SOC, fclk speed, and disabling "Deep Sleep" (uncheck all 6 of the "DS_whatever" features on the main tab).

That's all it does right now, but at the same time, that's all we really need to change, as modifying memory speed, memory timings, and properly modifying the SoC/Core voltage curve above stock hard-limits, all will require us to flash a modified bios, which we still can not do.

*Edit: To answer your question, if the voltage slider is all the way to the right, you still can't pass GT2, and your temps are good, then that sounds like about all your card can do. My prior RX 6800 XT on water let me set 2600MHz, and my RX 6900 XT on the same block lets me set 2700Mhz, both daily-driver stable. That's it until I can get more voltage.


----------



## Bobb3rdown

I scored 45 430 in Fire Strike


AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11




www.3dmark.com





I can't wait to get this card on water. 65046 graphics score on air. Pretty much stuck at 2710mhz/350w in Fire Strike. I'm sure it could take more, but I don't really want to give it more than that until I get my custom loop going.


----------



## Godhand007

CS9K said:


> To my knowledge, the only changeable settings in MPT *that make sense for daily-driver setups are power limit for GFX/SOC, fclk speed, and disable "Deep Sleep" (uncheck all 6 of the "DS_whatever" features on the main tab).
> 
> That's all it does right now, but at the same time, that's all we really need to change, as modifying memory speed, memory timings, and properly modifying the SoC/Core voltage curve above stock hard-limits, all will require us to flash a modified bios, which we still can not do.
> 
> *Edit. To answer your question: If the voltage slider is all the way to the right, and you can't pass GT2 and your temps are still good, then that sounds like that's about all your card can do. My prior RX 6800 XT on water let me set 2600MHz, and my RX 6900 XT on the same block lets me set 2700Mhz, both daily-driver stable. That's it until I can get more voltage


Just to clarify, my card can pass a few loops of GT2 with 2650Mhz and as mentioned earlier it can do 2700Mhz as well but with 1.2v. So I think the potential is there. I will try playing around with fclk and see if that lands me near 2700Mhz.


----------



## Maulet//*//

No label on the dual bios switch? (My Red Devil 6900 does have one.) 
Check reviews of your card (that's what I always do when I want to change mine and can't see the label!) 😁


----------



## Bobbydo

CS9K said:


> Came to say this. I compiled a list of RX 6900 XT stock clocks earlier this year, and if we found any correlation, there was an _inverse_ correlation between stock clock, and max clocks when tuned; The lower stock clocks sometimes had better max overclocks.


Guys, I have a problem. Voltage doesn't make a difference for me: whether I undervolt or just leave it at 1200mV, temps are just the same.


----------



## CS9K

Bobbydo said:


> Guys I have a problem. Voltage doesn't make a difference for me. When I undervolt or just leave it at 1200. Temps are just the same.


The "voltage" slider in the Adrenalin/AMD control panel is a bit of a lie.

- The slider, when reduced from its maximum value, applies an _offset_ of that many mV to the entire curve when the card is under 2600MHz or so.
- Above around 2600MHz, the GPU's built-in curve takes over, and nothing you input other than max clock speed will matter.
- Because of this, you _can_ undervolt the GPU for when it is not operating at its maximum clock speed.
- Set your max clock to 2200 or 2400MHz or so, and reduce voltage by 15mV at a time until you become unstable, then raise it 20mV and retest.
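The slider behavior described above can be sketched as a toy model. To be clear, the ~2600MHz takeover point and the linear stock curve below are illustrative assumptions for the sketch, not AMD-documented values:

```python
# Toy model of the Adrenalin "voltage" slider behavior described above.
# Assumptions (not AMD-documented): a linear stock V/F curve and a
# 2600 MHz takeover clock where the firmware's internal curve wins.

CURVE_TAKEOVER_MHZ = 2600

def stock_curve_mv(clock_mhz: float) -> float:
    """Hypothetical stock voltage/frequency curve (linear stand-in)."""
    return 700 + 0.19 * clock_mhz

def applied_voltage_mv(clock_mhz: float, slider_offset_mv: float) -> float:
    """Slider acts as an offset below the takeover clock, is ignored above it."""
    if clock_mhz < CURVE_TAKEOVER_MHZ:
        return stock_curve_mv(clock_mhz) - slider_offset_mv
    return stock_curve_mv(clock_mhz)  # firmware curve takes over, offset ignored

# An undervolt offset of 50 mV helps at 2200 MHz but does nothing at 2700 MHz:
low = applied_voltage_mv(2200, 50)
high = applied_voltage_mv(2700, 50)
```

This is why the undervolt only "sticks" below the card's top clock range, matching the behavior the post describes.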


----------



## CS9K

Godhand007 said:


> Just to clarify, my card can pass a few loops of GT2 with 2650Mhz and as mentioned earlier it can do 2700Mhz as well but with 1.2v. So I think the potential is there. I will try playing around with fclk and see if that lands me near 2700Mhz.


Ah, I understand.

FCLK will not affect your GPU core clock speed in any way, they are separate values.


----------



## J7SC

CS9K said:


> The "voltage" slider in the Adrenalin/AMD control panel is a bit of a lie.
> 
> The slider, when reduced from its maximum value, sets an _offset_ by that amount of mV to the entire curve, when the card is under 2600MHz or so
> Above around 2600MHz, the GPU's built in curve takes over and nothing you input other than max clock speed will matter
> Because of this, you _can_ undervolt the GPU for when it is not operating at its maximum clock speed
> Set your max clock to 2200 or 2400MHz or so, and reduce voltage by 15mV until you become unstable, then raise it 20mV and retest.


I'd like to reiterate a related question I posted recently: do XTX cards _actually show_ 1.200 V in apps such as HWInfo when using the TempDep Vmin override?

On my card, clocks did pick up per my recent post, to between 2740 MHz (effective) and 2840 MHz (initial), but GPU-V showed 1.175V max instead of 1.200 V in repeated runs. Now, that actually is an increase from the 1.156+ V it usually shows, so the _change_ was picked up, just not the spec'ed value. I should add that for the TempDep Vmin, I only changed GPU-V, not SoC.


----------



## cfranko

J7SC said:


> I like to reiterate a related question I posted recently: Do XTX cards _actually show_ 1.200 V in apps such as HWInfo when using the TempDep Vmin override ?


Yes, I have an XTX card and I see 1.250V in HWInfo and GPU-Z using TempDepVmin.


----------



## cfranko

J7SC said:


> but GPU-V showed 1.175V max instead of 1.200 V,


That is because of vdroop: the voltage the GPU receives drops under load. The same goes for CPUs when overclocking; it's normal. You wouldn't see 1.2V at all times under load just because you set it to that.


----------



## J7SC

cfranko said:


> That is because of vdrop, the voltage the GPU receives drops under load, same goes for cpu's as well when overclocking, that is normal. You wouldn't see 1.2V at all times under load just because you set it to that.


Tx, and yeah - but I'm referring to HWInfo 'max' column. If I use stock (instead of _any_ MPT changes, ie. PL and/or TempDep Vmin ), both HWInfo and GPUz show the correct max at 1.175 V.


----------



## CS9K

J7SC said:


> Tx, and yeah - but I'm referring to HWInfo 'max' column. If I use stock (instead of _any_ MPT changes, ie. PL and/or TempDep Vmin ), both HWInfo and GPUz show the correct max at 1.175 V.


cc @cfranko

Okay, story time:

- The voltage one sees in the Adrenalin control panel and GPU-Z's Sensor tab is akin to the "VID" reading for a CPU. That reading is what the GPU is requesting for the GPU core, which is why it always reads whatever it should at your current load (random values under 2600MHz, and 1175mV or 1200mV at full core speed, for XTX and XTXH dies, respectively).

- In HWINFO64, "GPU Core Voltage (VDDCR_GFX)" is more akin to "VR VOUT" that some mainboards report: it is a reading either at, or close to the GPU die, on the core-side of the VRM MOSFETS. This is more or less the voltage that the core is _actually_ receiving.

- That said, you will sometimes see the "max" voltage hit 1175mV or 1200mV under light loads, but the vdroop that cfranko mentioned will always kick in above 2600MHz or so (a good thing). So, the value displayed under "max" for core voltage depends on what the GPU was doing. Don't pay attention to the "max" reading so much as just watch the current and average values as your GPU does the thing.
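For what it's worth, the gap between the requested VID and the die-side reading can be modeled as a simple load-line (droop) calculation. The load-line resistance below is a made-up number chosen only to show the shape of the effect, not a measured value for any 6900 XT VRM:

```python
# Sketch of why the die-side reading (VDDCR_GFX / "VR VOUT") sits below the
# requested VID under load: the VRM drops voltage in proportion to current.
# The load-line resistance (0.25 mOhm) is an illustrative assumption.

def die_voltage_mv(vid_mv: float, load_amps: float, loadline_mohm: float = 0.25) -> float:
    """Voltage actually seen at the die: VID minus I*R load-line droop."""
    return vid_mv - load_amps * loadline_mohm

# A 1200 mV request at ~100 A of core current lands around 1175 mV,
# the kind of gap discussed above; at idle there is no droop at all.
loaded = die_voltage_mv(1200, 100)
idle = die_voltage_mv(1200, 0)
```

Hence the "max" column catching 1200mV only under light loads: the droop scales with current, so brief low-current moments read near the full request.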


----------



## J7SC

CS9K said:


> cc @cfranko
> 
> Okay, story time:
> 
> - The voltage one sees in the Adrenalin control panel and GPU-Z's Sensor tab, is akin to the "Vid" reading for a CPU. That reading is what the GPU is requesting for the GPU core, which is why it always reads whatever it should at your current load (random things under 2600MHz, and 1175mV or 1200mV under full core speed, for XTX and XTXH dies, respectively.
> 
> - In HWINFO64, "GPU Core Voltage (VDDCR_GFX)" is more akin to "VR VOUT" that some mainboards report: it is a reading either at, or close to the GPU die, on the core-side of the VRM MOSFETS. This is more or less the voltage that the core is _actually_ receiving.
> 
> - That said, you will sometimes see the "max" voltage hit 1175mV or 1200mV under light loads, but the LLC that cfranko mentioned will always kick in above 2600MHz or so (A good thing). So, the value displayed under "max" for core voltage depends on what the GPU was doing. Don't pay attention to the "max" reading so much as just watch the current and average values as your GPU does the thing.


Thanks, that's very helpful. Just to reiterate, I was comparing the same values under the same conditions in HWInfo (and at times GPUz) over time and w/ different settings and cooling.
*EDIT:* The reason why I pay attention to it is because I plan to up TempDep Vmin to 1.250V as I have the cooling for it, even w/ a cozy 23-24 C ambient. I would like some GPU-V app feedback as I ramp voltages up a bit step by step.

New 6900XT, first day, air-cooled, max sustainable clock slider, max voltage per Adrenalin. GPUz also showed 1.175V:

MPT PL, stock voltage, w-cooled. GPUz differed and showed 1.175V max.

MPT PL, TempDep Vmin GPU-V at 1.200 V, w-cooled (forgot to screenshot GPUz)


----------



## Godhand007

J7SC said:


> Thanks, that's very helpful. Just to reiterate, I was comparing the same values under the same conditions in HWInfo (and at times GPUz) over time and w/ different settings and cooling.
> *EDIT:* The reason why I pay attention to it is because I plan to up TempDep Vmin to 1.250V as I have the cooling for it, even w/ a cozy 23-24 C ambient. I would like some GPU-V app feedback as I ramp voltages up a bit step by step.
> 
> New 6900XT, first day, air-cooled, max sustainable clock slider, max voltage per Adrenalin. GPUz also showed 1.175V:
> View attachment 2535099
> 
> 
> MPT PL, stock voltage, w-cooled. GPUz differed and showed 1.175V max
> View attachment 2535100
> 
> 
> MPT PL, TempDep Vmin GPU-V at 1.200 V, w-cooled (forgot to screenshot GPUz)
> View attachment 2535101


Are you planning to use _TempDep Vmin_ settings as a daily driver or just for benchmarks?


----------



## J7SC

Godhand007 said:


> Are you planning to use _TempDep Vmin_ settings as a daily driver or just for benchmarks?


...just a few benchmarks. The 6900XT system is actually the 'daily workhorse'.


----------



## EastCoast

*Update: MSI Gaming Z 6900xt xtxh*

Ok, I noticed that my junction temps were going up, so I reapplied thermal compound; temps have come down a bit now. I used the washer mod as well. I did notice when I unscrewed them that some of the screws had little to no resistance, so if you have this card, make sure those screws are tight/snug.

I also used 90% alcohol to clean stuff off the tops of the RAM ICs. Some of those thermal pads are pretty thick but otherwise looked fine. Once I put everything back together I was able to raise the VRAM to 2150Mhz, which is 24/7 stable now. I guess I could go to 2250Mhz so that it's actually 18Gbps, but I haven't found a need for that yet.
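The arithmetic behind "2250Mhz so that it's actually 18Gbps" is just the GDDR6 clock-to-data-rate multiplier: the memory clock reported by the tuning tools maps to 8 bits per pin per clock. A quick sketch:

```python
# Quick arithmetic for GDDR6 memory clocks as reported by Adrenalin/MPT:
# effective per-pin data rate = reported memory clock x 8.

def gddr6_gbps(mem_clock_mhz: float) -> float:
    """Effective per-pin data rate in Gbps for GDDR6."""
    return mem_clock_mhz * 8 / 1000

def bandwidth_gbs(mem_clock_mhz: float, bus_width_bits: int = 256) -> float:
    """Total memory bandwidth in GB/s (the 6900 XT has a 256-bit bus)."""
    return gddr6_gbps(mem_clock_mhz) * bus_width_bits / 8

rate_2250 = gddr6_gbps(2250)   # the "actually 18 Gbps" point
rate_2150 = gddr6_gbps(2150)   # the 24/7-stable setting above
bw = bandwidth_gbs(2250)
```

So 2150MHz works out to 17.2Gbps effective, and 2250MHz is exactly the 18Gbps the memory ICs are rated for.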



These are not my photos, but they are representative of what I saw when I disassembled the card. Even the thermal paste spread was the same, except that on mine the thermal paste on the die was a lot more liquid than paste, indicating that it did need to be reapplied.

The red and black connector was a bit tricky: easy to detach with a small screwdriver, but upon reassembly you have to screw in the anti-sag bracket first (the black metal bar on top of/overlapping the black and red connector). Then you bring the heatsink and card together just close enough to attach those black and red connectors, then fully sandwich the card and start applying the screws on the back.


----------



## xR00Tx

26,844 graphics points on TS! 
Getting close to 27,000 points with my XTX! 
GPU clock at 2935MHz and 1225mV









I scored 25 466 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## LtMatt

xR00Tx said:


> 26,844 graphic points on TS!
> Getting close to 27,000 points with my XTX!
> GPU clock at 2935MHz and 1225mv
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 25 466 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


That is crazy good with only 1225mv, well done!


----------



## SAN-NAS

xR00Tx said:


> 26,844 graphic points on TS!
> Getting close to 27,000 points with my XTX!
> GPU clock at 2935MHz and 1225mv
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 25 466 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


What settings did you use with the mpt?


----------



## jonRock1992

xR00Tx said:


> 26,844 graphic points on TS!
> Getting close to 27,000 points with my XTX!
> GPU clock at 2935MHz and 1225mv
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 25 466 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


Wow. That's one of the best binned cores I've seen.


----------



## Skinnered

xR00Tx said:


> 26,844 graphic points on TS!
> Getting close to 27,000 points with my XTX!
> GPU clock at 2935MHz and 1225mv
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 25 466 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


Holy ****. That beast must fly in games; the best gear for FC6 and COD Vanguard (and Quake DarkPlaces with OpenGL rendering, complete with injected RTGI).

I have a Sapphire Toxic EE Liquid on the way again. Gonna let my room cool off with the windows open (winter is coming tomorrow with wet snow showers) and try to attack some high scores.


----------



## LtMatt

Skinnered said:


> Holy ****. That beast must fly in games; the best gear for FC6 and COD Vanguard (and Quake DarkPlaces with OpenGL rendering, complete with injected RTGI).
> 
> I have a Sapphire Toxic EE Liquid on the way again. Gonna let my room cool off with the windows open (winter is coming tomorrow with wet snow showers) and try to attack some high scores.


Always warms the cockles of my heart to see someone else buying the Toxic Extreme. There's not many of us about Lol.


----------



## jonRock1992

LtMatt said:


> Always warms the cockles of my heart to see someone else buying the Toxic Extreme. There's not many of us about Lol.


I kinda wish I did lol. I would have just put some liquid metal on it and called it a day lol.


----------



## LtMatt

jonRock1992 said:


> I kinda wish I did lol. I would have just put some liquid metal on it and called it a day lol.


Not a bad strategy to be fair.


----------



## tolis626

Hey guys, I have a quick question for y'all. Is anyone else having problems running overclocked FCLK? I used to run mine at 2000MHz for months (with the FCLK boost thing at stock) along with a bump to SOC clock at 1267MHz with no problems whatsoever. But 2 days ago I got a crash after some weird behavior and decided to revert back to stock. Today I decided to put FCLK back at 2000MHz (and disabled all the DS stuff too in MPT), but I got a hard crash (instant reboot) after like 5 minutes of Warzone. SOC was left at 1200MHz. I put FCLK back at 1940MHz and I played for a few hours with no problems. Does anyone have any ideas? It's not like it's a big performance drop or anything, I'm just more concerned about why that is the case.


----------



## jonRock1992

tolis626 said:


> Hey guys, I have a quick question for y'all. Is anyone else having problems running overclocked FCLK? I used to run mine at 2000MHz for months (With the FCLC boost thing at stock) along with a bump to SOC clock at 1267MHz with no problems whatsoever. But 2 days ago I got a crash after getting some weird behavior and decided to revert back to stock. Today I decided to put FCLK back at 2000MHz (and disabled all the DS stuff too in MPT), but I got a hard crash (instant reboot) after like 5 minutes of Warzone. SOC was left at 1200MHz. I put FCLK back at 1940MHz and I played for a few hours no problems. Anyone has any ideas? It's not like it's a big performance drop or anything, I'm just more concerned about why that is the case.


Actually, BF2042 doesn't like FCLK overclock on my GPU. Resulted in driver crashes.


----------



## ncpneto

.


----------



## xR00Tx

SAN-NAS said:


> What settings did you use with the mpt?


----------



## Heimdallr

did anyone here try the card with a 750W psu?


----------



## cfranko

Heimdallr said:


> did anyone here try the card with a 750W psu?


I did, works perfectly.


----------



## Heimdallr

Thanks a lot. For some reason I can find a 6900 XT at the same price as or less than many 6800 XTs, but with an mITX build I wasn't sure...


----------



## Oversemper

Heimdallr said:


> thanks a lot, for some reasons I can find 6900XT at the same price or less then many 6800xt but with a mITX I wasn't sure...


The main factor setting prices nowadays is hashrate, and it's the same for the 6800 and 6900, ergo the price.


----------



## tolis626

jonRock1992 said:


> Actually, BF2042 doesn't like FCLK overclock on my GPU. Resulted in driver crashes.


Thanks Jon! BF2042 seems to not like a lot of things, playing properly included. 

I still find it really strange that it used to work for months only to crap out now. Playing the same game, with the same overclock, same everything. Only thing that changed was that I installed the latest driver, so that might be it? I dunno.


----------



## LtMatt

Admire my deltas. Admire them! 
Call of Duty: Vanguard Online | 6900 XT Toxic Extreme | 4K Ultra Settings - YouTube

Overlay:
Edge Temp, Mem Temp, Junction Temp.


----------



## cfranko

LtMatt said:


> Admire my delta's. Admire them!
> Call of Duty: Vanguard Online | 6900 XT Toxic Extreme | 4K Ultra Settings - YouTube
> 
> Overlay:
> Edge Temp, Mem Temp, Junction Temp.


The delta is great, but aren't both the edge and junction temps high for an AIO at 350 watts?


----------



## LtMatt

cfranko said:


> The delta is great but isn’t both the edge and junction temp high for an AIO at 350 watts?


It’s weird because I can remount it as I did before and get edge temperature to reduce 10c+, but then junction climbs a little and the delta between them goes from 4-6c to 22-25c. It’s quite odd.

It is broadly similar to my previous Toxic in terms of temperature and power draw.

I think it runs a little warmer though overall but then it overclocks a bit higher and can run 2800mhz in games at 400W with hotspot temps below 85c

I don’t know any XTXH that can do similar on air other than water cooled versions or AIOs.


----------



## SoloCamo

Heimdallr said:


> did anyone here try the card with a 750W psu?


Ran mine with a 10900 (power limit removed) and my reference 6900 XT at 1175 mV, +15% power limit, without issue on my EVGA SuperNOVA 750 P2 (80+ Platinum, 750 W). It doesn't even get particularly warm with both under load.

For serious OCing, though? Yeah, I'd recommend a good-quality 1000 W to be honest.


----------



## Simzak

Heimdallr said:


> did anyone here try the card with a 750W psu?


10900k oc + 6900xt uv running no problems for over a month on my corsair rm750x


----------



## J7SC

By coincidence, I just added a Seasonic Prime 1300 W Platinum to the 6900 XT system a few days ago, as the current PSU sags to 11.8 V under load (11.9 V idle) on the 12 V rail. It doesn't really need 1300 W, but a.) it puts me nicely at the peak of the efficiency curve when benching, b.) it might migrate over to a HEDT dual-card work system later, and c.) the Seasonic was on sale for a great price.


----------



## EastCoast

edited out


----------



## Godhand007

@LtMatt @CS9K I have noticed behavior that seems odd given the current discussion about the voltage unlock through MPT. I have enabled 1.2 V for GFX, but I can see the voltage curve is still in effect, i.e. the full voltage is not being applied. Any thoughts on this?


----------



## Godhand007

Godhand007 said:


> @LtMatt @CS9K I have noticed behavior that seems odd given the current discussion about the voltage unlock through MPT. I have enabled 1.2 V for GFX, but I can see the voltage curve is still in effect, i.e. the full voltage is not being applied. Any thoughts on this?
> 
> View attachment 2535482
> 
> 
> View attachment 2535483


Others can chime in as well.


----------



## ptt1982

Tested the new drivers. They seem to give around +10-15 MHz to the OC; temps are the same, power usage maybe slightly higher. Halo Infinite and Forza Horizon 5 got an immense FPS boost, and games overall seem to have better frame pacing. I heard CP2077 got around a 5% performance boost as well. There's more juice left in the 6900 XT, and AMD is confidently squeezing it out every couple of months.


----------



## ZealotKi11er

ptt1982 said:


> Tested the new drivers. Seem to give around +10-15mhz to OC, temps are the same, power usage maybe slightly higher. Halo Infinite and Forza Horizon 5 had immense FPS boost, and overall games seem to have a better frame-pacing. I heard CP2077 got around 5% performance boost as well. There's more juice left in 6900xt, and AMD is confidently squeezing it out every couple of months.


With each GPU release (NV22/NV23/NV24), AMD seems to push RDNA2 harder as they find more performance left on the table. Initially they just try to make the thing run without stability issues; max performance is hard to get on day 1.


----------



## TaunyTiger

I've tried to increase the maximum voltage on my 6900 XT Liquid Devil Ultimate, but when I try e.g. 1225 mV in the AMD software on my BF2042 profile, the game crashes at the loading screen.
I did write to SPPT and reboot before setting 1225 mV.
I didn't touch the power limits because I already get 381.8 W PPT with +15%.


----------



## Godhand007

TaunyTiger said:


> I've tried to increase the maximum voltage on my 6900 XT Liquid Devil Ultimate, but when I try e.g. 1225 mV in the AMD software on my BF2042 profile, the game crashes at the loading screen.
> I did write to SPPT and reboot before setting 1225 mV.
> I didn't touch the power limits because I already get 381.8 W PPT with +15%.


That's not how you do it.

You need to put that value in the area highlighted inside the black square, then click Write SPPT and reboot. Of course, be careful when OCing and understand the risks before going ahead.
Also, leave the Max value (which you have highlighted in the red box) at default.


----------



## TaunyTiger

Godhand007 said:


> That's not how you do it.
> 
> You need to put that value in the area highlighted inside the black square, then click Write SPPT and reboot. Of course, be careful when OCing and understand the risks before going ahead.
> Also, leave the Max value (which you have highlighted in the red box) at default.
> 
> View attachment 2535593


Why is the High mV so low in the black box? It's 800 mV, while stock max voltage is 1200 mV. Should I leave the red box at 1200 mV, or set it to 1250 mV?


----------



## Godhand007

TaunyTiger said:


> Why is the High mV so low in the black box? It's 800 mV, while stock max voltage is 1200 mV. Should I leave the red box at 1200 mV, or set it to 1250 mV?


That's just the default stuff. Here is the guide  (use Google Translate) and refer to the details under the "*Overclocking stage 4 - MPT * ... " headline.

Should I leave the red box at 1200 mV?
Ans: Leave it at default.


----------



## TaunyTiger

Godhand007 said:


> That's just the default stuff. Here is the guide  (use google translate) and refer to details under the "*Overclocking stage 4 - MPT * ... " headline.


Thanks man! I'll look it up!


----------



## cfranko

Would it be safe if I tried to set the vcore to 1.25 V and the power limit to 450 watts on a Sapphire Pulse 6800 XT that has *only one 8-pin power connector?*


----------



## jonRock1992

cfranko said:


> Would it be safe if I tried to set the vcore to 1.25 V and the power limit to 450 watts on a Sapphire Pulse 6800 XT that has *only one 8-pin power connector?*


I'm gonna guess it's not. But you could always try for science lol.


----------



## LtMatt

cfranko said:


> Would it be safe if I tried to set the vcore to 1.25 V and the power limit to 450 watts on a Sapphire Pulse 6800 XT that has *only one 8-pin power connector?*


Do 1x8 PIN 6800 XTs even exist?


----------



## cfranko

LtMatt said:


> Do 1x8 PIN 6800 XTs even exist?


Normally the 6800 xt pulse has 2 8 pins but mine has 1 lol


----------



## Godhand007

Godhand007 said:


> @LtMatt @CS9K I have noticed behavior that seems odd given the current discussion about the voltage unlock through MPT. I have enabled 1.2 V for GFX, but I can see the voltage curve is still in effect, i.e. the full voltage is not being applied. Any thoughts on this?
> 
> View attachment 2535482
> 
> 
> View attachment 2535483


No one? This is very strange behavior. Or am I collecting data improperly? I have tried setting the GFX voltage to 1.235 V as well, but HWiNFO never reports a voltage above 1.181 V.


----------



## LtMatt

cfranko said:


> Normally the 6800 xt pulse has 2 8 pins but mine has 1 lol


What, why? Where did you get it from?


----------



## cfranko

LtMatt said:


> What, why? Where did you get it from?


Imported it from Japan because it was cheap. It works, though.


----------



## ptt1982

I'm doing undervolting experiments here with my Red Devil 6900 XTX. Does anyone know which MPT settings I should tweak to reach minimum stable power consumption?

So far I've used MPT's Voltage Control Parameters and only touched the Max values for GFX and SoC. The minimum stable value I could reach for SoC is 1000 mV; at 981 mV it crashes in TimeSpy GT2. I've only tried GFX down to 1000 mV so far, which seems stable.

Are there any other settings I could tune, such as the Voltage Control Parameters Min values, the TDC Limits (A) SoC values, or the Vmin Low/High values?

My goal is to find the lowest rock-solid voltage the card can run at, and then later start tuning the clocks. I want to achieve the lowest possible power consumption and see what the delta is with the highest achievable one. You know, for science!
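That search can be scripted as a simple bisection. A rough sketch, assuming a hypothetical `is_stable(mv)` callable (wire it up to whatever stability test you trust, e.g. a scripted TimeSpy loop at each trial voltage):

```python
def lowest_stable_mv(is_stable, lo=800.0, hi=1025.0, step=6.25):
    """Bisect for the lowest voltage (mV) that passes a stability check.

    Assumes stability is monotonic in voltage: everything at or above
    some threshold passes, everything below it fails. `lo` must be a
    known-unstable floor and `hi` a known-stable ceiling.
    """
    while hi - lo > step:
        mid = (lo + hi) / 2
        if is_stable(mid):
            hi = mid  # passed: try lower
        else:
            lo = mid  # crashed: raise the floor
    return hi

# Stand-in check mimicking the SoC result above (threshold ~1000 mV)
print(lowest_stable_mv(lambda mv: mv >= 1000))
```

Adjust the step to whatever granularity your tool actually exposes; each `is_stable` call is one full stress run, so a coarser step saves a lot of wall time.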


----------



## ZealotKi11er

ptt1982 said:


> I'm doing undervolting experiments here with my Red Devil 6900 XTX. Does anyone know which MPT settings I should tweak to reach minimum stable power consumption?
> 
> I've used so far MPT's Voltage Control Parameters and only touched the Max values for GFX and Soc. The minimum stable ones I could do for Soc is 1000mv, at 981mv it will crash at TimeSpy GT2. I've only tried GFX at 1000mv as well, which it seems to be stable.
> 
> Are there any other settings, such as Voltage Control Parameters Min values, TDC Limits (A) Soc values, or Vmin Low / High values I could tune?
> 
> My goal is to see what is a lowest possible rock solid voltage the card can run at, and then later start tuning the clocks. I want to achieve the lowest possible power consumption, and see what the delta is with the highest achievable one. You know, for science!



Try something like 0.85v at 2000-2100 MHz. That is probably the best perf/power ratio.


----------



## ptt1982

ZealotKi11er said:


> Try something like 0.85v at 2000-2100 MHz. That is probably the best perf/power ratio.


Thanks for the tip, I will! Just to confirm, do you mean 0.85v GFX value (850mv) via MPT under Voltage Control Parameters? I guess I should leave the Soc value at 1000mv, because the card becomes unstable under that.


----------



## 99belle99

ptt1982 said:


> Thanks for the tip, I will! Just to confirm, do you mean 0.85v GFX value (850mv) via MPT under Voltage Control Parameters? I guess I should leave the Soc value at 1000mv, because the card becomes unstable under that.


You leave voltages at stock in MPT if you want to run lower voltages; the card can be set lower through the Radeon settings. You only touch voltages in MPT if you want to go higher, and that's meant more for bench runs than daily use. It could be fine, but you run the risk of frying the chip.


----------



## ptt1982

99belle99 said:


> You leave voltages at stock in MPT if you want to run lower voltages. The card can be set lower with Radeon settings. You only touch voltages with MPT if you want higher but it's made more so for bench runs than a daily run. Could be fine but you run the risk of frying the chip.


Thanks for the input. Respectfully, I'm not completely convinced by the Radeon settings voltage, because it pulls down the whole curve. It's an instant crash at 1035 mV using Radeon settings with the core at 2000 MHz max, whereas with MPT I can limit the max voltage to 900 mV (and total consumption to exactly 200 W). TimeSpy scores 19K at 2080 MHz max core, memory at 2080 MHz with fast timings. Junction temp stays at 50 C, edge at 38 C. Radeon settings would not get me these results.

Now onto testing games to see if I see any difference!


----------



## ptt1982

Reporting some interesting undervolting findings from Halo Infinite and Forza Horizon 5 4K60 gameplay:
- Red Devil 6900 XT peak consumption at 163 W, edge 36 C, junction 46 C (MPT GFX 900 mV, SoC 1000 mV, 2080 MHz max core, 2080 MHz mem fast timings)
- 5600X total package peak consumption at 67 W, hotspot 42 C (Curve Optimizer -21 each core, 4650 MHz max clock)

The Seasonic Focus 850W Platinum seems to be around 92% efficiency in the 200-300 W range, so the efficiency is there.
Let's see if I can squeeze out 30 more watts somewhere without a gaming performance impact... What a strange quest I'm on today!
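As a sanity check on what this draws at the wall (simple arithmetic, assuming the PSU really does sit at 92% across this load range):

```python
def wall_draw_w(dc_load_w, efficiency):
    # AC power pulled from the socket to deliver a given DC load
    return dc_load_w / efficiency

# GPU (163 W) + CPU (67 W) at 92% efficiency; the rest of the system
# (board, fans, drives) is ignored here for simplicity.
print(round(wall_draw_w(163 + 67, 0.92), 1))  # roughly 250 W at the wall
```

Efficiency divides the DC load rather than multiplying it, which is why a lower-efficiency unit costs more at the wall for the same components.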


----------



## ZealotKi11er

ptt1982 said:


> Reporting some interesting undervolting findings from Halo Infinite and Forza Horizon 5 4K60 gameplay:
> -Red Devil 6900xt peak consumption at 163W, edge 36C junction 46C (MPT GFX 900mv, Soc 1000mv, 2080mhz max core 2080mhz mem fast timings)
> -5600X total package peak consumption at 67W, hotspot 42C (Curve optimizer -21 each core, 4650mhz max clock)
> 
> Seasonic Focus 850W Platinum seems to be around 92% efficiency at 200-300W range, so the efficiency is there.
> Let's see if I can squeeze 30 more watts somewhere without a gaming performance impact... What a strange quest I'm on today!!


You can lower SOC more if you bring FCLK down.


----------



## Godhand007

Godhand007 said:


> No one? This is very strange behavior. Or am I collecting data improperly? I have tried setting the GFX voltage to 1.235 V as well, but HWiNFO never reports a voltage above 1.181 V.
> 
> View attachment 2535639


Update: It seems the voltage increase is proportional to the value set in the MPT VMin settings. E.g., a set value of 1262 mV results in an actual max of ~1227 mV (HWiNFO). I got my system stable with 2650-2750 MHz clocks in GT2, but I received a random error in Port Royal. Is there any other setting apart from the VMin that I can try to get stable clocks? I don't think any further voltage increase is going to help with clock stability.


----------



## CS9K

Godhand007 said:


> Update: It seems the voltage increase is proportional to the value set in the MPT VMin settings. E.g., a set value of 1262 mV results in an actual max of ~1227 mV (HWiNFO). I got my system stable with 2650-2750 MHz clocks in GT2, but I received a random error in Port Royal. Is there any other setting apart from the VMin that I can try to get stable clocks? I don't think any further voltage increase is going to help with clock stability.
> 
> View attachment 2535731


No, there really is no other way to achieve clock stability aside from increasing voltage and decreasing temperature as best you can. The Silicon Lottery™ is real, and once your silicon runs out of voltage headroom, that's it. That's usually not a bad thing, unless your chip sits in the bottom 10% of the 0%-100% "potato"<->"golden bin" spectrum of possible bins.


----------



## Godhand007

CS9K said:


> No, there really is no other way to achieve clock stability, aside from increasing voltage, and decreasing temperature as best as you can. The Silicon Lottery™ is real and once your silicon runs out of voltage, then that's it. That's not a bad thing usually, unless you have the bottom 10% on the 0%-100% "potato"<->"golden bin" spectrum of possible bins.


Your comment is a bit all over the place, but keeping the obvious stuff aside (binned chip, max GPU core voltage):

1. Would an increase in the SOC voltage help? I have also seen people fiddling with FCLK.

2. What do you mean by "once your silicon runs out of voltage, then that's it"? In my case, I haven't run out of voltage; I can add more, for benching at least.

3. About decreasing temperature: as per my understanding this allows for a higher boost clock, but does a low temperature help with clock stability as well? E.g., will a card that is unstable at 2800 MHz with a 100 C hotspot become stable at the same frequency at 90 C?


----------



## ZealotKi11er

Godhand007 said:


> Your comment is a bit everywhere but keeping the obvious stuff aside (binned chip, max GPU Core voltage ):
> 
> 1. Would an increase in the SOC voltage help? I have also seen people fiddling with FCLK.
> 
> 2. What do you mean by "once your silicon runs out of voltage, then that's it"? In my case, I haven't run out of voltage, I can add more, for benching at least.
> 
> 3. About decreasing temperature, as per my understanding this allows for a higher boost clock but does low temperature help you with clock stability as well? For e.g, will a card that is unstable for 2800 Mhz at 100 C hotspot become stable with the same frequency at 90 C ?


A 10 C drop in hotspot will not make the card stable. I have noticed about 20-30 MHz more when the GPU is at its limit and running about 40-50 C cooler.


----------



## Godhand007

ZealotKi11er said:


> A 10 C drop in hotspot will not make the card stable. I have noticed about 20-30 MHz more when the GPU is at its limit and running about 40-50 C cooler.


That's been my experience as well, i.e. low temperatures will not make an unstable clock stable.


----------



## CS9K

Godhand007 said:


> Your comment is a bit everywhere but keeping the obvious stuff aside (binned chip, max GPU Core voltage ):
> 
> 1. Would an increase in the SOC voltage help? I have also seen people fiddling with FCLK.
> 
> 2. What do you mean by "once your silicon runs out of voltage, then that's it"? In my case, I haven't run out of voltage, I can add more, for benching at least.
> 
> 3. About decreasing temperature, as per my understanding this allows for a higher boost clock but does low temperature help you with clock stability as well? For e.g, will a card that is unstable for 2800 Mhz at 100 C hotspot become stable with the same frequency at 90 C ?


My apologies; please forgive me.

1. An increase in SOC voltage will _only_ make a difference with stability if your SOC, or some component therein, was the _cause_ of your instability. If your GPU core clock is unstable, changing SOC values will not make the core more stable.

2. Each piece of silicon can _only_ do a certain clock speed at a fixed voltage. The higher the stable clock speed at a fixed voltage, the better "bin" your GPU core is. Feeding a core more voltage lets that individual core be stable at higher clock speeds, but with _every_ piece of silicon, the required voltage increases exponentially as clock speed rises. It looks something like this:

(View attachment 2535744: voltage vs. clock speed curve)

_Nothing_ you do will change how fast your GPU core can run at a certain voltage, that's just how it works. You can feed it more voltage, but as you do, it will take exponentially more voltage to get the next "X" amount of MHz stable.

3. Less heat isn't going to let you increase core clock speed by any tangible amount, but it _is_ better for the core and GPU as a whole, to keep it as cool as you can.
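The shape of the curve in point 2 can be illustrated with a toy model; the constants below are made up for illustration and are not real 6900 XT measurements:

```python
import math

def required_mv(freq_mhz, v0=850.0, f0=2000.0, k=0.004):
    """Toy voltage/frequency model: below a base frequency f0 the part
    runs at a floor voltage v0; above f0, the required voltage grows
    exponentially. All constants here are illustrative, not silicon data."""
    return v0 * math.exp(k * max(0.0, freq_mhz - f0) / 10.0)

for f in (2000, 2300, 2600):
    print(f, round(required_mv(f)), "mV")
```

The point of the model: each extra 300 MHz costs more millivolts than the previous 300 MHz did, which is exactly why the last 50-100 MHz on any card is so expensive in voltage and heat.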


----------



## J7SC

Extra cooling (i.e. extensive water cooling) will help, for example by mitigating the extra heat _additional core-v and MPT PL generate_. It may not make the GPU chip faster by itself, but it helps exploit the highest clock potential over longer runs.


----------



## Godhand007

CS9K said:


> My apologies; please forgive me.
> 
> 1. An increase in SOC voltage will _only_ make a difference with stability if your SOC or some component therein, was the _cause_ of your instability. If your GPU core clock is unstable, changing SOC values will not make the core more stable
> 
> 2. Each piece of silicon can _only_ do a certain clock speed at a fixed voltage. The higher the stable clock speed at a fixed voltage, the better "bin" your GPU core is. Feeding a core more voltage lets that individual core be stable at higher clock speeds, but with _every_ piece of silicon, voltage increases exponentially as clock speed rises. It looks something like this:
> View attachment 2535744
> 
> 
> _Nothing_ you do will change how fast your GPU core can run at a certain voltage, that's just how it works. You can feed it more voltage, but as you do, it will take exponentially more voltage to get the next "X" amount of MHz stable.
> 
> 3. Less heat isn't going to let you increase core clock speed by any tangible amount, but it _is_ better for the core and GPU as a whole, to keep it as cool as you can.


Thanks for the clarifications and further details. So:

1. If the original clocks or a mild OC [2550 MHz] were fine with default SOC, then increasing SOC is not going to help with clock stability? FCLK and the rest of the stuff in MPT are not useful, at least for clock stability?

2. So my GPU requires a set value close to 1268 mV (which is actually around ~1210 mV during 3D tasks according to HWiNFO) to maintain ~2700 MHz core clocks. This is the silicon lottery at my end. Also, my voltage never reaches 1268 mV; it reached a max of 1240 mV (once). Is this the expected behavior? Shouldn't it go to the max value, since the voltage curve is being bypassed by the MPT method?

3. Yes, keeping PC components as cool as possible is always good, but sometimes you have to make do with just air cooling, unfortunately.

4. I was able to test my settings with GT2 on a loop for five hours at 1262 mV (actual around 1206 mV as per HWiNFO). Then I got a random error in Port Royal. This means even GT2 loops are not good enough to confirm clock stability. I have increased the voltage to around 1268 mV and SOC to 1175 mV (just to be sure) and Port Royal has not given me any error yet. Based on your experience, is there any other method that can help me quickly find out whether my clocks are stable? I know playing games is the best stability test, but it is not methodical and repeatable in the short term.

5. Lastly, do drivers have an impact on OC? I mean a significant impact, like 50-100 MHz from one driver to another, or as drivers for a card mature?

*Also, my card is a reference 6900 XT and I know I am pushing it a bit too hard, but I just want those 2700 MHz clocks. I am taking a risk with my card at these voltages, but do you think I am going completely out of whack?*
Others, please share your opinion on the aforementioned points as well.
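(On point 4: one way to make a mixed stress session repeatable is to alternate workloads on a fixed schedule instead of looping one test. A rough sketch; the launcher entries are placeholders, since how you script each benchmark depends on your edition and tools, so substitute whatever you can launch from the command line.)

```python
import itertools

# Placeholder launchers: substitute real commands/scripts for your setup.
WORKLOADS = {
    "timespy_gt2": "run_timespy_gt2.bat",
    "port_royal": "run_port_royal.bat",
}

def build_schedule(hours, minutes_per_pass=10):
    """Round-robin the workloads into fixed-length passes. Alternating a
    raster load with an RT load tends to expose instability that looping
    a single test (like GT2 alone) can miss."""
    passes = int(hours * 60 / minutes_per_pass)
    names = itertools.cycle(WORKLOADS)
    return [next(names) for _ in range(passes)]

# One hour of alternating 10-minute passes
print(build_schedule(1))
```

Each named pass would then be launched in turn (e.g. via `subprocess.run`) so a crash log tells you which workload type tripped first.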


----------



## ZealotKi11er

Is your card under water or on the stock cooler? Also, increasing the voltage over the stock value AMD set, using this trick, is only for benchmarking. The card will degrade much faster if you use it like that for gaming. No features/settings will make a card OC more.


----------



## Godhand007

ZealotKi11er said:


> Is your card under water or on the stock cooler? Also, increasing the voltage over the stock value AMD set, using this trick, is only for benchmarking. The card will degrade much faster if you use it like that for gaming. No features/settings will make a card OC more.


Stock cooler at 100% fan speed, open case, in a cool room (hotspot is always below 110 C during gaming). "The card will degrade much faster if you use it for gaming." What timeframe are we talking about? I don't plan on keeping this card for more than a year.

"No features/settings will make a card OC more." You mean except for increasing voltage.


----------



## Bobbydo

ZealotKi11er said:


> Try something like 0.85v at 2000-2100 MHz. That is probably the best perf/power ratio.


Hi guys. I have an issue with my 6900 XT Ultimate. In MSFS 2020 I get less than 40 fps. GPU usage doesn't go above 70%, mostly it's under 60%; the GPU doesn't throttle or anything, and temps are under 50 C. Undervolted, default, or overclocked: same results. CPU usage is less than 30%. Vsync is off. I get the same fps at HD, 2K, and 4K; doesn't matter. 60 Hz, 120 Hz, 165 Hz. Don't know what to do.

But when I'm in the main menu, the GPU goes to 99% usage and fps is much higher. So I don't get it; it's supposed to be the opposite.


----------



## CS9K

Bobbydo said:


> Hi you guys. I have an issue with my 6900xt ultimate. In msfs 2020 I get less than 40fps. The GPU usage doesn't go above 70% mostly it's under 60% GPU doesn't throttle or anything. Temps under 50c. Undervolted or on default or overclocked. The same results. CPU usage is less than 30% Vsync is off. I get the same amount of fps on hd, 2k and 4k. Doesn't Matter. 60hz, 120hz, 165hz. Don't know what to do.
> 
> But when I'm on the main menu. The GPU goes to 99% usage and so much higher fps. So I don't get it. It's supposed to be the opposite.


You are CPU limited in some way or another. Is your memory's XMP/DOCP profile loaded?

Oh, right, until ASOBO puts out a hotfix, Sim Update 7 is messed up and performance is lower than expected when photogrammetry is enabled. Disable photogrammetry and your framerate should improve a little.


----------



## Bobbydo

CS9K said:


> You are CPU limited in some way or another. Is your memory's XMP/DOCP profile loaded?
> 
> Oh, right, until ASOBO puts out a hotfix, Sim Update 7 is messed up and performance is lower than expected when photogrammetry is enabled. Disable photogrammetry and your framerate should improve a little.


Yes, it's enabled, and my CPU is an 11700F running at full speed. I've had three 6900 XTs and several Nvidia GPUs too, like a 3080 and a Ti, and they all got 50-70 fps with the same CPU. And now I have the 6900 XT Liquid Devil Ultimate; it's supposed to be the fastest.


----------



## ZealotKi11er

Godhand007 said:


> Stock cooler with full 100% fan speed, open case in a cool room (HotSpot is always below 110 C during gaming). "The card will degrade much faster if you use it for gaming." What is the timeframe we are talking about? I don't plan on keeping this card for more than a year.
> 
> "No features/settings will make a card OC more." You mean except for increasing voltage.


That is pretty crazy that you can run those voltages with the stock cooler. My card can't even maintain more than 1.1 V, let alone 1.26 V. I would say 3-6 months is enough to kill it if you game daily.


----------



## Godhand007

ZealotKi11er said:


> That is pretty crazy that you can run those voltages with the stock cooler. My card can't even maintain more than 1.1 V, let alone 1.26 V. I would say 3-6 months is enough to kill it if you game daily.


What do you mean your card can't maintain 1.1 V? If you have a reference card, your voltages should be around 1.100-1.150 V. Also, like I mentioned before, the actual voltage is around 1.210 V during 3D loads as per HWiNFO. 1.260 V is never reached, just like 1.175 V is never reached in a stock configuration, even though that's what we see in Wattman and GPU-Z.


----------



## J7SC

Bobbydo said:


> Hi you guys. I have an issue with my 6900xt ultimate. In msfs 2020 I get less than 40fps. The GPU usage doesn't go above 70% mostly it's under 60% GPU doesn't throttle or anything. Temps under 50c. Undervolted or on default or overclocked. The same results. CPU usage is less than 30% Vsync is off. I get the same amount of fps on hd, 2k and 4k. Doesn't Matter. 60hz, 120hz, 165hz. Don't know what to do.
> 
> But when I'm on the main menu. The GPU goes to 99% usage and so much higher fps. So I don't get it. It's supposed to be the opposite.


Try 'Developer Mode' via the main Options tab in FS2020... that introduces a toolbar on top. Choose options in there and click on 'Display FPS'... that will show you more than just FPS, i.e. 'limited by main thread', which refers to CPU and network, vs 'limited by GPU'. During actual busy flight sequences at 4K, i.e. low over larger metro areas, it should tell you whether it is more of a CPU or GPU issue.


----------



## CS9K

J7SC said:


> Try 'Developer Mode' via the main Options tab in FS2020... that introduces a toolbar on top. Choose options in there and click on 'Display FPS'... that will show you more than just FPS, i.e. 'limited by main thread', which refers to CPU and network, vs 'limited by GPU'. During actual busy flight sequences at 4K, i.e. low over larger metro areas, it should tell you whether it is more of a CPU or GPU issue.


Try this, yes. I still think that if you haven't played since Sim Update 7 came out, then Sim Update 7 is the reason why your GPU is performing poorly.

At Dallas-Ft. Worth International airport, I get ~40-45fps with photogrammetry on, and 55-60fps with photogrammetry off, at 4k resolution. It is a _huge_ difference, and something that will be fixed in the upcoming hotfix, @Bobbydo


----------



## J7SC

CS9K said:


> Try this, yes. I still think that if you haven't played since Sim Update 7 came out, then Sim Update 7 is the reason why your GPU is performing poorly.
> 
> At Dallas-Ft. Worth International airport, I get ~40-45fps with photogrammetry on, and 55-60fps with photogrammetry off, at 4k resolution. It is a _huge_ difference, and something that will be fixed in the upcoming hotfix, @Bobbydo


A related question is whether we're talking about the 'regular' DX11 or the new DX12 'beta' setting. The latter has decent overall frame rates on both my systems, but the 1% and 10% low frame times are 'jerkomotion', no matter whether it's with an OC'ed 3090 Strix or dual 2080 Tis in SLI-CFR.


----------



## ZealotKi11er

Godhand007 said:


> What do you mean your card can't maintain 1.1v? If you have a reference card your voltages should be around 1.100mv-1.150mv. Also, like I mentioned before the actual voltage is around 1.210mv during 3d loads as per HWiNFO. 1.260mv is never reached just like 1175mv is never reached for a stock configuration, even though that's what we see in Wattman and GPU-Z.


I am talking about the set voltage. My 6900 XT is set to 1.175 V. It does not reach that because I am power limited. You did the trick, which sets the VMin to 1.26 V. What the hardware actually applies is completely different. Under load I never hit 1.175 V at 255 W.


----------



## ptt1982

ZealotKi11er said:


> You can lower SOC more if you bring FCLK down.


Appreciate your input here. Two questions:

1) What's the effect of bringing FCLK down on the 6900 XT?
2) How much should I bring it down if my card is stable at core 900 mV / SoC 1000 mV (anything below, no matter the frequency, is unstable) @ 2080 MHz max core, memory fast timings 2080 MHz? 100 MHz? 300 MHz? 500 MHz? 50 MHz?

Thank you.


----------



## Kawaz

Anyone else having issues with a 12900K and a heavily OC'd 6900 XT?

My old settings from my 5900X rig just keep crashing. Especially 3DMark is horrible compared to my old 5900X system. The CPU score is as expected from the 12900K, but I'm down like 500-1000 GPU points at the exact same GPU settings.

All done on a fresh Windows 11 install.
Also did a run of Tomb Raider 1080p maxed (according to the bench settings over on OCUK), and I did a 174 fps run with my 5900X/[email protected]

Same settings with the 12900K and I'm down like 10+ fps.

Also did some benches like CB23 and stuff and the scores are where they should be. It's just the experience in my daily games, as well as 3DMark, that properly sucks. Had some hiccups in CS:GO and PUBG as well.

Gonna tweak a couple of days more; if I can't figure it out, it's getting swapped for a 5950X for them benches.

Any input is welcome. I'm a bit at a loss atm.

Full setup:
12900K
MSI Z690 Force
32 GB Vengeance DDR5
Full custom loop.


----------



## Godhand007

ZealotKi11er said:


> I am talking set voltage. My 6900 XT is set to 1.175v. It does not reach that because I am power limited. You did the trick which set the vmin to 1.26v. What hw sets is completely different. Under load I never hit 1.175v at 255w.


What HWiNFO reports is the "real" voltage, so whatever happens to the card depends on that voltage, not on what's shown in Wattman or GPU-Z. FYI, the 1.262 V (shown in GPU-Z) is not constantly applied, only during high 3D load (1-2 hours a day on average right now).


----------



## ZealotKi11er

Godhand007 said:


> What HWiNFO reports is the "real" voltage, so whatever happens to the card depends on that voltage, not on what's shown in Wattman or GPU-Z. FYI, the 1.262 V (shown in GPU-Z) is not constantly applied, only during high 3D load (1-2 hours a day on average right now).


If you set the VMin to 1.26 V with the trick, any load will run at 1.26 V unless you are not using the PC at all.
Also, the overuse of MPT for more power, plus this voltage trick, will probably mean even more lockdown on next-gen GPUs from AMD. Get ready for the Nvidia treatment.


----------



## Godhand007

ZealotKi11er said:


> If you set the vmin to 1.26v with the trick, any load will set 1.26v unless you are not using the pc at all.
> Also the overuse of MPT for more power and this voltage trick will probably mean even more lockdown with next gen GPUs from AMD. Get ready for the Nvidia treatment.


Nah. Unless you mean the GPU-Z/Wattman voltage, this is not true for me. Refer to this post where I have provided exact details.

About AMD locking things down: they have been moving in that direction already. I doubt the ~0.01% of AMD GPU users using MPT is going to deter them. It just means enjoy what we have for now.


----------



## ZealotKi11er

Godhand007 said:


> Nah, Unless you mean GPU-Z/Wattman voltage, this is not true for me. Refer to this post where I have provided exact details.
> 
> About AMD locking things up, they have been moving in that direction already. I doubt ~0.01 AMD GPU users not using MPT is going to deter them. It just means enjoy what we have for now.


More than 0.01% of AMD users use MPT, especially on the 6800/6900.


----------



## kennypc4k

I am currently using a Sapphire 6900 XT (original boost clock 2275 MHz) with the power limit unlocked via MPT. The card runs ~2400-2500 MHz in games. I am considering buying another 6900 XT that officially boosts to 2475 MHz; would there be any gain for me? Would that card run at like 2600-2700 MHz? Thanks!


----------



## Bobbydo

J7SC said:


> Try 'Developer Mode' via the main Options tab in FS2020...that introduces a toolbar on top. Choose options in there and click on 'Display FPS'...that will show your more than just FPS, ie. 'limited by main thread' that refers to CPU and network vs 'limited by GPU'. During actual, busy flight sequences at 4K, ie. low over larger metro areas, it should tell you whether it is more of a CPU or GPU issue.


Limited by GPU. Not main thread


----------



## lestatdk

kennypc4k said:


> I am currently using Sapphire 6900 XT (original boost clock 2275mhz) with MPT unlocked the power. The card is running ~ 2400-2500 MHz in game. I am considering to buy another 6900 XT that officially boost to 2475 MHz, would there be any gain for me? Is the card going to run like 2600-2700 MHz? Thanks!


To be sure you'd need an XTXH card. Those usually boost way higher than regular XTX


----------



## J7SC

Bobbydo said:


> Limited by GPU. Not main thread


It's good to know that. In that case, @CS9K 's point about the latest patch (#7 ?) slowing things down is probably the culprit. FS2020 loves doing 'mandatory patches' for previous 'mandatory patches'  so it will probably get fixed reasonably soon. BTW, are you playing FS2020 on DX11 or the new DX12 beta?


----------



## frankangelillo01

What's up guys/girls... Just picked up my Sapphire Nitro+ SE to complete my build. Stock it's crazy, but if I wanted to do some slight overclocking through the AMD software, what are some example values to start with? What min and max MHz are you Sapphire guys running? I upgraded from a 6700xt and the difference is crazy. I thought the 6700xt was awesome. Thank you in advance.


----------



## cfranko

kennypc4k said:


> I am currently using Sapphire 6900 XT (original boost clock 2275mhz) with MPT unlocked the power. The card is running ~ 2400-2500 MHz in game. I am considering to buy another 6900 XT that officially boost to 2475 MHz, would there be any gain for me? Is the card going to run like 2600-2700 MHz? Thanks!


The boost clock doesn't matter; just raise the frequency yourself if you want more.


----------



## CS9K

J7SC said:


> It's good to know that. In that case, @CS9K 's point about the latest patch (#7 ?) slowing things down is probably the culprit. FS2020 loves doing 'mandatory patches' for previous 'mandatory patches'  so it will probably get fixed reasonably soon. BTW, are you playing FS2020 on DX11 or the new DX12 beta?


FS2020 is still limited by MainThread for me both with and without photogrammetry on, at 4k with the hotfix. It looks like the threading changes are not being fixed in the hotfix :<


----------



## bloot

Halo Infinite campaign mode benchmarked Halo Infinite im Technik-Test: Benchmarks in FHD, WQHD & UHD, Frametimes, Treibervergleich und VRAM - ComputerBase


----------



## tolis626

frankangelillo01 said:


> What's up guys/girls..Just picked up my Sapphire Nitro+ SE to complete my build. Stock it's crazy but if I wanted to do some slight overclocking through the AMD software what are some example values to start with? What are some Min and Max Mhz you Sapphire guys are running? I upgraded from a 6700xt and the difference is crazy. I thought the 6700xt was awesome. Thank you in advance.


Well, if my card is anything to go by, don't expect much in terms of overclocking. I mean, there's always the silicon lottery, which you may win unlike me, but the cooler does tend to struggle if you push the power higher. It can mostly handle the 330W or so that you get without MPT (power slider at +15% and fan at 100%), but if you want to push it further with MPT, it's going to have a hard time. I usually run mine at min 500MHz / max 2600MHz at 1.15V, memory at 2150MHz with fast timings and an aggressive fan curve.

That said, I came from a 5700XT to a 6900XT and the difference is massive. Even comparing a max-overclocked 5700XT to a stock 6900XT, there was no comparison. So no matter what, the performance is there. Unless you're after numbers in benchmarks, don't sweat it too much.


bloot said:


> Halo Infinite campaign mode benchmarked Halo Infinite im Technik-Test: Benchmarks in FHD, WQHD & UHD, Frametimes, Treibervergleich und VRAM - ComputerBase
> 
> View attachment 2535929
> View attachment 2535928


Well, that's interesting. I didn't expect Halo to love AMD that much. That's a fresh change from the norm. That said, what the hell is wrong with the 5700XT? It shouldn't be getting 1/3 of the performance of the 6900XT.


----------



## LtMatt

bloot said:


> Halo Infinite campaign mode benchmarked Halo Infinite im Technik-Test: Benchmarks in FHD, WQHD & UHD, Frametimes, Treibervergleich und VRAM - ComputerBase
> 
> View attachment 2535929
> View attachment 2535928


You love to see it. Great showing for RDNA2, nicely ahead in Forza 5 too, with big gains in 21.12.1.


----------



## J7SC

CS9K said:


> FS2020 is still limited by MainThread for me both with and without photogrammetry on, at 4k with the hotfix. It looks like the threading changes are not being fixed in the hotfix :<


...so a patch for the patch for the hotfix soon, then  . FYI, I've used Process Lasso before on FS2020 with some success re. threading.


----------



## CS9K

J7SC said:


> ...so a patch for the patch for the hotfix soon, then  . FYI, I've used Process Lasso before on FS2020 with some success re. threading.


Unfortunately, this issue is completely on their end to fix :<


----------



## J7SC

CS9K said:


> Unfortunately, this issue is completely on their end to fix :<


...yeah, in the meantime, I switched one system (SLI-CFR) back to DX11 which is a big improvement over DX12...I have a 1Gbps connection (also for work) which helps as slow connections also impact the 'main thread' at 4K Ultra. Fastest I have seen from the FS2020 server is around 120Mbps.


----------



## CS9K

J7SC said:


> ...yeah, in the meantime, I switched one system (SLI-CFR) back to DX11 which is a big improvement over DX12...I have a 1Gbps connection (also for work) which helps as slow connections also impact the 'main thread' at 4K Ultra. Fastest I have seen from the FS2020 server is around 120Mbps.


Yeah, the problem is definitely threading this time. The connection helps in situations like just after SU7 released, where it was the threading issue _and_ Azure eating **** that was tanking framerates as the thread that loaded scenery waited on data to come in. For now, things are loading and caching correctly, so it's just the threading changes that are bottlenecking the works.

I have confidence that they will fix it. I'm still _shocked_ that they've managed to make DX11 do what they have, because holy **** DX11 is _not_ ideal for highly parallel, heavy-number-crunching, multi-threaded apps like simulators. Fingers crossed that DX12's threading and scheduling abilities will allow them to develop a more efficient code setup for the sim, and bring performance up to being GPU-limited as it should be <3


----------



## tolis626

So, I started getting some crashes in Warzone after installing the latest driver, so I started digging to see what it was. Turns out, it was fclk. I posted about this before. I don't know what changed, but after running it for months at 2000MHz, it suddenly doesn't like it, so I left it at stock. Same with SOC, but turning down SOC alone didn't solve the issue. But, as expected, it worked like a charm afterwards. Then I thought maybe that 2000MHz fclk was limiting my overclocks all these months, so I tried overclocking a bit further again. 2650MHz at 1.175V does work, but it will crash eventually. It's a huge bummer having lost the silicon lottery with this card. And I don't even know why it's doing it; Warzone isn't even pushing it that hard. With the 5700XT I could play Warzone with semi-stable overclocks all day long, whereas other games would crash. Not with this thing, though. Every time I push it, it's like "NOPE!". You guys have any tips beyond the obvious stuff? Anything I could try (save for tricking the voltage higher; the cooling on this thing is barely hanging on for dear life at 350W and full voltage)?

Also, what's up with fclk? Any way to stabilize it? Or is it just what it is? I've read this whole thread, but I don't remember it being answered conclusively. And I'm still very much confused about that FclkMaxBoostFreq setting, I have no idea what it does or where I should set it.


----------



## Godhand007

tolis626 said:


> So, I started getting some crashes in Warzone after installing the latest driver, so I started digging, see what it is. Turns out, it was fclk. I posted about this before. I don't know what changed, but for some reason, after running it for months at 2000MHz, it just suddenly doesn't like it and I left it at stock. Same with SOC, but just turning down SOC didn't solve the issue. But, as expected, it worked like a charm afterwards. Then I thought, maybe that 2000MHz fclk was limiting my overclocks all these months, so I tried overclocking a bit further again. 2650MHz at 1.175V does work, but I will crash eventually. It's a huge bummer, having lost the silicon lottery with this card. And I don't even know why it's doing it, Warzone isn't even pushing it that hard. With the 5700XT I could play Warzone with semi stable overclocks all day long, whereas other games would crash. Not with this thing though. Everytime I push it it's like "NOPE!". You guys have any tips beyond the obvious stuff? Anything I could try (save for tricking the voltage higher, the cooling on this thing is barely hanging on for dear life at 350W and full voltage)?
> 
> Also, what's up with fclk? Any way to stabilize it? Or is it just what it is? I've read this whole thread, but I don't remember it being answered conclusively. And I'm still very much confused about that FclkMaxBoostFreq setting, I have no idea what it does or where I should set it.


The general opinion here is that, save for a voltage increase, there is pretty much nothing else you can do to OC further. I have a reference 6900XT and have increased the voltage to further my OC. I am sure your card has better cooling than the reference one.


----------



## J7SC

CS9K said:


> Yeah, the problem is definitely threading this time. The connection helps in situations like just after SU7 released, where it was the threading issue _and_ Azure eating **** that was tanking framerates as the thread that loaded scenery waited on data to come in. For now, things are loading and caching correctly, so it's just the threading changes that are bottlenecking the works.
> 
> I have confidence that they will fix it. I'm still _shocked_ that they've managed to make DX11 do what they have, because holy **** DX11 is _not_ ideal for highly parallel, heavy-number-crunching, multi-threaded apps like simulators. Fingers crossed that DX12's threading and scheduling abilities will allow them to develop a more efficient code setup for the sim, and bring performance up to being GPU-limited as it should be <3


I agree - the DX11 implementation is something else, and I'm happy to stay on it until they fix the latest threading issues and other problems, including the 1% and 10% frame times on DX12.

I do use a very big rolling cache on a separate drive which helps a lot when Azure servers are wheezing. With the new faster connection, I probably will also go for a big manual cache and download max detail for those areas I spent the most time in.


----------



## CS9K

tolis626 said:


> So, I started getting some crashes in Warzone after installing the latest driver, so I started digging, see what it is. Turns out, it was fclk. I posted about this before. I don't know what changed, but for some reason, after running it for months at 2000MHz, it just suddenly doesn't like it and I left it at stock. Same with SOC, but just turning down SOC didn't solve the issue. But, as expected, it worked like a charm afterwards. Then I thought, maybe that 2000MHz fclk was limiting my overclocks all these months, so I tried overclocking a bit further again. 2650MHz at 1.175V does work, but I will crash eventually. It's a huge bummer, having lost the silicon lottery with this card. And I don't even know why it's doing it, Warzone isn't even pushing it that hard. With the 5700XT I could play Warzone with semi stable overclocks all day long, whereas other games would crash. Not with this thing though. Everytime I push it it's like "NOPE!". You guys have any tips beyond the obvious stuff? Anything I could try (save for tricking the voltage higher, the cooling on this thing is barely hanging on for dear life at 350W and full voltage)?
> 
> Also, what's up with fclk? Any way to stabilize it? Or is it just what it is? I've read this whole thread, but I don't remember it being answered conclusively. And I'm still very much confused about that FclkMaxBoostFreq setting, I have no idea what it does or where I should set it.


As best I can tell, the fclk is the cache frequency. Like with the compute units in the core, the cache is just another part of the silicon. Your GPU as a whole is only as fast/powerful/capable as its slowest individual part (the cache is scattered around the GPU die in 4MB blocks). The silicon as a whole can only do so much at a specific voltage. Once you find that limit, then that's it.
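Put another way, the "slowest individual part" reasoning above reduces to a min() over the per-domain stable limits of the die. A throwaway sketch with made-up numbers (the domain names and values here are purely illustrative, not measured figures):

```python
# Illustrative only: a whole-die overclock is bounded by whichever domain
# (compute units, the cache blocks scattered around the die, etc.)
# destabilizes first at a given voltage.

def stable_core_clock(domain_limits_mhz):
    """The card is only stable up to its weakest domain's limit."""
    return min(domain_limits_mhz.values())

limits = {"compute_units": 2700, "infinity_cache": 2550, "front_end": 2650}
print(stable_core_clock(limits))  # 2550 -- the cache blocks set the ceiling
```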


----------



## Blameless

tolis626 said:


> So, I started getting some crashes in Warzone after installing the latest driver, so I started digging, see what it is. Turns out, it was fclk. I posted about this before. I don't know what changed, but for some reason, after running it for months at 2000MHz, it just suddenly doesn't like it and I left it at stock. Same with SOC, but just turning down SOC didn't solve the issue. But, as expected, it worked like a charm afterwards. Then I thought, maybe that 2000MHz fclk was limiting my overclocks all these months, so I tried overclocking a bit further again. 2650MHz at 1.175V does work, but I will crash eventually. It's a huge bummer, having lost the silicon lottery with this card. And I don't even know why it's doing it, Warzone isn't even pushing it that hard. With the 5700XT I could play Warzone with semi stable overclocks all day long, whereas other games would crash. Not with this thing though. Everytime I push it it's like "NOPE!". You guys have any tips beyond the obvious stuff? Anything I could try (save for tricking the voltage higher, the cooling on this thing is barely hanging on for dear life at 350W and full voltage)?


I would recommend finding some tests that reliably crash the card at as low clocks as possible and use those for the first run of testing, so that you don't run into any unpleasant surprises later.

In the case of my 6800XT, for example, this is Night Raid...which is only stable at a full 100MHz lower than Time Spy graphics test 2, despite needing much less power. This is likely because my PowerColor Red Dragon has weak input filtering, less than ideal output filtering, and because the test produces wild transient loads due to the frame rates it will reach (1600+) if not CPU limited. My new ASRock 6900XT(XH) OCF has a much better VRM (especially with regards to filtering capacitors) and in initial testing can run Night Raid much closer to the clocks I can use for other tests.

Anyway, trimming power and clocks elsewhere, so you can put a bit more into the GPU, is often beneficial. SoC voltage, memory VDD, and VDDCI can frequently be tuned down, and this can free up 10-20W for the GPU. Reducing LCLK slightly can also improve stability a bit with negligible performance impact.
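As a rough illustration of the budgeting described above (all rail names and wattages here are hypothetical examples, not measured values), reallocating savings from secondary rails into the GPU core's share of a fixed board power limit might look like:

```python
# Hypothetical power-budget sketch: a fixed board limit is split between
# the GPU core rail and secondary rails (SoC, memory VDD, VDDCI, ...).
# Undervolting the secondary rails leaves more of the budget for the core.

def gpu_headroom(board_limit_w, secondary_rails_w):
    """Watts left for the GPU core after the secondary rails take their share."""
    return board_limit_w - sum(secondary_rails_w.values())

stock = {"soc": 35, "mem_vdd": 20, "vddci": 10}  # example stock draws
tuned = {"soc": 28, "mem_vdd": 16, "vddci": 6}   # after undervolting

before = gpu_headroom(350, stock)  # 285 W for the core
after = gpu_headroom(350, tuned)   # 300 W for the core
print(f"freed up for the GPU core: {after - before} W")  # in the 10-20 W range mentioned
```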

It's also possible that GDDR6 instability is a culprit. 2150 with fast timings is fully stable on almost no 16Gbps GDDR6 equipped Navi21 part, but it can be very difficult to isolate memory instability if it's borderline. Usually takes me 24-72 hours of mining with ReBAR enabled to throw an error that is clearly caused by memory clock.



tolis626 said:


> Also, what's up with fclk? Any way to stabilize it? Or is it just what it is?


More SoC and GFX voltage can sometimes stabilize higher FCLKs, but I usually leave it alone as it's not an efficient way to get more performance out of either of my Navi21 samples.

Reducing associated clocks that hang off the fabric, like SoC, PHY, or LCLK, might help, but probably not without harming performance before any FCLK related gains are realized.



tolis626 said:


> And I'm still very much confused about that FclkMaxBoostFreq setting, I have no idea what it does or where I should set it.


It doesn't control FCLK directly, that's for sure.

Best anyone can figure is that when the FCLK is at or below the listed frequency, due to low GPU load, the GPU will try to enable the MALL power-saving feature and refresh the frame buffer from cache rather than having to power up the main memory. It could also be the reverse, with MALL capping FCLK boost to save power at lower loads. Regardless, unless GPU use is very low, what you set under MALL for FclkMaxBoostFreq shouldn't matter; any significant load should keep the FCLK at the max boost clock set in the column to its left in MPT.
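A toy model of that reading, purely speculative since nobody seems to know what the setting really does (the load threshold and clock values below are invented for illustration):

```python
# Speculative sketch of the guessed FclkMaxBoostFreq/MALL interaction:
# at low GPU load the fabric clock drops, and once it sits at or below
# the configured threshold, MALL can serve the frame buffer from cache
# instead of waking main memory.

def mall_active(fclk_mhz, fclk_max_boost_mhz, gpu_load):
    """True if the power-saving path would engage under this reading."""
    return gpu_load < 0.10 and fclk_mhz <= fclk_max_boost_mhz

# Desktop idle: low load, fabric clocked down -> MALL can engage.
print(mall_active(fclk_mhz=500, fclk_max_boost_mhz=1200, gpu_load=0.02))   # True
# Gaming: significant load keeps fclk at max boost -> the setting is moot.
print(mall_active(fclk_mhz=1940, fclk_max_boost_mhz=1200, gpu_load=0.99))  # False
```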






Radeon Linux Driver Seeing "MALL" Feature For Big Navi - Phoronix (www.phoronix.com)


----------



## J7SC

Blameless said:


> (...) In the case of my 6800XT, for example, this is Night Raid...which is only stable at a full 100MHz lower than Time Spy graphics test 2, despite needing much less power. This is likely because my PowerColor Red Dragon has weak input filtering, less than ideal output filtering, and because the test produces wild transient loads due the the frame rates it will reach (1600+) if not CPU limited. My new ASRock 6900XT(XH) OCF has a much better VRM (especially with regards to filtering capacitors) and in initial testing can run Night Raid much closer to the clocks I can use for other tests.
> (...)


Visually, I love Night Raid, but when I see well over 1000 FPS, I get a bit nervous (what with Amazon's New World, etc.), even with a solid 3x8-pin PCB and decent filtering.

On VRAM, I noticed some odd things: First, when leaving the stock vbios values, HWInfo will usually show 2150 / fast timings in the 'max' column, but with higher MPT PL values, it tends to drop to 2140 or so. Second, when opening MSI AB after the Radeon software tuning has been set and with a higher-than-stock MPT PL, the VRAM slider in MSI AB doesn't show anything, but moving it to 2150 results in a subsequent HWInfo VRAM speed of 2144 + more often than not. That doesn't make much of a difference anyhow, but it is kind of weird.

Finally, my weekly complaint about XTX cards having their VRAM speed capped at 2150. I do think there's more in my GPU, not least because if I input some XTXH vBIOS values from a related card, the built-in Radeon software suggests an overclock of 2260 with fast timings enabled for the VRAM after its little test (though GPU speed drops to safe mode when trying to bench). I don't know how accurate that AMD Radeon test is, but I would think it would be conservative. It annoys me to no end that I cannot overclock the VRAM to either a speed where efficiency declines per my standard benches, or where the card outright crashes.


----------



## Blameless

J7SC said:


> Visually, I love Night Raid, but when I see well over 1000 FPS, I get a bit nervous (what with Amazon's NewWorld etc) even with a solid 3x8 pin PCB and decent filtering.


No test should kill a non-defective part without spec power or temperature limits being wildly exceeded somewhere.

One of the reasons I still have 3DMark 2003, UT 2003, and Quake 3 (all of which can hit 5000+ fps with a fast CPU on modern cards), along with Night Raid, in my box of tests is to blow up broken cards soon after I get them, so they can be sent back during the e-tailer's return period.



J7SC said:


> On VRAM, I noticed some odd things: First, when leaving the stock vbios values, HWInfo will usually show 2150 / fast timings in the 'max' column, but with higher MPT PL values, it tends to drop to 2140 or so. Second, when opening MSI AB after the Radeon software tuning has been set and with a higher-than-stock MPT PL, the VRAM slider in MSI AB doesn't show anything, but moving it to 2150 results in a subsequent HWInfo VRAM speed of 2144 + more often than not. That doesn't make much of a difference anyhow, but it is kind of weird.


I've noticed similar weirdness with how memory clocks are actually calculated. I figure there is some multiplier of the card's reference clock at work that varies with the various GPU clock curves, but I haven't really investigated it in detail.

As long as the clock speed one sets is what determines the memory timing table/strap used--and as far as I can tell, it does--that's good enough. The 2124MHz (slider) sweet spot (2125 kicks into a looser set of timings) is about all that one can expect to get unconditionally stable on these 16Gbps parts anyway. I've been fooled into thinking I was stable higher, but eventually I always found an issue and had to back off. In practice I use 2100 fast (2088-2090 actual) on my 6800 XT and 2124 (fast) on my 6900 XTXH. I'm still testing the 6900, however, and that may prove to be less than completely stable.
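The strap behavior described above amounts to a step function over the slider value: the clock you set selects a timing table, and crossing a boundary kicks the card into looser timings. A minimal sketch, assuming just the one 2124/2125 boundary mentioned in the post (real cards have several straps, and the exact thresholds are this poster's observation, not documented values):

```python
# Sketch of timing-strap selection: the memory clock you set picks the
# timing table, and going past the boundary selects looser timings.

FAST_STRAP_CEILING = 2124  # per the post: 2125 on the slider already selects looser timings

def timing_strap(slider_mhz):
    return "fast/tight" if slider_mhz <= FAST_STRAP_CEILING else "loose"

print(timing_strap(2100))  # fast/tight
print(timing_strap(2124))  # fast/tight  (the sweet spot)
print(timing_strap(2125))  # loose      (past the strap boundary)
```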

Most who think they are stable with fast at 2140+ are either on water, or can be proven wrong if they force worst case GDDR6 temps (I test for VRAM stability at ~10C hotter than I ever see it get in games, or 85C memory junction, whichever is higher) and run something like ethash/dagger-hashimoto long enough. Some games can certainly be stable though, at least until they aren't. But, I'm a real stickler for stability...I go out of my way to find problems before they find me.



J7SC said:


> Finally, my weekly complaint about XTX cards having a capped VRAM speed at 2150. I do think on my GPU there's more in it, not least if I input some XTXH vbios values of a related card, the build-in Radeon software suggest an overclock of 2260 w/ fast timing enabled for the VRAM after its little test (though GPU speed drops to safe mode when trying to bench). I don't know how accurate that AMD Radeon test is, but I would think it would be conservative. It annoys me to no end that I cannot overclock VRAM to either a speed where efficiency declines per my standard benches, or the card outright crashes.
> View attachment 2536038


AMD's auto overclocking tool produces blatantly unstable results on every card I've tried it on. It's useless.

The cap is silly (some exceptional samples surely exist, and with looser timing tables, who knows how far the memory will scale), but I'm convinced most cards aren't missing out on anything by it being there. I'm not even reliably bench stable past 2150-2160 (standard timings) on my XTXH, even if I keep the memory below 45C while over/undervolting everything related to it. Auto overclock is still spitting out figures in the 2200-2300 range.


----------



## Illuminado

Any good hard and fast tips for overclocking this card? I've increased the power limit to 15% (as that's all we can do). Any good results from playing with the voltage, frequencies, or VRAM? I have a lot of thermal headroom at the moment, as I've just finished a new custom loop. I was just in-game with stock settings and it's now happily at about 37 degrees under full load. Makes a big difference from pushing 80-90 under air!


----------



## jonRock1992

J7SC said:


> Visually, I love Night Raid, but when I see well over 1000 FPS, I get a bit nervous (what with Amazon's NewWorld etc) even with a solid 3x8 pin PCB and decent filtering.
> 
> On VRAM, I noticed some odd things: First, when leaving the stock vbios values, HWInfo will usually show 2150 / fast timings in the 'max' column, but with higher MPT PL values, it tends to drop to 2140 or so. Second, when opening MSI AB after the Radeon software tuning has been set and with a higher-than-stock MPT PL, the VRAM slider in MSI AB doesn't show anything, but moving it to 2150 results in a subsequent HWInfo VRAM speed of 2144 + more often than not. That doesn't make much of a difference anyhow, but it is kind of weird.
> 
> Finally, my weekly complaint about XTX cards having a capped VRAM speed at 2150. I do think on my GPU there's more in it, not least if I input some XTXH vbios values of a related card, the build-in Radeon software suggest an overclock of 2260 w/ fast timing enabled for the VRAM after its little test (though GPU speed drops to safe mode when trying to bench). I don't know how accurate that AMD Radeon test is, but I would think it would be conservative. It annoys me to no end that I cannot overclock VRAM to either a speed where efficiency declines per my standard benches, or the card outright crashes.
> View attachment 2536038


Don't trust the vram auto OC. It's massively incorrect. It recommended 2250MHz for me, and it was only actually stable at 2170MHz without the fast-timings. With the fast-timings it's fully stable at 2150MHz.


----------



## Blameless

I just ran the auto OC test again while my most recent custom SPPT was loaded and it's actually reporting clocks that are pretty close to accurate this time...it's suggesting 2150 memory with the -50mV undervolt I have on both the MVDD and VDDCI.

Edit: reverted back to stock no-SPPT settings and auto OC is only reporting 2160 on the memory now. Maybe the card needed to be broken in, or the higher temp from the zero fan speed mode (which I normally turn off) being active kept it from overestimating too radically?

Edit2: Now it says 2210MHz again. I think Wattman is drunk.


----------



## J7SC

Blameless said:


> No test should kill a non-defective part without spec power or temperature limits being wildly exceeded somewhere.
> (...)
> Most who think they are stable with fast at 2140+ are either *on water*, or can be proven wrong if they force worst case GDDR6 temps (I test for VRAM stability at ~10C hotter than I ever see it get in games, or 85C memory junction, whichever is higher) and run something like ethash/dagger-hashimoto long enough. Some games can certainly be stable though, at least until they aren't. But, I'm a real stickler for stability...I go out of my way to find problems before they find me.
> (...)
> AMD's auto overclocking tool produces blatantly unstable results on every card I've tried it on. It's useless.
> The cap is silly (some exceptional samples surely exist, and with looser timing tables, who knows how far the memory will scale), but I'm convinced most cards aren't missing out on anything by it being there. (...)





jonRock1992 said:


> Don't trust the vram auto OC. It's massively incorrect. It recommended 2250MHz for me, and it was only actually stable at 2170MHz without the fast-timings. With the fast-timings it's fully stable at 2150MHz.


Thanks folks 

On the '1000 fps', I'm not necessarily worried about it (...I don't even have Amazon's New World, nor any of the mostly-NVidia card vendor brands which had the trouble). But I cap fps at 200 (running mostly 4K) on my NVidia RTX cards anyhow, as I have seen 700+ fps in load screens in e.g. FS2020, and will probably do something similar on the 6900XT.

Per pic collage below, VRAM temps are very good (ambient was a cozy 24 C) as this has an extensive water-cooling setup. I also used thermal putty for the VRAM as well as the back of the die / back of the PCB and added a big heatsink on the back.

On VRAM clock caps, I don't think the AMD 'auto-overclock' feature is to be trusted either...its VRAM test is super quick / rudimentary. All that said, I've built up a nice database of benches and tests for OC'ed GPUs over a decade plus, which includes other recent GDDR6 models, and I like to at least '_have the option_' to slide down the efficiency curve, or outright freeze / black-screen / crash. I don't like caps, especially after going the extra mile on cooling, unless I'm the one setting the cap / limit...


----------



## tolis626

Godhand007 said:


> The general opinion here is that save for voltage increase, there is pretty much nothing else you can do to OC further. I have a reference 6900XT and I have increased the voltage to further my OC, I am sure your card has better cooling than reference one.


Yup, that's what I meant by "beyond the obvious". More voltage isn't really an option, as I'm already at 1.175V, which is the max for an XTX card. Like an idiot, I went and bought one without checking what it really was; I wanted an XTXH card. Oh well, at least the performance I paid for is there. That said, I don't want to push voltage with MPT trickery. Not so much out of concern for the card's longevity (I wouldn't ever run it 24/7), but mostly because I can already max out my card's cooling at stock voltage, so there's no point going further. I only see about 1.125V most of the time under load anyway, so better cooling would drop temps, which would let me push power limits, which would then let the card hold its voltage higher, resulting in higher clocks. But, as it stands, that's not gonna happen.


Blameless said:


> I would recommend finding some tests that reliably crash the card at as low clocks as possible and use those for the first run of testing, so that you don't run into any unpleasant surprises later.
> 
> In the case of my 6800XT, for example, this is Night Raid...which is only stable at a full 100MHz lower than Time Spy graphics test 2, despite needing much less power. This is likely because my PowerColor Red Dragon has weak input filtering, less than ideal output filtering, and because the test produces wild transient loads due the the frame rates it will reach (1600+) if not CPU limited. My new ASRock 6900XT(XH) OCF has a much better VRM (especially with regards to filtering capacitors) and in initial testing can run Night Raid much closer to the clocks I can use for other tests.
> 
> Anyway, trimming power and clocks elsewhere, so you can put a bit more into the GPU is often beneficial. SoC voltage, memory VDD, and VDDCI can frequently be tuned down, and this can free up 10-20w for the GPU. Reducing LCLK slightly can also improve stability a bit with negligible performance impact.
> 
> It's also possible that GDDR6 instability is a culprit. 2150 with fast timings is fully stable on almost no 16Gbps GDDR6 equipped Navi21 part, but it can be very difficult to isolate memory instability if it's borderline. Usually takes me 24-72 hours of mining with ReBAR enabled to throw an error that is clearly caused by memory clock.
> 
> 
> 
> More SoC and GFX voltage can sometimes stabilize higher FCLKs, but I usually leave it alone as it's not an efficient way to get more performance out of either of my Navi21 samples.
> 
> Reducing associated clocks that hang off the fabric, like SoC, PHY, or LCLK, might help, but probably not without harming performance before any FCLK related gains are realized.
> 
> 
> 
> It doesn't control FCLK directly, that's for sure.
> 
> Best anyone can figure is that it when the FCLK is at or below the frequency listed, due to GPU load, the GPU will try to enable the MALL power saving feature and refresh the frame buffer from cache rather than having to power up the main memory. Could possibly be the reverse...with MALL capping FCLK boost to save power at lower loads. Regardless, unless GPU use is very low, what you set under MALL for FclkBoostFreq shouldn't matter...any significant load should keep the FCLK at the max boost clock set in the column to the left in MPT.
> 
> 
> 
> 
> 
> 
> Radeon Linux Driver Seeing "MALL" Feature For Big Navi - Phoronix (www.phoronix.com)


Well, for me that reliably crashing test is Time Spy, but it's problematic. Say I run it at the settings I'm currently testing: 2650MHz at 1.175V with 2150MHz fast-timing memory. GT2 will usually shoot straight up to my set PL of about 350W, make the card run rather hot, drop the clocks, have them bounce between overboosting and 2400MHz, then crash at some point. In fact, it's a coin flip whether it will pass Time Spy on my gaming OC of 2600MHz at 1.15V with memory at 2150MHz FT, which has been rock solid (well, apart from those weird fclk crashes I got recently) in every game I've tried for months now. Conversely, if I drop power so that temps are better in check, it may pass with a higher OC, but only because it's consistently running low clocks due to power constraints. 

Anyway, I'm quite sure that my memory is stable at 2150MHz FT (I even did some mining to check), but I will entertain the idea that it may not be and I will be trying it at 2100-2124MHz for the next few days. Maybe if I'd run mining for longer, it could've crashed. Maybe pushing memory that little bit higher causes core instability. I dunno, so I have to test it. Let's see! Having said that, my memory at least stays nice and cool. During gaming it won't even hit 60C, and the highest I think I've seen was in the high 60s-low 70s during mining.

As for FCLK, it's a damn mess, I'm telling you! I may fiddle around with the other stuff you mentioned, but I don't expect VDDCI or MVDD to make much of a difference. Not for the better, at least. SOC at under 1.1V is unstable on my card, so I don't really see the point in dropping it by just a few mV.

Thank you for your insight man. And if I break my card with your suggestions, I know I can't blame you. You are, after all, Blameless.  

Cheers!


----------



## Blameless

tolis626 said:


> GT2 will almost immediately shoot up to my set PL of about 350W, make the card run rather hot, drop the clocks, then have them bounce around between overboosting and 2400MHz, then crash at some point.


Hitting the power limiter can induce instability itself as this prompts rather extreme transient power spikes. The card will issue a bunch of halt states to get power down, then immediately go back to tripping the power limiter again. Somewhat counterintuitively, a borderline limiter can cause serious spikes in current demand.

This effect can be seen in power consumption tests of cards in reviews. For example:









Limiters keep average power consumption in check, which is important for cooling, but they also increase jitter considerably.
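The jitter effect can be illustrated with a toy bang-bang control loop: a card whose demand sits just over its power cap keeps bouncing between boosting and halting, while one with headroom settles. All numbers below are made up for illustration; real boost algorithms are far more sophisticated than this.

```python
# Toy model of a power limiter: each step, compute power from the current
# clock (roughly power ~ f^2 at fixed voltage), then either throttle hard
# (limit exceeded) or boost back up. Numbers are illustrative only.

MAX_CLOCK = 2600.0  # MHz, assumed max boost clock

def simulate(power_limit_w: float, demand_w: float, steps: int = 20) -> list:
    """Return per-step power draw for a crude bang-bang limiter."""
    clock = MAX_CLOCK
    samples = []
    for _ in range(steps):
        power = demand_w * (clock / MAX_CLOCK) ** 2
        samples.append(power)
        if power > power_limit_w:
            clock -= 200.0  # halt states issued: clocks collapse
        else:
            clock = min(MAX_CLOCK, clock + 100.0)  # headroom: boost back up
    return samples

def jitter(samples: list) -> float:
    """Peak-to-peak power swing once the loop has settled."""
    tail = samples[10:]
    return max(tail) - min(tail)

headroom = simulate(power_limit_w=380.0, demand_w=330.0)    # limit never hit
borderline = simulate(power_limit_w=350.0, demand_w=355.0)  # hammering the cap

print(f"headroom jitter: {jitter(headroom):.1f} W")      # settles, no swing
print(f"borderline jitter: {jitter(borderline):.1f} W")  # keeps oscillating
```

The borderline run never converges: it keeps tripping the cap, collapsing the clock, boosting, and tripping it again, which is the transient-spike behavior described above.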

This is where tuning down everything else can help. If you have a thermal limit near 350w, another ~20w that can go to the GPU might keep it from hammering the limiter. If you can't reduce any secondary clocks/voltages further, see if you can get away with increasing the power limit.

As an aside, CPU limitations can have a similar effect in some tests. I'm getting crashes in Night Raid on my test bench that I don't get on my main system, even though the test bench has a better PSU, because the test bench also has a 3900X in it and can't keep a Navi21 at full GPU utilization in the test, while the 5800X in my main system generally can.


----------



## cfranko

My memory and VRM temperatures are not being reported in HWinfo64 anymore with the 21.12.1 driver. Is it the same for you guys?


----------



## LtMatt

cfranko said:


> My memory and VRM temperatures are not being reported in HWinfo64 anymore with the 21.12.1 driver. Is it the same for you guys?


Working fine here, using HWINFO64 V7.11-4550. Definitely not on the latest version and been using this version for a while.


----------



## Godhand007

tolis626 said:


> Yup, that's what I meant by "beyond the obvious". More voltage isn't really an option, as I'm at 1.175V already, which is the max for an XTX card. Like an idiot, I went and bought one without checking what it really was; I wanted an XTXH card. Oh well, at least the performance I paid for is there. That said, I don't want to push voltage with MPT trickery. Not so much because of concerns about the card's longevity (I wouldn't ever run it 24/7), but mostly because I can already easily max out my card's cooling at stock voltage, so there's no point in going further. I only see like 1.125V most of the time under load anyway, so I suppose better cooling would drop temps, so then I could push power limits, which would then allow the card to keep its voltage higher, resulting in higher clocks. But, as it stands, that's not gonna happen.


I got you the first time and that's why I said nothing else apart from voltage etc. Though there are multiple other things that can be done when you are undervolting/underclocking or trying to reduce power consumption.
As to your point about a voltage increase not being an option, it certainly is. As mentioned earlier, I have a reference card with the stock cooler and I was able to get around 150MHz more (2700MHz+) with the below MPT settings after increasing voltage. I understand that you might not want to try that for various reasons, but it definitely works.

Actual GPU voltage stays around ~1.223V during 3D loads.











Increased power limits won't result in an automatic increase in voltage due to LLC (others can correct me here), but they would give you higher clocks. But after a certain point, even if you increase your power limit to 500W, if the clocks are not stable then that's that.


----------



## lawson67

LtMatt said:


> Working fine here, using HWINFO64 V7.11-4550. Definitely not on the latest version and been using this version for a while.


I've got my Sapphire Toxic Extreme on my new Bykski waterblock now with LM applied. It fits perfectly and hotspot temp is in the high 60s/low 70s @ 420 watts. It's also scoring higher in TS now, as expected with the cooler junction temp, and is passing TS @ 2850MHz, so overall I am very happy with Bykski's new waterblock for the Sapphire Extreme. It also fits the Sapphire Nitro+ apparently, as it uses the Toxic PCB.


----------



## Godhand007

One more piece of info which many might already be familiar with: voltage and power consumption increase with resolution. I was using VSR near 4K (I have a 1440p monitor) and the voltage sometimes increased by up to ~25mV compared to 1440p.


----------



## tolis626

Blameless said:


> Hitting the power limiter can induce instability itself as this prompts rather extreme transient power spikes. The card will issue a bunch of halt states to get power down, then immediately go back to tripping the power limiter again. Somewhat counterintuitively, a borderline limiter can cause serious spikes in current demand.
> 
> This effect can be seen in power consumption tests of cards in reviews. For example:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Limiters keep average power consumption in check, which is important for cooling, but they also increase jitter considerably.
> 
> This is where tuning down everything else can help. If you have a thermal limit near 350w, another ~20w that can go to the GPU might keep it from hammering the limiter. If you can't reduce any secondary clocks/voltages further, see if you can get away with increasing the power limit.
> 
> As an aside, CPU limitations can have a similar effect in some tests. I'm getting crashes in Night Raid on my test bench that I don't get on my main system, even though the test bench has a better PSU, because the test bench also has a 3900X in it and can't keep a Navi21 at full GPU utilization in the test, while the 5800X in my main system generally can.


That actually makes a lot of sense. I realize that until now I had no idea how power limiters worked or what detriments they may cause. And this fits my experience exactly. Thanks for sharing mate!

Now that I'm thinking about it, I don't know if it's maybe my PSU that may not be able to handle these transients. I doubt it, because it hasn't buckled once, even under full load (CPU+GPU, pulling over 650W from the wall during a balls to the wall overclocked run), and I haven't seen its 12V line drop below 12.04V even once. Eh, if all goes well with my work, I might bite the bullet and get to building a custom loop at some point. If I do, I'll get a better (mostly higher power) PSU.

As I said, I'll give it a try, I'll test your suggestions. Maybe they work for me, maybe they don't, but either way I have nothing to lose.


Godhand007 said:


> I got you the first time and that's why I said nothing else apart from voltage etc. Though there are multiple other things that can be done when you are undervolting/underclocking or trying to reduce power consumption.
> As to your point about a voltage increase not being an option, it certainly is. As mentioned earlier, I have a reference card with the stock cooler and I was able to get around 150MHz more (2700MHz+) with the below MPT settings after increasing voltage. I understand that you might not want to try that for various reasons, but it definitely works.
> 
> Actual GPU voltage stays around ~1.223V during 3D loads.
> 
> View attachment 2536233
> 
> 
> 
> Increased power limits won't result in an automatic increase in voltage due to LLC (others can correct me here), but they would give you higher clocks. But after a certain point, even if you increase your power limit to 500W, if the clocks are not stable then that's that.


Well, I will insist that it's not an option for me. More than anything else, it's because the cooler is probably not mounted perfectly on my card, so I get mediocre to bad hotspot temps. At my current settings, a TimeSpy run will push 105C hotspot. If my hotspot were cooler, I would at least give it a shot, but as it stands, there's no point. As I said, I'm not even maxing out the current voltage limit. The card is set to 1.175V but it decides on its own not to use over 1.13V. The silicon has at least some headroom if I improve the cooling, even if I don't push max voltage.

As for your last point, I think you are correct. BUT! I don't think what I'm seeing is LLC at work. It's probably just the card's internal algo going "hmmm... Temps are here, power's here, well we need up to this voltage". I'm not saying that increased power limit will increase voltage, but at a certain point, the card will lower voltage due to power and temperature constraints. I think I'm in the last case.


lawson67 said:


> I've got my Sapphire Toxic Extreme on my new Bykski waterblock now with LM applied. It fits perfectly and hotspot temp is in the high 60s/low 70s @ 420 watts. It's also scoring higher in TS now, as expected with the cooler junction temp, and is passing TS @ 2850MHz, so overall I am very happy with Bykski's new waterblock for the Sapphire Extreme. It also fits the Sapphire Nitro+ apparently, as it uses the Toxic PCB.
> 
> View attachment 2536246
> View attachment 2536248


Ok, first off, that's a sick system and I hate you because I now want one too.  

I just want to make a small "correction". I am quite certain that the Nitro+ uses a different PCB than the Toxic. It's the Nitro+ SE that uses the Toxic PCB.


----------



## Godhand007

tolis626 said:


> Well, I will insist that it's not an option for me. More than anything else, it's because the cooler is probably not mounted perfectly on my card, so I get mediocre to bad hotspot temps. At my current settings, a TimeSpy run will push 105C hotspot. If my hotspot were cooler, I would at least give it a shot, but as it stands, there's no point. As I said, I'm not even maxing out the current voltage limit. The card is set to 1.175V but it decides on its own not to use over 1.13V. The silicon has at least some headroom if I improve the cooling, even if I don't push max voltage.
> 
> As for your last point, I think you are correct. BUT! I don't think what I'm seeing is LLC at work. It's probably just the card's internal algo going "hmmm... Temps are here, power's here, well we need up to this voltage". I'm not saying that increased power limit will increase voltage, but at a certain point, the card will lower voltage due to power and temperature constraints. I think I'm in the last case.


Without a voltage increase, I was able to get 2550MHz stable with a max TDP of ~360-370W. This was working fine (tested for months). Given that our chips are similar, you should be able to achieve at least that.

On the point about _voltage increase being proportional to power consumption_, I am not sure anymore. Refer to this post I made earlier. We need inputs from others on this.


----------



## CS9K

cfranko said:


> My memory and VRM temperatures are not being reported in HWinfo64 anymore with the 21.12.1 driver. Is it the same for you guys?


All present and accounted for in v7.14-4610 here.


----------



## Blameless

Trying to decide if this 6900 XT OCF sample is mediocre, or if my year old 6800 XT Red Dragon sample is just that good.

The 6800 XT will loop Time Spy and Time Spy Extreme test #2 forever, and pass indefinite loops of Night Raid test #2, with 1112mV set for the core in MPT (about 1056mV actual in Time Spy Extreme) and a 2577MHz max boost (~2510MHz actual) set in Wattman.

The 6900 XT OCF needs 50mV more (1162 set, 1100 actual) to pass the same tests at the same clocks, which translates into about 40 more watts at peak loads.

Temps on both are roughly the same, and quite high (nearing 100C hotspot), as the 6800 XT is in an ITX system and I'm calibrating the cooling on the OCF to what I'll have available for it in the same setup.

This is with the 21.12.1 drivers.


----------



## Bobb3rdown

lawson67 said:


> I've got my Sapphire Toxic Extreme on my new Bykski waterblock now with LM applied. It fits perfectly and hotspot temp is in the high 60s/low 70s @ 420 watts. It's also scoring higher in TS now, as expected with the cooler junction temp, and is passing TS @ 2850MHz, so overall I am very happy with Bykski's new waterblock for the Sapphire Extreme. It also fits the Sapphire Nitro+ apparently, as it uses the Toxic PCB.
> 
> View attachment 2536246
> View attachment 2536248


The Nitro+ Special Edition is the same, not the regular Nitro+. I have that block for my Toxic Air Cooled. It will start to hit 110C hotspot in TimeSpy around 360W. Can't wait to get my loop together. Just need a pump/res, tubing, and fittings.


----------



## tolis626

Blameless said:


> Hitting the power limiter can induce instability itself as this prompts rather extreme transient power spikes. The card will issue a bunch of halt states to get power down, then immediately go back to tripping the power limiter again. Somewhat counterintuitively, a borderline limiter can cause serious spikes in current demand.
> 
> This effect can be seen in power consumption tests of cards in reviews. For example:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Limiters keep average power consumption in check, which is important for cooling, but they also increase jitter considerably.
> 
> This is where tuning down everything else can help. If you have a thermal limit near 350w, another ~20w that can go to the GPU might keep it from hammering the limiter. If you can't reduce any secondary clocks/voltages further, see if you can get away with increasing the power limit.
> 
> As an aside, CPU limitations can have a similar effect in some tests. I'm getting crashes in Night Raid on my test bench that I don't get on my main system, even though the test bench has a better PSU, because the test bench also has a 3900X in it and can't keep a Navi21 at full GPU utilization in the test, while the 5800X in my main system generally can.


Well, I had nothing better to do, so I decided to go ahead and test your suggestions. Aaaaaaand nothing. Still exactly the same behavior. If I dare go above 2600MHz, TimeSpy crashes, usually immediately. I tried raising the power limit to 380W, lowering memory to 2100MHz (2088MHz actual), lowering SoC to 1.1V, MVDD to 0.8V, VDDIO to 1.3V, I even tried all of that combined, and it didn't make a shred of difference. Not a little bit. No difference in scores, nor in stability, nothing. If anything, after trying it for a bit, it even seems to be able to work at 2150MHz with the lowered voltages no problem, but that requires way more testing to confirm as actually stable. I'm at a loss. It seems I just have to accept that my silicon is just that bad.

The only silver lining (if I can see it as such) is that, during benching, when the card is under full load, it will only use up to 1.11-1.12V. I thought I had a good understanding of how it works, but it seems I was wrong. No matter what I did, without hitting the power limit, the voltage would not go higher. And it wasn't even extremely hot; it peaked at 71C edge and 96C hotspot, but usually hovered at 70/90 (yay, Greek winter, it's like northern European spring, temps are like 15C outside). I don't know what's causing the voltage to not go higher. Maybe @Godhand007 is right and it's just LLC. But if it's something else that I could tweak, then I do have quite a bit of headroom before needing to do MPT trickery. Any input is welcome.


Godhand007 said:


> Without a voltage increase, I was able to get 2550MHz stable with a max TDP of ~360-370W. This was working fine (tested for months). Given that our chips are similar, you should be able to achieve at least that.
> 
> On the point about _voltage increase being proportional to power consumption_, I am not sure anymore. Refer to this post I made earlier. We need inputs from others on this.


Well, if you read above, I actually think you may be right. I lowered other voltages (mem, SoC etc.) and raised the power limit to 380W. The card didn't overheat, but it still wouldn't go above 1.11-1.12V. Still, I may try pushing more voltage into the thing for a few benches when the weather finally decides to go properly cold (if it ever does), but I wouldn't use it like that for daily usage. I don't know if it's LLC or what's causing it, but it's driving me crazy. I never liked losing the silicon lottery, but it seems I have with this card.


----------



## Bobb3rdown

tolis626 said:


> Well, I had nothing better to do, so I decided to go ahead and test your suggestions. Aaaaaaand nothing. Still exactly the same behavior. If I dare go above 2600MHz, TimeSpy crashes, usually immediately. I tried raising the power limit to 380W, lowering memory to 2100MHz (2088MHz actual), lowering SoC to 1.1V, MVDD to 0.8V, VDDIO to 1.3V, I even tried all of that combined, and it didn't make a shred of difference. Not a little bit. No difference in scores, nor in stability, nothing. If anything, after trying it for a bit, it even seems to be able to work at 2150MHz with the lowered voltages no problem, but that requires way more testing to confirm as actually stable. I'm at a loss. It seems I just have to accept that my silicon is just that bad.
> 
> The only silver lining (if I can see it as such) is that, during benching, when the card is under full load, it will only use up to 1.11-1.12V. I thought I had a good understanding of how it works, but it seems I was wrong. No matter what I did, without hitting the power limit, the voltage would not go higher. And it wasn't even extremely hot; it peaked at 71C edge and 96C hotspot, but usually hovered at 70/90 (yay, Greek winter, it's like northern European spring, temps are like 15C outside). I don't know what's causing the voltage to not go higher. Maybe @Godhand007 is right and it's just LLC. But if it's something else that I could tweak, then I do have quite a bit of headroom before needing to do MPT trickery. Any input is welcome.
> 
> Well, if you read above, I actually think you may be right. I lowered other voltages (mem, SoC etc.) and raised the power limit to 380W. The card didn't overheat, but it still wouldn't go above 1.11-1.12V. Still, I may try pushing more voltage into the thing for a few benches when the weather finally decides to go properly cold (if it ever does), but I wouldn't use it like that for daily usage. I don't know if it's LLC or what's causing it, but it's driving me crazy. I never liked losing the silicon lottery, but it seems I have with this card.


Have you repasted the card yet? Might want to give that a shot, considering our cards are almost the same. It definitely helped my card out. Don't be scared to use too much, and make sure to cover the edges of the die. I used Gelid GC Extreme.


----------



## Blameless

tolis626 said:


> Well, I had nothing better to do, so I decided to go ahead and test your suggestions. Aaaaaaand nothing. Still exactly the same behavior. If I dare go above 2600MHz, TimeSpy crashes, usually immediately. I tried raising the power limit to 380W, lowering memory to 2100MHz (2088MHz actual), lowering SoC to 1.1V, MVDD to 0.8V, VDDIO to 1.3V, I even tried all of that combined and it didn't make a shred of diffference. Not a little bit. No difference in scores, not to stability, nothing. If anything, after trying it for a bit, it even seems to be able to work at 2150MHz with the lowered voltages no problem, but that requires way more testing to confirm as actually stable. I'm at a loss. It seems I just have to accept that my silicon is just that bad.


It does sound like you're near the limit of what your sample can do. Still, every little bit saved is beneficial in other ways, even if it doesn't get you a higher core clock.

My 6900 sample is also an unimpressive OCer, at least relative to my 6800 XT. I can run Time Spy at about 2650MHz actual (and Time Spy Extreme hit 480W with 1.2V set, ~1.12 actual), but Night Raid has issues around 2600MHz no matter what I try, and to keep the card within a reasonable power budget without using the limiter I'm stuck with the max clock set to 2.5GHz.



tolis626 said:


> Maybe @Godhand007 is right and it's just LLC. But if it's something else that I could tweak, then I do have quite a bit of headroom before needing to do MPT trickery. Any input is welcome.


That's vdroop. If you could override it, you wouldn't want to, not without much better cooling. Doing so would cause power consumption and temps to skyrocket and, just like a CPU with reduced droop, would allow voltage spikes way above target.

Even the best sample is going to have comparable droop at comparable load, by default. This is entirely normal, and generally desirable, behavior for these parts.


----------



## tolis626

Bobb3rdown said:


> Have you repasted the card yet? Might want to give that a shot, considering our cards are almost the same. It definitely helped my card out. Don't be scared to use too much, and make sure to cover the edges of the die. I used Gelid GC Extreme.


No, I haven't yet, but I am considering it. Truth be told, I'm kinda scared. Dunno why, but it's mostly the fact that this card is the single most expensive piece of computer hardware I've ever bought, and right now my financials aren't in the best condition they've ever been, so the small chance of needing to replace it prematurely has stayed my hand. I will get to it at some point, it's not like I haven't done it before. I just need my time. And money. Mostly money for that peace of mind. 


Blameless said:


> It does sound like you're near the limit of what your sample can do. Still, every little bit saved is beneficial in other ways, even if it doesn't get you a higher core clock.
> 
> My 6900 samples also an unimpressive OCer, at least relative to my 6800XT. I can run Time Spy at about 2650MHz actual (and Time Spy Extreme hit 480w with 1.2v set, ~1.12 actual), but Night Raid has issues around 2600MHz, no matter what I try, and to keep the card within a reasonable power budget without using the limiter I'm stuck with the max clock set to 2.5GHz.
> 
> 
> 
> That's vdroop. If you could override it, you wouldn't want to, not without much better cooling. Doing so would cause power consumption and temps to skyrocket and, just like a CPU with reduced droop, would allow voltage spikes way above target.
> 
> Even the best sample is going to have comparable droop at comparable load, by default. This is entirely normal, and generally desirable, behavior for these parts.


Well, I didn't think LLC would allow for that much vdroop. I always thought it was just the card balancing itself out when thermals or power were approaching their limits. Guess I got spoiled by the 5700XT. If you let it use enough power (and in its case, 220W and above was usually enough), what you set was basically what you got. So, I had set 2150MHz at 1.14V, and I got just above 2100MHz and dead on 1.14V. But I guess the current draw of the 6900XT wouldn't allow such behavior; such is the nature of the beast.

I will continue to fiddle around and see if I can figure anything out, but it seems at this point my best bets are either a) to repaste the card for reduced temps and noise (or a higher power budget) or b) suck it up and wait until I get my **** together with my work so that I can sell this card and get an XTXH. I do want to go custom water at some point, but this particular card just isn't worth it if I can max it out with the stock cooler because the silicon craps out where it does. Well, I guess there's worse things to go through.


----------



## Bobb3rdown

tolis626 said:


> No, I haven't yet, but I am considering it. Truth be told, I'm kinda scared. Dunno why, but it's mostly the fact that this card is the single most expensive piece of computer hardware I've ever bought, and right now my financials aren't in the best condition they've ever been, so the small chance of needing to replace it prematurely has stayed my hand. I will get to it at some point, it's not like I haven't done it before. I just need my time. And money. Mostly money for that peace of mind.
> 
> Well, I didn't think LLC would allow for that much vdroop. I always thought it was just the card balancing itself out when thermals or power were approaching their limits. Guess I got spoiled by the 5700XT. If you let it use enough power (and in its case, 220W and above was usually enough), what you set was basically what you got. So, I had set 2150MHz at 1.14V, and I got just above 2100MHz and dead on 1.14V. But I guess the current draw of the 6900XT wouldn't allow such behavior; such is the nature of the beast.
> 
> I will continue to fiddle around and see if I can figure anything out, but it seems at this point my best bets are either a) to repaste the card for reduced temps and noise (or a higher power budget) or b) suck it up and wait until I get my **** together with my work so that I can sell this card and get an XTXH. I do want to go custom water at some point, but this particular card just isn't worth it if I can max it out with the stock cooler because the silicon craps out where it does. Well, I guess there's worse things to go through.


It's an easy cooler to pull. I didn't even damage any thermal pads. 8 screws, including the 4 die screws, 2 small Phillips heads on the IO shield, and the 2 that go through the backplate and into the fan shroud at the opposite end from the IO shield. Along with the 3 electrical connectors. I was super nervous when I did mine as well.


----------



## tolis626

Bobb3rdown said:


> It's an easy cooler to pull. I didn't even damage any thermal pads. 8 screws, including the 4 die screws, 2 small Phillips heads on the IO shield, and the 2 that go through the backplate and into the fan shroud at the opposite end from the IO shield. Along with the 3 electrical connectors. I was super nervous when I did mine as well.


Thanks for sharing, it gives me an idea of what to expect.

It's not that I'm scared of the hardware, I've done this plenty of times. I'm scared because of the card's cost. And when I'm scared, my butterfingers get sweaty and I break stuff. I'll get around to it some day. Luckily, I already have both Kryonaut and GC Extreme, so no need to go around buying stuff.


----------



## J7SC

tolis626 said:


> Thanks for sharing, it gives me an idea of what to expect.
> 
> It's not that I'm scared of the hardware, I've done this plenty of times. I'm scared because of the card's cost. And when I'm scared, my butterfingers get sweaty and I break stuff. I'll get around to it some day. Luckily, I already have both Kryonaut and GC Extreme, so no need to go around buying stuff.


FYI, I use both Kryonaut and Gelid GC Extreme (along w/ MX5 etc.) for my builds. For this GPU die, others here as well as I have recommended the Gelid GC Extreme, as it is a bit thicker and doesn't pump out very quickly.

Also as already posted, make sure to cover the whole die, including the edges.


----------



## tolis626

J7SC said:


> FYI, I use both Kryonaut and Gelid GC Extreme (along w/ MX5 etc.) for my builds. For this GPU die, others here as well as I have recommended the Gelid GC Extreme, as it is a bit thicker and doesn't pump out very quickly.
> 
> Also as already posted, make sure to cover the whole die, including the edges.


Yup, I know that. Which is a good thing, because I was going to use Kryonaut as it's, technically, better. But having read this whole thread, I now understand why I had problems with my old 390X when repasting with Kryonaut. At first I would get amazing temps, about 2C better than with GC Extreme, but they would get worse and worse over time and I didn't know why. Now I do.

As for covering the whole die, yeah, I've learned the hard way to not leave anything exposed.


----------



## Blameless

tolis626 said:


> Well, I didn't think LLC would allow for that much vdroop.


Your terminology is a bit confused here. LLC is load-line calibration, which implies a manipulation of the default load-line (which is a synthetic resistance value that dictates voltage droop depending on how much current is being pulled), not the load-line itself.

I'm not aware of any good way to adjust droop in software for these parts. The linear droop settings in MPT don't seem to do anything on my parts, at least as far as I am able to tell...same voltage, power, and temps no matter what I set here. Others seem to have had some degree of success manipulating DcTol (DC tolerance) and DcBtc (boot time calibration), but the effect on droop appears to be very subtle. That said, I did get about a 25MHz increase in stable core clock for a given voltage by disabling DcBtc entirely.
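For the droop mechanism itself, the arithmetic is simple: the regulator delivers roughly V_set minus I times R_LL, with R_LL the synthetic load-line resistance. A minimal sketch, assuming a made-up R_LL of 0.25 milliohms (actual Navi21 load-line values aren't public):

```python
# Back-of-the-envelope vdroop estimate: V = V_set - I * R_LL, where R_LL is
# a synthetic load-line resistance. R_LL below is an assumed value chosen
# for illustration, not a real Navi21 figure.

V_SET = 1.175    # volts requested in Wattman/MPT
R_LL = 0.25e-3   # ohms, assumed synthetic load-line resistance

def droop_voltage(core_power_w: float) -> float:
    """Estimate the delivered core voltage under a given core power draw.

    Since current depends on the drooped voltage (I = P / V), solve the
    fixed point V = V_SET - (P / V) * R_LL by simple iteration.
    """
    v = V_SET
    for _ in range(20):  # contraction mapping; converges in a few steps
        v = V_SET - (core_power_w / v) * R_LL
    return v

# With ~280 W on the core rail, droop lands near the ~1.11-1.12 V actually
# observed in the thread (given our assumed R_LL).
print(round(droop_voltage(280), 3))  # prints 1.112
```

The point of the model: the droop is a function of current, not of the sample's quality, which is why even a golden chip shows a comparable gap between set and actual voltage at comparable load.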

If you really want to see what more voltage will do, you can try the temperature-dependent vmin trick via MPT.



tolis626 said:


> I will continue to fiddle around and see if I can figure anything out, but it seems at this point my best bets are either a) to repaste the card for reduced temps and noise (or a higher power budget) or b) suck it up and wait until I get my **** together with my work so that I can sell this card and get an XTXH. I do want to go custom water at some point, but this particular card just isn't worth it if I can max it out with the stock cooler because the silicon craps out where it does. Well, I guess there's worse things to go through.


No guarantee an XTXH would be any better. My particular sample (an ASRock 6900 XT OC Formula with XTXH silicon), which I'm slightly disappointed with, only does about 75MHz better than your part with the same 1200mV set, and needs more power to do it. Past ~2500MHz is benching only territory for this card; it's just too hot and loud.










Same settings won't make it through a Night Raid bench.

Would do better under water, of course, and there are surely better samples out there, but probably a fair number of worse ones as well.


----------



## ZealotKi11er

Blameless said:


> Your terminology is a bit confused here. LLC is load-line calibration, which implies a manipulation of the default load-line (which is a synthetic resistance value that dictates voltage droop depending on how much current is being pulled), not the load-line itself.
> 
> I'm not aware of any good way to adjust droop in software for these parts. The linear droop settings in MPT don't seem to do anything on my parts, at least as far as I am able to tell...same voltage, power, and temps no matter what I set here. Others seem to have had some degree of success manipulating DcTol (DC tolerance) and DcBtc (boot time calibration), but the effect on droop appears to be very subtle. That said, I did get about a 25MHz increase in stable core clock for a given voltage by disabling DcBtc entirely.
> 
> If you really want to see what more voltage will do, you can try the temperature-dependent vmin trick via MPT.
> 
> 
> 
> No guarantee an XTXH would be any better. My particular sample (an ASRock 6900 XT OC Formula with XTXH silicon), which I'm slightly disappointed with, only does about 75MHz better than your part with the same 1200mV set, and needs more power to do it. Past ~2500MHz is benching-only territory for this card; it's just too hot and loud.
> 
> Same settings won't make it through a Night Raid bench.
> 
> Would do better under water, of course, and there are surely better samples out there, but probably a fair number of worse ones as well.


On average XTXH > XTX or XT.


----------



## EastCoast

tolis626 said:


> Yup, I know that. Which is a good thing, because I was going to use Kryonaut as it's, technically, better. But after having read this whole thread, it explains why I had problems with my old 390x when repasting with Kryonaut. At first I would get amazing temps, about 2C better than with GC Extreme, but they would get worse and worse over time and I didn't know why. Now I do.
> 
> As for covering the whole die, yeah, I've learned the hard way to not leave anything exposed.


Kryonaut does suffer from pump-out on GPU dies. It's not an issue on a CPU IHS, though. 
I fixed this with a two-step process, and this is the only way I can get it to work.
Step 1: Apply a layer of Kryonaut using the spatula provided (if you have that version). If you don't have a spatula, use something else. 
Step 2: Apply a second layer on top of the first. A simple "X" should be enough. It doesn't have to be a thick layer. 

That is the only thing that has helped me keep temps low. It's worked wonders on both cards so far. Using any other thermal paste does cause temps to be a bit higher. I just couldn't live with that, ha!


----------



## J7SC

Folks who $prang for an XTXH are likely to have higher expectations re. top-end performance values to begin with. In my XTX's case, I just wanted a 6900XT w/ 3x8-pin for mostly work-related stuff, as I already have a game/bench setup with a custom vbios (up to 1kW nominal). 

I had the choice of 'one of one' 6900XT in the store, at a decent (= old MSRP) price to boot. That it turned out to be a very decent performer is a bonus...I was going to water-cool it anyways because I do that with most of my GPUs, and my 6900XT's air cooler also had 3x fans turning at up to 3800 rpm with the stock vbios / max clocks / no MPT PL. The water-cooling helped a fair bit with sustained high effective clocks (apart from helping my audio sanity) as well as VRAM temps.

In general, with the latest gens of GPUs that have sophisticated boost/throttle algorithms with multiple input variables to extract the most performance right out of the box, the likelihood of more 'zig-zag' / 'jitter' is built in. IMO, adding extra cooling and adding MPT PL are two steps one can do to impact said algorithms. Then backing off 'oc clocks' until one gets a nice, mostly steady line on the frequency in fav daily apps / games is one way to proceed...max short-term fps benching is a different animal, obviously.


----------



## Blameless

Put ~500W through this OCF earlier while trying 1225mV (about 1170 actual) at 2775MHz (~2725 actual) in Time Spy Extreme on the stock cooler. Almost made it through, but then I hit 115C on the hotspot and the bench crashed. Card seems fine, but I don't think that's something I'll try again until I've got a block on the card, which I probably won't do until I retire it.

This sample undervolts pretty well, and while I'm slightly disappointed in the upgrade over my good 6800 XT, I've concluded that it will fit in the system I'm targeting and that I can cool it at settings that will give me about 7-10% better minimum frame rates at 4K. So it's still an upgrade for my SFF gaming box, and my HTPC will do fine with the 2.55GHz 6800 XT.



ZealotKi11er said:


> On average XTXH > XTX or XT.


Yes, but there is still a huge overlap in potential, and someone in tolis626's position has to weigh the chances of getting a significantly better part vs. the hassle involved.



J7SC said:


> I had the choice of 'one of one' 6900XT in the store, at a decent (= old MSRP) price to boot.


Both of my personal Navi21 samples were purchases of opportunity...the only things in stock at times I was looking for a card. I still ended up paying around 1200 USD for the 6800XT and almost 1650 for the 6900 XT OCF.


----------



## LtMatt

Blameless said:


> I'm not aware of any good way to adjust droop in software for these parts. The linear droop settings in MPT don't seem to do anything on my parts, at least as far as I am able to tell...same voltage, power, and temps no matter what I set here. Others seem to have had some degree of success manipulating the DcTol (DC tolerance) and DcBtc (DC bus tie contactor), but the effect on droop appears to be very subtle. That said, I did get about a 25MHz increase in stable core clock for a given voltage by disabling DcBtc entirely.


What exactly does DcBtc do, and what are the effects of disabling it?


----------



## tolis626

Blameless said:


> Your terminology is a bit confused here. LLC is load-line calibration, which implies a manipulation of the default load-line (which is a synthetic resistance value that dictates voltage droop depending on how much current is being pulled), not the load-line itself.
> 
> I'm not aware of any good way to adjust droop in software for these parts. The linear droop settings in MPT don't seem to do anything on my parts, at least as far as I am able to tell...same voltage, power, and temps no matter what I set here. Others seem to have had some degree of success manipulating the DcTol (DC tolerance) and DcBtc (DC bus tie contactor), but the effect on droop appears to be very subtle. That said, I did get about a 25MHz increase in stable core clock for a given voltage by disabling DcBtc entirely.
> 
> If you really want to see what more voltage will do you can try the temperature dependent vmin trick via MPT.
> 
> 
> 
> No guarantee an XTXH would be any better. My particular sample (an ASRock 6900 XT OC Formula with XTXH silicon), which I'm slightly disappointed with, only does about 75MHz better than your part with the same 1200mV set, and needs more power to do it. Past ~2500MHz is benching-only territory for this card; it's just too hot and loud.
> 
> Same settings won't make it through a Night Raid bench.
> 
> Would do better under water, of course, and there are surely better samples out there, but probably a fair number of worse ones as well.


Yeah, I make the mistake of using LLC and vdroop interchangeably sometimes. I know it's wrong, I know what both are, but I still do it when I'm not thinking about it much. Just ignore it.

I will take a look at the settings you mentioned, but at this point I'm not expecting much. I was just expecting the card to apply more voltage if no power or thermal limit was hit. Not zero droop, but I dunno, 1.11-1.12V instead of 1.175V? 5% droop seems excessive to me for a modern card. Then again, with the 390x I used to get like 0.1V of droop in suicide runs, so I guess it's not unheard of.

As for TempDependentVMin, I'd be ok running a bench or two with it, but I wouldn't use it 24/7. I mean, running a very real risk of killing the card prematurely for what? 1-2% gains? Which I probably won't be able to cool anyway? Seems pointless. Then again, I've always been chasing that pointless 1%.


EastCoast said:


> Kryonaut does suffer from pump-out on GPU dies. It's not an issue on a CPU IHS, though.
> I fixed this with a two-step process, and this is the only way I can get it to work.
> Step 1: Apply a layer of Kryonaut using the spatula provided (if you have that version). If you don't have a spatula, use something else.
> Step 2: Apply a second layer on top of the first. A simple "X" should be enough. It doesn't have to be a thick layer.
> 
> That is the only thing that has helped me keep temps low. It's worked wonders on both cards so far. Using any other thermal paste does cause temps to be a bit higher. I just couldn't live with that, ha!


A ha! So there is a way to get Kryonaut to work!

Yes, I do have the kit with the spatula included. When I end up repasting, I think I'll go with Kryonaut at first. And if it doesn't work, I already have GC Extreme anyway, so no harm done. If I had none of them, I'd buy GC Extreme for the peace of mind that it'll work for sure. But as it stands, I have room to experiment.

Thanks mate!


----------



## ChamberTech

Currently running a 6900 OC Formula on the air cooler at 2700/2800 core and 2150 with fast timings on the VRAM. Currently temperature limited even with a repaste using Kryonaut on the core, hitting 110C hotspot. Going to use Conductonaut soon, but also planning on changing the thermal pads. Has anyone swapped the thermal pads, and what size did you use?


----------



## Blameless

LtMatt said:


> What exactly does DcBtc do, and what are the effects of disabling it?


I only have a very vague idea of what it does. I can't find any documentation on this implementation of it, and don't have the tools to examine exactly what it's doing, so I'm left with extrapolating from much more generic uses of the terminology.

A bus tie is a relay that allows nominally independent electrical circuits to connect together, so that one voltage source can power them all, for example. It's my (fairly wild) speculation that DcBtc, on these graphics cards, is some sort of VRM or voltage demand feedback system and the values we can control in MPT are the ranges at which they can engage/disengage and the granularity of the voltage steps involved. I've had some fairly reproducible successes with disabling the feature or tightening up the tolerances on them. Maybe it has to do with droop control, phase shedding, or filtering, I don't know.

Someone at Hardwareluxx also did some testing with them, but results were fairly inconclusive:

AMD - MorePowerTool MPT Beta-Programm – neue Features, die Community testet
www.igorslab.de

My 6800 XT and 6900 XTXH undervolts both seem to like a DcTol and GPU DcBtc of 37 (mV), while the SoC likes a bit less (I'm assuming these are supposed to be 6.25mV steps, rounded down) at 31.

So yeah, I don't know exactly what it's doing, and its effect is subtle, but it's doing something.

*Edit:* reference this and ZealotKi11er's post below.
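If the 6.25mV-step guess above holds, the representable values can be enumerated. A quick sketch; the step size and the round-down behavior are my assumption here, not anything documented:

```python
import math

# Assumption: DcTol/DcBtc fields are quantized in 6.25mV steps,
# rounded down to whole mV (speculation, not documented anywhere).
def dcbtc_grid(n_steps):
    return [math.floor(i * 6.25) for i in range(n_steps)]

grid = dcbtc_grid(10)
print(grid)  # [0, 6, 12, 18, 25, 31, 37, 43, 50, 56]
```

The 31 and 37 figures mentioned above both land on this grid, which is at least consistent with the rounding guess.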


----------



## ZealotKi11er

Blameless said:


> I only have a very vague idea of what it does. I can't find any documentation on this implementation of it, and don't have the tools to examine exactly what it's doing, so I'm left with extrapolating from much more generic uses of the terminology.
> 
> A bus tie is a relay that allows nominally independent electrical circuits to connect together, so that one voltage source can power them all, for example. It's my (fairly wild) speculation that DcBtc, on these graphics cards, is some sort of VRM or voltage demand feedback system and the values we can control in MPT are the ranges at which they can engage/disengage and the granularity of the voltage steps involved. I've had some fairly reproducible successes with disabling the feature or tightening up the tolerances on them. Maybe it has to do with droop control, phase shedding, or filtering, I don't know.
> 
> Someone at Hardwareluxx also did some testing with them, but results were fairly inconclusive:
> 
> AMD - MorePowerTool MPT Beta-Programm – neue Features, die Community testet
> www.igorslab.de
> My 6800 XT and 6900 XTXH undervolts both seem to like a DcTol and GPU DcBtc of 37 (mV), while the SoC likes a bit less (I'm assuming these are supposed to be 6.25mV steps, rounded down) at 31.
> 
> So yeah, I don't know exactly what it's doing, and its effect is subtle, but it's doing something.


DCBTC is DC Boot Time Calibration. It's voltage that is reserved for GPU aging. It is not going to do anything for overclocks.


----------



## Blameless

ZealotKi11er said:


> DCBTC is DC Boot Time Calibration.


Thanks for the correction.



ZealotKi11er said:


> It's voltage that is reserved for GPU aging. It is not going to do anything for overclocks.


It has a demonstrable impact on the voltage settings I need to pass loops of Night Raid. So, even if it's not doing what I thought it was, it's still doing something.

Reading more about it, it seems clear that it would be skewing GPU and SoC voltages, which would obviously impact OCing.

https://www.amd.com/system/files/documents/polaris-whitepaper.pdf -- page 12-13.

At least now that I know what it's doing, I'll have more than pure trial and error to base my tuning on.


----------



## LtMatt

Blameless said:


> It has a demonstrable impact on the voltage I need to pass loops of Night Raid. So, even if it's not doing what I thought it was, it's still doing something.


What did you do exactly, just untick it?


----------



## Blameless

LtMatt said:


> What did you do exactly, just untick it?


That's the first thing I tried, which helped. I then dabbled with reduced values.

In hindsight it makes sense that the higher-end parts with higher default voltage ranges would have a wider spread for the maximum boot time calibration and there may be some sweet spot to it.


----------



## Blameless

@CS9K Have you played with the linear droop settings recently? What precise behavior did you see when increasing the voltage values? A direct voltage decrease at the clock speed specified? Did you have any success altering the clock values to shift the droop curve?

I'm getting some results that are contradictory to the testing done on Igor's forum back in July-August. Namely, I have two parts with radically different linear droop tables in MPT that produce the same voltage readings at the same loads and altering the tables doesn't seem to do anything in either direction.

@LtMatt I'm testing an increased DcTol now, with promising results. Going to take a while to be sure it's helping, but if that turns out to be the case it's possible that too low a tolerance setting is crashing loads with extreme transients.



tolis626 said:


> Not zero droop, but I dunno, 1.11-1.12V instead of 1.175V? 5% droop seems excessive to me for a modern card. Then again, with the 390x I used to get like 0.1V of droop in suicide runs, so I guess it's not unheard of.


GPUs have been increasing in power density and absolute power requirements faster than VRM quality has been improving. Honestly, 100+mV of droop at the top end is not particularly surprising.

Anyway, I was doing some reading on the AMD SMU driver, and there are LoadLineResistanceGfx and LoadLineResistanceSoC values. So, once someone figures out how those map to the SPPT, we might be able to tune the loadline directly.
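For anyone wondering what tuning that load-line would mean in practice: droop is just load current times the synthetic resistance, so if those SMU values really are in mOhm the arithmetic is trivial. A sketch with purely illustrative numbers (the current and resistance figures are made up, not measured):

```python
# Sketch of synthetic load-line droop: V_droop = I_load * R_loadline.
# mOhm * A = mV, so everything stays in millivolts.
def drooped_voltage_mv(v_set_mv, i_load_a, r_ll_mohm):
    return v_set_mv - i_load_a * r_ll_mohm

# Illustrative: a 1175mV set point, 300A of GFX current,
# and a 0.5 mOhm synthetic load-line:
print(drooped_voltage_mv(1175, 300, 0.5))  # 1025.0, i.e. 150mV of droop
```

Numbers in that range are consistent with the 100+mV of droop people are reporting on these cards.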



tolis626 said:


> As for TempDependentVMin, I'd be ok running a bench or two with it, but I wouldn't use it 24/7. I mean, running a very real risk of killing the card prematurely for what? 1-2% gains? Which I probably won't be able to cool anyway? Seems pointless. Then again, I've always been chasing that pointless 1%.


All of this applies to reducing droop as well.


----------



## CS9K

Blameless said:


> @CS9K Have you played with the linear droop settings recently? What precise behavior did you see when increasing the voltage values? A direct voltage decrease at the clock speed specified? Did you have any success altering the clock values to shift the droop curve?
> 
> I'm getting some results that are contradictory to the testing done on Igor's forum back in July-August. Namely, I have two parts with radically different linear droop tables in MPT that produce the same voltage readings at the same loads and altering the tables doesn't seem to do anything in either direction.


I have not played with linear droop in MPT at all, as your (and others') observations have all come to the same conclusion: it's something RDNA2 GPUs use internally, and changes in MPT don't make a tangible difference when overclocking the top end. There _is_ an application where it does make a difference: undervolting when limiting one's max clock below the 2600MHz LLC-start range, but only a few people bother with that.

Likewise, changing DCBTC and other settings of the sort hasn't made an appreciable difference for me with any settings; the card behaves the same regardless. It seems that a lot of features and settings in the SPPT that may have changed things in the past do not do so with RDNA2 GPUs. And that's okay, they don't need to. Once we eventually get BIOS flash control for modded BIOS files, life will be good. Until then: it is what it is.


----------



## CS9K

Blameless said:


> @CS9K Have you played with the linear droop settings recently? What precise behavior did you see when increasing the voltage values? A direct voltage decrease at the clock speed specified? Did you have any success altering the clock values to shift the droop curve?
> 
> I'm getting some results that are contradictory to the testing done on Igor's forum back in July-August. Namely, I have two parts with radically different linear droop tables in MPT that produce the same voltage readings at the same loads and altering the tables doesn't seem to do anything in either direction.
> 
> @LtMatt I'm testing an increased DcTol now, with promising results. Going to take a while to be sure it's helping, but if that turns out to be the case it's possible that too low a tolerance setting is crashing loads with extreme transients.
> 
> 
> 
> GPUs have been increasing in power density and absolute power requirements faster than VRM quality has been improving. Honestly, 100+mV of droop at the top end is not particularly surprising.
> 
> Anyway, I was doing some reading on the AMD SMU driver, and there are LoadLineResistanceGfx and LoadLineResistanceSoC values. So, once someone figures out how those map to the SPPT, we might be able to tune the loadline directly.
> 
> 
> 
> All of this applies to reducing droop as well.


Ohhhh I think I see what this was about. Temperature Dependent Vmin in regards to Sylwester's post over on Hardwareluxx, right?

Hm, I may mess around with things, knowing that Deep Sleep turned off is why the voltage goes nuts even at partial/light loads.


----------



## J7SC

@CS9K ...on an entirely different matter, have you tried the F18, or better yet that Volocopter, in FS2020 yet? I'm having a blast landing the Volocopter in places I shouldn't land 

Screenshot below w/ DX12, HDR so colours are a bit off


----------



## CS9K

J7SC said:


> @CS9K ...on an entirely different matter, have you tried the F18, or better yet that Volocopter, in FS2020 yet? I'm having a blast landing the Volocopter in places I shouldn't land
> 
> Screenshot below w/ DX12, HDR so colours are a bit off
> View attachment 2536615
> 
> View attachment 2536616


I have not. I'm a bit disillusioned with FS2020 right now. CPU threading performance took a pretty steep regression with Sim Update 7, to the point where I run the game with photogrammetry disabled. Kind of takes the fun out of it. And it sucks that, from what I can tell, the hotfix will not address said threading issues.


----------



## CS9K

CS9K said:


> Ohhhh I think I see what this was about. Temperature Dependent Vmin in regards to Sylwester's post over on Hardwareluxx, right?
> 
> Hm, I may mess around with things, knowing that Deep Sleep turned off is why the voltage goes nuts even at partial/light loads.


Follow up: No change in behavior on my end. I'm leaving TDV alone for now.


----------



## Blameless

CS9K said:


> There _is_ an application where it does make a difference: undervolting when limiting one's max clock below the 2600MHz LLC-start range, but only a few people bother with that.


This is exactly what I'm doing. I'm trying to make this 6900 XT OCF unconditionally stable within the power and temperature constraints of an ITX system, and this means 2.5-2.6GHz on this sample.

My general procedure for this is finding tests that reproducibly fail and changing whatever is required to get them to pass, while staying within the power limit I know I can cool, even in the most power hungry of other tests (Time Spy Extreme test #2 is my current benchmark for anything that vaguely resembles a real world load, with regards to power).

I'm not convinced the linear droop is even being parsed on my setups, and from looking at the SMU documentation, it seems a flag needs to be set for it to be used.



CS9K said:


> Likewise, changing DCBTC and other settings of the sort hasn't made an appreciable difference for me with any settings; the card behaves the same regardless.


On my (nearly year-old) 6800 XT, DcBtc at stock vs. disabled is the difference between crashing after two or three loops of Night Raid at 2.575GHz max boost, and passing hundreds of loops at 2.6GHz. Its effect is less dramatic, but still demonstrable, on my two-week-old 6900 XT OCF.

Given the description for it in the Polaris architecture paper, which states...


Spoiler



When the GPU boots up, the power management unit performs boot time calibration,
which measures the voltage that is delivered to the GPU, compared to the voltage
measured during the test and binning process. For example, it is fairly common for a
voltage regulator to output 1.15V, but the GPU only receives 1.05V due to the system
design. In the Polaris architecture, the power management unit can correct for this static
difference very precisely, rather than requesting a more conservative (i.e. higher) voltage
that would waste power. As a result, platform differences (e.g., higher quality voltage
regulators) will translate into higher frequencies and lower power consumption.

In addition, the boot-time calibration optimizes the voltage to account for aging and
reliability. Typically, as silicon ages the transistors and metal interconnects degrade and
need a higher voltage to maintain stability at the same frequency. The traditional solution
to this problem is to specify a voltage that is sufficiently high to guarantee reliable
operation over 3-7 years under worst case conditions, which, over the life of the
processor, can require as much as 6% greater power. Since the boot-time calibration uses
aging-sensitive circuits, it automatically accounts for any aging and reliability issues. As a
result, Polaris-based GPUs will run at a lower voltage or higher frequency throughout the
life time of the product, delivering more performance for gaming and compute workloads.



...I strongly suspect boot time calibration is setting a small negative voltage offset so that the part will use less power early in its life, then gradually ramping voltage back up as the part accumulates wear and degrades. This may well apply more readily to lighter loads, which may be why I'm only seeing measurable effects in Night Raid, so far.

*Edit:* Upon further testing, I may have had this backwards. The calibration seems to be applying a positive voltage offset that is more pronounced on my older part. I've also found a quick way to guesstimate optimal values via the Auto OC feature, as the figure it reports for suggested GPU OC, inaccurate as it may be, does seem to scale with the level of DcBtc applied. The sweet spot on my 6800 XT is less than stock, but more than nothing/disabled.

*Edit #2:* I think my initial interpretation was correct...the DcBtc is likely applying a negative offset itself, but the guard band exists as margin to counteract this. This is probably why disabling DcBtc produces better results than stock.

Regardless, it was clearly intended for the stock dynamic clocks, and I'll probably leave it off on all of my parts, as I'd rather take care of the guardbanding myself. The way I see it, if the AMD solution were ideal, they wouldn't be overvolting these parts by 75-100mV out of the box.
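To make the whitepaper's example concrete, here's a toy sketch of the static part of that calibration. Only the 1150mV/1050mV pair comes from the whitepaper; the 130mV worst-case figure is made up for illustration:

```python
# Toy model of boot-time calibration per the Polaris whitepaper:
# instead of padding every voltage request with a worst-case margin,
# the PMU measures the actual platform drop at boot and pads by that.
def calibrated_request_mv(v_die_needs_mv, measured_drop_mv):
    return v_die_needs_mv + measured_drop_mv

def naive_request_mv(v_die_needs_mv, worst_case_drop_mv):
    return v_die_needs_mv + worst_case_drop_mv

# Whitepaper example: the regulator outputs 1150mV but the die sees
# 1050mV, a measured 100mV drop. Against a hypothetical 130mV
# worst-case guard band, calibration saves 30mV on every request:
saved = naive_request_mv(1050, 130) - calibrated_request_mv(1050, 100)
print(saved)  # 30
```

That saved margin is exactly the guard band the DcBtcGb value seems to control, which is why zeroing or shrinking it can shift the effective voltage.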



CS9K said:


> Ohhhh I think I see what this was about. Temperature Dependent Vmin in regards to Sylwester's post over on Hardwareluxx, right?
> 
> Hm, I may mess around with things, knowing that Deep Sleep turned off is why the voltage goes nuts even at partial/light loads.


Not specifically what I was thinking of at the time of my quoted post, but that is another area I was curious about, given the odd behavior I observed when messing with TDV while doing high-clock benching runs.

Aside from the linear droop, there is clearly a load-line at work, which the SMU documentation also mentions. If this is parsed by Navi2x, setting that value (in mOhm) should be the best way to control droop, essentially giving us true LLC.


----------



## Blameless

Some more observations regarding DcBtc and related settings:


- Altering the min/max variables will consistently (within the same part) change the Automatic Tuning Overclock GPU clock speed reported.
- The clock speed reported above may or may not be representative of the maximum clock the part is capable of at the settings used (my 6800 XT can do almost 200MHz more than auto tune reports, but the auto tune value is essentially spot on for my 6900 XT in the worst-case tests), but the direction and magnitude of these changes are representative (e.g. if auto tune goes from 2399 to 2415, my max stable Night Raid clock also invariably goes up). _Edit:_ The above are temperature sensitive, so rather less useful than I'd hoped.
- Optimal values are very different between my two Navi21 parts. My heavily used 6800 XT likes a min GFX DcBtc of zero and a maximum of 37, versus the 0/44 that is default. My new 6900 XT(XH) likes a minimum GFX DcBtc of at least 6, and a maximum of 31-43, versus the 0/58 of stock.
- The changes here do not seem to be doing anything in maximum-load tests. FurMark and Time Spy Extreme report identical load voltages, identical power consumption, identical temps, and produce identical scores, with identical stability, compared to stock DcBtc values.
- The opposite is true of low-load/high-fps tests, with Night Raid being the most obvious case I've yet encountered. Voltage and power seem to be boosted slightly here, along with stability.

So far, it appears that DcBtc can be used to mitigate borderline stability/overboosting issues at lower loads, without harming higher-load OCs.

_Update:_ The last value (DcBtcGb) is apparently the guardband, or margin for the calibrated voltage. I'm experimenting with increasing this now.


----------



## Godhand007

Just an FYI to those who might be interested in TDVmin; these are the settings that I am running on my reference RX6900XT 24/7. They are on the extreme end but I wanted those 2700MHz plus clocks since last year when I got this card (around this time). I will make sure to post if my card goes kaput so that at least others benefit from my experience. If all goes well then others have an option to try this as well.

These settings are stable for GT2 (60-70 loops) and a bunch of Halo Infinite multiplayer with lots of randomness.
Voltage stays around ~1.230V (HWiNFO) during 3D loads.
Hotspot temps are under 110C for proper gaming sessions. It goes above that when I am running GT2 loops.
Also, I noticed one weird behavior with 3DMark Port Royal. It crashes with the aforementioned settings if the power target is low. If I increase it beyond 450W, it works properly at the same clocks.


----------



## cfranko

Godhand007 said:


> Just an FYI to those who might be interested in TDVmin; these are the settings that I am running on my reference RX6900XT 24/7. They are on the extreme end but I wanted those 2700MHz plus clocks since last year when I got this card (around this time). I will make sure to post if my card goes kaput so that at least others benefit from my experience. If all goes well then others have an option to try this as well.
> 
> These settings are stable for GT2 (60-70 loops) and a bunch of Halo Infinite multiplayer with lots of randomness.
> Voltage stays around ~1.230V (HWiNFO) during 3D loads.
> Hotspot temps are under 110C for proper gaming sessions. It goes above that when I am running GT2 loops.
> 
> 
> View attachment 2536718
> View attachment 2536719
> View attachment 2536720
> 
> 
> Also, noticed one weird behavior with 3dMark Port Royal. It crashes with the aforementioned settings if the power target is low. If I increase it beyond 450W it works properly with the same clocks.


I think it isn't worth the risk you're taking for 100-200MHz, which makes like a 5 fps difference, also considering the reference card has the worst PCB of all.


----------



## Godhand007

cfranko said:


> I think it isn't worth the risk you're taking for 100-200MHz, which makes like a 5 fps difference, also considering the reference card has the worst PCB of all.


But I want those 5 FPS. It is worth it to me as I want the most out of my GPU, but I agree it's not generally advisable.


----------



## J7SC

Godhand007 said:


> But I want those 5 FPS . It is worth it to me as I want the most out of my GPU but I agree it's not generally advisable.


With the voltages and temps shown, you might want to water-cool your card, no? I know it's not for everyone, but 1.256V on air would make me a tad nervous.


----------



## Godhand007

J7SC said:


> With the voltages and temps shown, you might want to water-cool your card, no ? I know it's not for everyone, but 1.256v on air would make me a tad nervous.


That 1.256V is a transient which I never see during voltage monitoring. About water cooling: too many hassles of one kind or another. One thing I will say regarding temps: I have seen people being overly cautious about them, at least from my perspective. I have been playing around with GPUs since 2006-07 and never saw one go bad due to high temps as long as they stayed under the manufacturer's rated threshold.


----------



## ZealotKi11er

J7SC said:


> With the voltages and temps shown, you might want to water-cool your card, no ? I know it's not for everyone, but 1.256v on air would make me a tad nervous.


I think it depends on the GPU. If you have a slow XTX (low leakage) you can apply more voltage. If you have a fast XTXH, running 1.25-1.3V on it will require cold air + water cooling.


----------



## Godhand007

ZealotKi11er said:


> I think it depends on the GPU. If you have a slow XTX (low leakage) you can apply more voltage. If you have a fast XTXH, running 1.25-1.3V on it will require cold air + water cooling.


Could you explain these terms, slow XTX (low leakage) / fast XTXH? Are you talking about clocks? Cause 2750MHz plus on my card is up there with XTXH models.


----------



## jonRock1992

Godhand007 said:


> Just an FYI to those who might be interested in TDVmin; these are the settings that I am running on my reference RX6900XT 24/7. They are on the extreme end but I wanted those 2700MHz plus clocks since last year when I got this card (around this time). I will make sure to post if my card goes kaput so that at least others benefit from my experience. If all goes well then others have an option to try this as well.
> 
> These settings are stable for GT2 (60-70 loops) and a bunch of Halo Infinite multiplayer with lots of randomness.
> Voltage stays around ~1.230V (HWiNFO) during 3D loads.
> Hotspot temps are under 110C for proper gaming sessions. It goes above that when I am running GT2 loops.
> 
> 
> View attachment 2536718
> View attachment 2536719
> View attachment 2536720
> 
> 
> Also, noticed one weird behavior with 3dMark Port Royal. It crashes with the aforementioned settings if the power target is low. If I increase it beyond 450W it works properly with the same clocks.


I've been running 1287mV in my games @2875MHz with a 450W PL. I only game for 1 or 2 hours a few days a week, but everything's been good so far. My temps are really good for this config, so that's not worrying me (hotspot usually in the low 60s in demanding games). I'll let you guys know if anything happens lol. Been running this config for over a month. I definitely don't recommend that people do this with their GPUs though. Also, on a side note, coil whine is not quite as noticeable at this voltage.


----------



## J7SC

ZealotKi11er said:


> I think it depends on the gpu. If you have a slow xtx (low leakage) you can apply more voltage. If you have a fast xtxh, running 1.25-1.3v on it will require cold air + wc.


Yeah, that's along the lines of yesteryear's GPUz ASIC value description and makes good sense...if it were me, 1.256 peak v, 107 C max Hotspot (shown) and also 3,000 rpm fans would send me straight to the order page for water-cooling parts. But I fully recognize that other folks have different opinions / priorities. I pretty much w-cool anything in sight. I will say though that there's a relationship between high voltage, high temps and (long-term) degradation. That said, long-term might mean years...


----------



## ZealotKi11er

Godhand007 said:


> Could you explain these terms xtx (low leakage)/ fast xtxh? Are you talking about clocks, cause 2750Mhz plus on my card is up there with xtxh models.


My xtxh does 2850MHz at the default voltage of 1.2v. If I set 1.25-1.3v, assuming I can keep it cool, it would be 29xx. Your card will never hit those clocks.


----------



## CS9K

J7SC said:


> Yeah, that's along the lines of yesteryear's GPUz ASIC value description and makes good sense...if it were me, 1.256 peak v, 107 C max Hotspot (shown) and also 3,000 rpm fans would send me straight to the order page for water-cooling parts. But I fully recognize that other folks have different opinions / priorities. I pretty much w-cool anything in sight. I will say though that there's a relationship between high voltage, high temps and (long-term) degradation. That said, long-term might mean years...


Aye, high voltage at light current won't do _too_ much unless the voltage is excessive. High voltage and High Current will eat silicon for breakfast, especially if temperatures aren't controlled properly; heat will exacerbate the issue.
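To put rough numbers on the temperature side of this, the usual back-of-the-envelope tool is an Arrhenius-style acceleration factor. This is only an illustrative sketch: the 0.7 eV activation energy is a generic ballpark for electromigration-type wear, not an AMD/Navi21 figure.

```python
import math

def arrhenius_acceleration(t_cool_c: float, t_hot_c: float, ea_ev: float = 0.7) -> float:
    """Rough Arrhenius acceleration factor between two junction temps.

    ea_ev is an assumed activation energy (~0.7 eV is a common
    ballpark for electromigration-type wear, not a Navi21 figure).
    """
    k = 8.617e-5  # Boltzmann constant in eV/K
    t1 = t_cool_c + 273.15  # cooler junction temp, Kelvin
    t2 = t_hot_c + 273.15   # hotter junction temp, Kelvin
    return math.exp((ea_ev / k) * (1.0 / t1 - 1.0 / t2))

# Wear rate at a 107C hotspot vs a water-cooled 60C hotspot:
print(f"~{arrhenius_acceleration(60, 107):.0f}x faster aging (rough estimate)")
```

By this crude estimate, a 107C hotspot ages the silicon roughly 20x faster than a 60C one, which is why the high-voltage crowd keeps getting pointed at water cooling. It says nothing about absolute lifespan, only relative rates.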


----------



## Godhand007

ZealotKi11er said:


> My xtxh does 2850MHz at the default voltage of 1.2v. If I set 1.25-1.3v, assuming I can keep it cool, it would be 29xx. Your card will never hit those clocks.


Well, you are wrong there. As you said, it's a matter of keeping things cool along with supplying the necessary voltage. BTW, I was referring to the stock and factory OCs of XTXH chips like these. I know XTXH chips are the better-binned ones; I am not competing with those chips. Hope that clears up any misconception.

@CS9K @J7SC Can you explain the _xtx (low leakage)/ fast xtxh _point?


----------



## Godhand007

jonRock1992 said:


> I've been running 1287mV in my games @2875MHz with 450W PL. I only game for like 1 or 2 hours a few days a week, but everything good so far. My temps are really good for this config though, so that's not worrying for me (hotspot usually in low 60's in demanding games). I'll let you guys know if anything happens lol. Been running this config for over a month. I definitely don't recommend that people do this with their gpu's though. Also, on a side note, coil whine is not quite as noticeable at this voltage.


So there is a great deal of difference among binned chips as well, hmm. You had to up your voltage by 87mV to get to almost the same clocks as someone who can do it at default (refer to this post).
I reckon I can get around 25 MHz more from my GPU with some tweaking, to get it up to 2775 MHz, but I want to test those clocks rigorously. Are your clocks GT2 stable? Do you encounter issues with 3DMark tests if you lower your max power to around ~400 W?

BTW, Nice setup, temps-wise. Maybe one day I will get into water cooling as well.


----------



## Godhand007

CS9K said:


> Aye, high voltage at light current won't do _too_ much unless the voltage is excessive. High voltage and High Current will eat silicon for breakfast, especially if temperatures aren't controlled properly; heat will exacerbate the issue.


So do my settings fall under the "_high voltage and high current_" scenario? They don't seem to, unless the bad effects scale exponentially with voltage.


----------



## cfranko

jonRock1992 said:


> I've been running 1287mV in my games @2875MHz with 450W PL. I only game for like 1 or 2 hours a few days a week, but everything good so far. My temps are really good for this config though, so that's not worrying for me (hotspot usually in low 60's in demanding games). I'll let you guys know if anything happens lol. Been running this config for over a month. I definitely don't recommend that people do this with their gpu's though. Also, on a side note, coil whine is not quite as noticeable at this voltage.


What’s your coolant temperature after 1-2 hours of gaming?


----------



## J7SC

Godhand007 said:


> Well, you are wrong there. As you said, it's a matter of keeping things cool along with supplying the necessary voltage. BTW, I was referring to stock and factory OCs for XTXH chips like these. I know XTXH chips are better-binned ones. I am not competing with those chips. Hope it clears any misconception.
> 
> @CS9K @J7SC Can you explain the _xtx (low leakage)/ fast xtxh _point?


...this old screenie via TPU should help:

GPUz no longer shows ASIC quality, but a stock MSI AB voltage curve can be used to compare GPUs' quality along the lines of the above.


----------



## jonRock1992

Godhand007 said:


> So there is a great deal of difference among binned chips as well, hmm. You had to up your voltage by 87mv to get to almost the same clocks as someone who can do it at default (refer to this post)
> I reckon I can get around 25 MHz more from my GPU by some tweaking to get it up to 2775 MHz but I want to test these clocks rigidly. Are your clocks GT2 stable? Do you encounter issues with 3d mark tests if you lower your max power to around ~400 W?
> 
> BTW, Nice setup, temps-wise. Maybe one day I will get into water cooling as well.


My bin is not the greatest for an XTXH chip, but I was able to get above 26k for the Timespy GPU score. I have not tried decreasing the power limit that low in Timespy because I was trying for higher scores, not lower ones. It's stable at these clocks in my games and in back-to-back Timespy runs. I don't wanna loop Timespy with these settings.


----------



## TaunyTiger

jonRock1992 said:


> I've been running 1287mV in my games @2875MHz with 450W PL. I only game for like 1 or 2 hours a few days a week, but everything good so far. My temps are really good for this config though, so that's not worrying for me (hotspot usually in low 60's in demanding games). I'll let you guys know if anything happens lol. Been running this config for over a month. I definitely don't recommend that people do this with their gpu's though. Also, on a side note, coil whine is not quite as noticeable at this voltage.


I'm about to try 450W on my 6900XT liquid devil ultimate, what are you running on TDC limits? GFX & SOC?


----------



## jonRock1992

TaunyTiger said:


> I'm about to try 450W on my 6900XT liquid devil ultimate, what are you running on TDC limits? GFX & SOC?


450/450/77


----------



## TaunyTiger

jonRock1992 said:


> 450/450/77


And voltage @ 1287mV?


----------



## jonRock1992

TaunyTiger said:


> And voltage @ 1287mV?


Lol yeah. But like I said, I don't recommend it lol. I don't care if my card degrades a little early.


----------



## CS9K

Godhand007 said:


> So do my settings fall under the "_high voltage and high current_" scenario? They don't seem to, unless the bad effects scale exponentially with voltage.
> 
> View attachment 2536775
> View attachment 2536776


Generally speaking, I wouldn't say it scales exponentially, but temperature and current _do_ compound one another, and the effect worsens further as voltage increases (relative to the "normal" voltage for the specific ASIC).

This is a good place to start reading, followed up by "Transistor Aging":

Reliability (semiconductor) - Wikipedia (en.wikipedia.org)


----------



## Godhand007

J7SC said:


> ...this old screenie via TPU should help:
> 
> GPUz no longer shows ASIC quality, but a stock MSI AB voltage curve can be used to compare GPUs' quality along the lines of the above


Oh, I remember this. Basic ASIC quality stuff. I would have got it in one if it had been put that clearly. The GTX 680 was the first _top-of-the-line_ card that I bought. It's been 10 years since then (getting old).


----------



## J7SC

Godhand007 said:


> Oh, I remember this. Basic ASIC quality stuff. Would have got it in one if it was being discussed more clearly. GTX 680 was the first _top-of-the-line_ card that I bought, It's been 10 years since then (getting old).


...XOCers often prefer 'high leakage' chips, per the earlier screenie... but apparently GPUz stopped publishing it because too many folks would RMA their low-ASIC-% cards, even if they performed at or beyond spec.


----------



## Henrik9979

Hello, I'm new on this forum. I have an MSI Gaming X Trio 6900 XT. I have modified a full-cover water block from a Gaming X Trio 6800 XT and I really want to put it to the test.
But I can't seem to get above 1175mV; every time I try, it either crashes or locks into a low power state. I use MPT, by the way.


----------



## Blameless

ZealotKi11er said:


> My xtxh does 2850MHz at the default voltage of 1.2v. If I set 1.25-1.3v, assuming I can keep it cool, it would be 29xx. Your card will never hit those clocks.


My XTXH will never hit those clocks either. At default 1.2v, even if I keep temps in check, Time Spy will crash past ~2750MHz actual, and Night Raid GT2 can't loop at 2600MHz actual.



jonRock1992 said:


> My bin is not the greatest for an XTXH chip, but I was able to get above 26k for Timespy GPU score.


I can barely crack 24k on mine, with a lot of work, at settings that aren't anywhere near what I'd call stable.



Henrik9979 said:


> Hello I'm new on this forum. I have a MSI Gaming Trio X 6900 xt. I have modified a full cover water block from at trio X 6800 xt and I really want to put it to the test.
> But I can't seem to get above 1175mv every time I try it either crashes or locks on low powerstate. I use MPT by the way.


1175mV is the highest you can set on a 6800 XT without leveraging temperature dependent vmin.


----------



## lestatdk

Henrik9979 said:


> Hello I'm new on this forum. I have a MSI Gaming Trio X 6900 xt. I have modified a full cover water block from at trio X 6800 xt and I really want to put it to the test.
> But I can't seem to get above 1175mv every time I try it either crashes or locks on low powerstate. I use MPT by the way.


I have the same card and also a modified 6800 XT Alphacool block. Guess it's you from the Reddit thread??

You need to use the temperature-dependent voltage tweak in MPT to go above 1175mV. But it's only recommended for benchmarks, as it applies a constant high voltage to the card, which could degrade it over time.

Personally, I'm just satisfied with a good OC, and my temps have dropped 60C or so from the terrible stock cooler.


----------



## ZealotKi11er

RDNA2 getting destroyed.


----------



## lestatdk

ZealotKi11er said:


> RDNA2 getting destroyed.


A little bit better in 1440p.


----------



## Godhand007

lestatdk said:


> A little bit better in 1440p.
> 
> View attachment 2536882


What's the reason for the discrepancy among reviewers? It seems they are testing the same area in-game.


----------



## bloot

ZealotKi11er said:


> RDNA2 getting destroyed.
> 
> View attachment 2536881


Well, they disabled SAM on the AMD cards... and besides that, Radeon cards perform much worse in their tests than anywhere else.


----------



## Blameless

Could some of those with XTXH parts revert to stock settings (MPT and Wattman) and tell me what frequency "Automatic Tuning - Overclock GPU" reports? Also, if any of you are so inclined, a report of the highest max-frequency setting at which you can pass at least 10 minutes of looped Night Raid graphics test #2 would be appreciated.

I'm trying to decide if I should keep this card or sell it and try another. I was hoping to match the clocks of my 6800 XT, but it's falling a solid 100-125MHz short at similar voltage/temps. The only potential saving grace is how well the SoC and FCLK can clock, but that's a very minor performance uplift.

Edit: Even comparisons with standard 6900 XT silicon would prove useful here.


----------



## ZealotKi11er

bloot said:


> Well, they disabled SAM on AMD cards... besides that, Radeon cards perform much worse than anywhere else on their tests.


I have a 3080 Ti and I know how it performs. I have not tried single player, but comparing an OCed 3080 Ti against an OCed 6900 XT, the 6900 XT is 15-20% faster. I did not even have SAM on.


----------



## bloot

ZealotKi11er said:


> I have 3080 Ti and I know how it performs. I have not tried single player but both 3080 Ti Oced vs 6900 XT OCed, the 6900 XT is 15-20% faster. I did not even have SAM on.


I tried both single player and multiplayer; single player is much more demanding in outdoor scenarios.

I don't know what's going on with HU's results, but they seem way off. I have SAM always enabled; haven't tried it off.

Sent from my Pixel 3 via Tapatalk


----------



## ZealotKi11er

bloot said:


> I tried both single player and multiplayer; single player is much more demanding in outdoor scenarios.
> 
> I don't know what's going on with HU's results, but they seem way off. I have SAM always enabled; haven't tried it off.
> 
> Sent from my Pixel 3 via Tapatalk


They had the same weird results with BF2042. Also, there is no reason not to enable SAM now, considering you can turn it off via Radeon settings.


----------



## ZealotKi11er

Finally playing with the 6900 XT. I have it set to 2800MHz @ 1.2v, which gives me about 2725-2735MHz actual clock. Power is set to 400W + 15%. I am usually at 350-400W. Playing Halo for ~20 mins, my card is hitting 75C. I don't even want to know what the hotspot is, but it's probably 20-30C higher. This is with a waterblock, 2x 360 rads and a stock 12900K. Fans are at ~1130rpm on the radiators.


----------



## J7SC

ZealotKi11er said:


> Finally playing with 6900 XT. I have it set to 2800MHz @ 1.2v which gives me about 2725-2735MHz actual clock. Power is set to 400W + 15%. I am usually 350W-400W. Playing Halo ~ 20 mins my card is hitting 75C. I do not want to even know how much the hotspot is but probably 20-30C+. This is with waterblock with 2x 360 RAD and stock 12900K. Fans are ~ 1130rpm on the radiator.


Two quick questions:
1.) I can't recall if your card is an XTXH, so 1.2v fixed via TempDepVmin, or free to swing via XTXH bios?
2.) Does the 400W +/- include the additional 'shadow' 45W-50W discussed in this thread before when comparing, e.g., HWiNFO readings between an AMD 6900 XT and an NVIDIA RTX 3090?

I find that even when making allowances for the extra Watts, AMD's GPU chip heats up just a bit more than the NVidia one with similar paste/putty prep and w-cooling. On Hotspot, that delta is about 8C to 10C on my setups.


----------



## ZealotKi11er

J7SC said:


> Two quick questions:
> 1.) I can't recall if your card is an XTXH, so 1.2v fixed via TempDepVmin, or free to swing via XTXH bios?
> 2.) Does the 400W +- include the additional 'shadow' 45W-50W discussed in this thread before when comparing ie. HWInfo between AMD 6900XT and NVidia RTX 3090?
> 
> I find that even when making allowances for the extra Watts, AMD's GPU chip heats up just a bit more than the NVidia one with similar paste/putty prep and w-cooling. On Hotspot, that delta is about 8C to 10C on my setups.



Yes, this is 400W + 15%, plus VRM losses, so most likely ~450-500W total. Ran a TimeSpy GT2 loop and the system shut down from overtemperature. My 3080 Ti at 350W hits lower core temps than this does on water.

This is 1.2v free swing, not forced. I most likely need to try liquid metal and see what improvement I get.
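For anyone wondering where the "shadow watts" come from, the arithmetic looks roughly like this. The 92% VRM efficiency is an assumed round figure, not a measured one, and exactly what the MPT limit covers (fans, VRAM) varies by board:

```python
def board_power_estimate(mpt_limit_w: float, slider_pct: float, vrm_eff: float = 0.92) -> float:
    """Estimate power at the connectors from an MPT power limit.

    MPT's limit is ASIC power after the VRM; the Radeon software
    slider scales it by a percentage, and VRM conversion losses
    come on top. vrm_eff=0.92 is an assumption, not a measurement.
    """
    asic_power = mpt_limit_w * (1 + slider_pct / 100)  # what HWiNFO reports
    return asic_power / vrm_eff  # what the PSU actually delivers

# 400W MPT limit with the +15% Radeon slider:
print(f"~{board_power_estimate(400, 15):.0f}W drawn at the connectors")  # -> ~500W
```

Same math explains LtMatt's 347W + 15% = ~399W figure before any VRM losses are added on.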


----------



## tolis626

J7SC said:


> Two quick questions:
> 1.) I can't recall if your card is an XTXH, so 1.2v fixed via TempDepVmin, or free to swing via XTXH bios?
> 2.) Does the 400W +- include the additional 'shadow' 45W-50W discussed in this thread before when comparing ie. HWInfo between AMD 6900XT and NVidia RTX 3090?
> 
> I find that even when making allowances for the extra Watts, AMD's GPU chip heats up just a bit more than the NVidia one with similar paste/putty prep and w-cooling. On Hotspot, that delta is about 8C to 10C on my setups.


Well, I mean, NVidia's dies are much bigger than AMD's. Of course they're easier to cool given the same power.


----------



## J7SC

tolis626 said:


> Well, I mean, NVidia's dies are much bigger than AMD's. Of course they're easier to cool given the same power.


Yes, that certainly could explain some of it, but not all of it, IMO, in particular the AMD hotspot temp relative to its overall GPU temp... even at 560W, my Strix 3090 runs cooler (apart from VRAM).

NVIDIA RTX 3090 - 628 mm² / 8 nm, 28,300 million transistors

AMD 6900 XT - 520 mm² / 7 nm, 26,800 million transistors


----------



## ZealotKi11er

Part of the reason AMD runs hotter is clock speed. Another thing to note is that the NVIDIA core is actually more efficient in terms of perf/watt, but the G6X memory and memory controller eat too much of that power budget.


----------



## aslidop

Hey all - heading up to the mountains next week with an NCase M1 stuffed with a 5800X (cooled by an NZXT X53) and a Red Devil 6900XT (deshrouded, with NF-A12s set to intake) running off a Corsair SF750. Hoping to find a few hours to huddle in the garage and "enjoy" ambient temps below 0°C to try for some personal-best scores in 3DMark. In ~22°C ambient I've managed a TimeSpy graphics score of 22,778 so far - would love to see >23K!

Right now I'm running MPT @ 350W, 1130mV, 2400 min / 2650MHz max core, with memory at 2120MHz (normal timings; fast timings always lock up when benching). With fans set to max, the core temp never goes above 60°C and the hotspot hits about 85°C. Any suggestions on how to extract as much from this card as possible on air? What should I expect to hit? Should I disassemble the cooler entirely and repaste/re-pad? How high should I push MPT? Thanks in advance for any guidance!


----------



## cfranko

Unless I set a minimum frequency in Radeon software, Call of Duty: Warzone stutters a lot and the core clock randomly drops to 1000MHz and jumps back to 2200. Having to do manual tuning just to play this game makes no sense.


----------



## tolis626

cfranko said:


> Unless I set a minimum frequency in radeon software call of duty warzone stutters a lot and the core clock randomly drops to 1000 mhz and jumps back to 2200. Why do I have to do some manual tuning to play this game makes no sense.


Well, that's easy. It's because Warzone is fundamentally a problematic game. Yes, I get how much harder it is for the CPU to cope with 150 players in a huge map, but stuff like this infuriates me. The card downclocks itself when it's waiting on the CPU, which is why it drops in frequency. Honestly though, with min 2500/max 2600 set, I don't see any difference in performance or smoothness. The game runs pretty much the same; it's just that the GPU clock graph is a nice straight line instead of a jagged mess. Utilization is still a mess. And now I'm crashing in menus or in the pregame lobby in 1 out of 5 matches, so I'm going to chuck this into the "Warzone problems" bin and forget about it. I can't wait for BF 2042 to be fixed; I will quit Warzone in a heartbeat.


----------



## Henrik9979

lestatdk said:


> I have the same card and also a modified 6800XT Alphacool block. Guess it's you from the Reddit thread ??
> 
> You need to use the temperature dependent voltage tweak in MPT to go above 1175mV. But it's only recommended for benchmarks, as it will induce a stable high voltage to the card which could degrade it over time.
> 
> Personally I'm just satisfied with a good OC and my temps have dropped 60C or so from the terrible stock cooler.
> 
> 
> View attachment 2536868


Thank you! Yes, it is me from Reddit. 😁 This waterblock is amazing, combined with Thermal Grizzly Conductonaut.
I am trying to run Timespy at a stable 2.8GHz with a pretty high voltage of 1285mV.
The highest power draw is 470W and the hotspot peaks at 81C. It hits those numbers for a few seconds in the benchmark, then stabilizes at around 430W with a 71C hotspot.
So it is pretty awesome! My water temperature is around 35C.
I am curious to see how sustainable those numbers are during a game. I could try making the voltage control more temperature-dependent to minimise the stress on the core.


----------



## cfranko

tolis626 said:


> Well, that's easy. It's because Warzone is fundamentally a problematic game. Yes, I get how much harder it is on the CPU to cope with 150 players in a huge map, but stuff like this infuriates me. The card downclocks itself when it's waiting on the CPU, so that's why it drops in frequency. But honestly, with setting min 2500/max 2600, I don't see any difference in performance or smoothness. The game runs pretty much the same, but it's just that the GPU clock graph is a nice straight line instead of a jagged mess. Utilization is still a mess. And now I'm crashing in menus or in the pregame lobby in 1/5 matches, so I'm going to chuck this into the "Warzone problems" bin and forget about it. I can't wait for BF 2042 to be fixed, I will quit Warzone in a heartbeat.


Yeah, and my 5900X sits at 80C with a custom loop. It's torturing the CPU. A pretty bad game in terms of performance, as you said.


----------



## Henrik9979

cfranko said:


> Yeah and my 5900X sits at 80c with a custom loop, Its torturing the cpu. Pretty bad game in terms of performance as you said.


Try the program called Process Lasso: set the first 4-7 cores/threads to be used by all your other programs, and let the game have the rest to itself.
It has helped me in many scenarios where I had stutters due to CPU bottlenecks.
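If you'd rather not install extra software, Windows' built-in `start /affinity` takes the same kind of hex core bitmask that Process Lasso builds for you. A quick sketch of the mask math (the 16-thread core split here is just an example, not a recommendation for any specific CPU):

```python
def affinity_mask(cores) -> str:
    """Build the hex bitmask that `start /affinity` (and Process
    Lasso) use: one bit per logical core, where bit N = core N."""
    mask = 0
    for c in cores:
        mask |= 1 << c  # set the bit for this logical core
    return hex(mask)

# Give a game logical cores 8-15 on a hypothetical 16-thread CPU,
# leaving cores 0-7 free for background programs:
print(affinity_mask(range(8, 16)))  # -> 0xff00
```

Then from cmd, `start /affinity FF00 game.exe` pins the game to logical cores 8-15.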


----------



## LtMatt

ZealotKi11er said:


> Finally playing with 6900 XT. I have it set to 2800MHz @ 1.2v which gives me about 2725-2735MHz actual clock. Power is set to 400W + 15%. I am usually 350W-400W. Playing Halo ~ 20 mins my card is hitting 75C. I do not want to even know how much the hotspot is but probably 20-30C+. This is with waterblock with 2x 360 RAD and stock 12900K. Fans are ~ 1130rpm on the radiator.


Sounds very similar to my Toxic running Halo.

Running 2760/2860, 1.212v using Vmin, power limit set to 347W in MPT, +15% in RS = 399W.

Generally below 400W according to my HWINFO64 overlay.

Halo Infinite - Multiplayer | 6900 XT Toxic Extreme | 2160P Max Settings - YouTube

I actually see higher temperature spikes in Call of Duty Vanguard; I think it's due to the higher FPS at 4K max settings.


----------



## Blameless

I think I'm going to try my luck with another sample. I could probably justify sending this one back on account of the coil whine alone--which sounds like someone holding an angry cicada over a candle flame--but I've got a buddy who will buy this one at a modest discount, so I'll just buy another while they're still in stock.

At 1.2v, the highest maximum clock that would consistently pass a bench run of Time Spy was 2700, and that was nowhere near long-term stable. Memory also wasn't fully stable past 2124 fast. I did, however, get a seemingly stable SoC clock of 1350, FCLK of 2160, and PhyClk of 891 out of it, but even with all of these ~10% over stock, the performance gains only amounted to ~1%.


----------



## LtMatt

Blameless said:


> I think I'm going to try my luck with another sample. Could probably justify sending this one back on account of the coil whine alone--which sounds like someone is holding an angry cicada over a candle flame--but I've got a buddy who will buy this one at a modest discount, so I'll just buy another while they're still in stock.
> 
> At 1.2v the highest maximum clock that would consistently pass a bench run of Time Spy was 2700, and that was no where near long term stable. Memory also wasn't fully stable past 2124 fast. I did get a seemingly stable SoC clock of 1350, FCLK of 2160, and PhyClk of 891, out of it however, but even with all of these at ~10% over stock, the performance gains only amounted to ~1%.


Seems like you don't have the greatest sample, so have another roll of the dice.

The Toxic I am currently using is the best XTXH I've had yet.

I think it would be very good under proper water cooling, seeing as it can run almost 100MHz higher core clocks than my last Toxic (which I sold).

It definitely runs hotter than my last one, so perhaps a leakier chip too.


----------



## Skinnered

Got my Toxic EE LQ; testing and mostly playing games. Most games are stable with 2820 core set in Radeon settings.
Mem is at 2132 fast timings.

Only BF2042 is another story: forget 2820, I had to lower it to 2730!

MPT 420/420W, 65 SoC, 1400 FCLK and 2719 clk (on the phone, so from memory).

Getting near RTX 3090 performance, better in a few games (FC6, Quake Darkplaces with POM + RTGI ReShade, Outriders, Halo Infinite campaign, Hitman 3), and I find the RX 6900 XT often smoother with less lag, but overall the 3090 is still king. All at 5K with ReShade RTGI.

I also notice slightly better quality, more fidelity, with the Radeon, but it's difficult to be sure without a side-by-side comparison.

I sold the 3090; needed the money for another hobby.

Will do a Timespy bench when the winter weather comes in about 7-10 days.


----------



## IIISLIDEIII

Hi everyone, I'm considering the purchase of a 6900 XT. I'm undecided between the Gigabyte Aorus Xtreme WaterForce WB and the PowerColor Liquid Devil Ultimate. Has anyone here had these models, and can you kindly give me an opinion?
Thank you


----------



## Godhand007

Skinnered said:


> Got my Toxic EE LQ, testing and playing mostly games. Most games are stable with 2820 core set in radeonsettings.
> Mem is on 2132 fast timings.
> 
> Only Bf2042 is an other story, forget 2820, and had to lower to 2730!
> 
> Mpt 420, 420 W, 65Soc ,1400 fclk and 2719 clk (on the phone, so by heart).
> 
> Getting near rtx3090 perf., better in a few (FC6, Quake Darkplaces with pom + rtgi reshade, Outriders, Halo infinite Campaign, Hitman3) and find the rx6900xt often smoother and less lag, but overall the 3090 is still king. All with 5k and reshade with rtgi.
> 
> I also do notice some minor better quality, more fidelity with the Radeon, but its difficult to be sure without side by side comparison.
> 
> I sold the 3090, needed the money for an other hobby .
> 
> Will do a Timespy bench when winterweather is coming about 7-10 ,days


I have seen lower stable memory clocks on my XTXH cards compared to reference 6900 XTs. I, and many others, can run 2150 (fast timings) without any issue on reference 6900 XTs.
On a side note, in which particular areas/locations do you find BF2042 crashing? It's a bad game, but I might just buy it for OC testing purposes.


----------



## Blameless

LtMatt said:


> It definitely runs hotter than my last one, so perhaps a leakier chip too.


Leakier is expected, but leakier _plus_ lower clocking is certainly undesirable.



Skinnered said:


> Mem is on 2132 fast timings.


2124 fast should almost always be faster than anything in the 2130s, due to where the timing straps are. Pretty small difference though.



Skinnered said:


> Only Bf2042 is an other story, forget 2820, and had to lower to 2730!


Out of curiosity, does 3DMark Night Raid pass test 2 at 2820?


----------



## J7SC

Blameless said:


> (...)
> Out of curiosity, does 3DMark Night Raid pass test 2 at 2820?


I think I mentioned before that - respectfully! - I wonder a bit about the usefulness of Night Raid as a test for 'big bruiser' GPUs such as 6900 XTs and 3090s, as much as I like its visuals. UL Benchmarks, the maker of 3DMark, tags it this way on their support pages: "3DMark Night Raid is a DirectX 12 _benchmark for laptops, notebooks, tablets and other mobile computing devices with integrated graphics. 3DMark Night Raid includes two Graphics tests and a CPU test."_

This doesn't mean that it can't be run on big discrete GPUs rather than the intended integrated graphics, but I feel there's an increased risk, given the fps (well over 1100) it can reach, when folks less experienced than you are already running on the ragged edge via MPT PL, TempDepVmin, etc.

Again, that's just my opinion, but there have been reports of super-high fps (Amazon's New World comes to mind) whereby certain discrete GPUs started to operate beyond their ability to catch transient spikes and broke. FYI, I have never lost a GPU to Night Raid (and don't even have Amazon's New World), but on an older EVGA 980 Classified with a custom BIOS, it is the only app that creates audible coil whine.


----------



## Blameless

J7SC said:


> I think I mentioned before that - respectfully ! - wonder a bit about the usefulness of Night Raid as a test for 'big bruiser' GPUs such as 6900XTs and 3090s, as much as I like the visuals of it. UL/Benchmarks, the parent of 3DMark, tags it this way on their support pages: "3DMark Night Raid is a DirectX 12 _benchmark for laptops, notebooks, tablets and other mobile computing devices with integrated graphics. 3DMark Night Raid includes two Graphics tests and a CPU test." _
> 
> This doesn't mean that it can't be run on big discreet GPUs rather than the intended integrated graphics, but I feel there's an increased risk, given the fps (well over 1100) it can reach, when some less-experienced folks than you are already running on the ragged edge via MPT PL, TempDepVmin etc.
> 
> Again, that's just my opinion, but there have been reports re. super-high fps (Amazon's New World comes to mind) whereby certain discreet GPUs started to operate beyond their ability to catch transient spikes and broke. FYI, I have never lost a GPU to Night Raid (and don't even have Amazon's New World), but on an older EVGA 980 Classified w/ a custom bios, it is the only app that creates audible coil whine.


As far as practicality goes, I expect to be able to run any title in my library on my $1600 video card. Some of these are old. Some of them need wrappers--which have the side effect of increasing the number of draw calls that can be issued on modern hardware--to run correctly on modern OSes. Some of these titles can see mid-four-digit frame rates (I can see six thousand fps in UT2003 at points) and I shouldn't feel compelled to cap them to run them on modern hardware.

For testing purposes, I find Night Raid to be quite useful, as--once I've dialed in rock solid memory clocks with other tests (usually mining)--it's proving to be the only GPU test I need for Navi21. If it will pass an hour of Night Raid, it will run any game I've yet encountered (that doesn't have a software bug) and any of the benchmarks targeting higher performance parts, without fail, for as long as I care to run them. Something about it means it crashes 50-150MHz below anything else I can find.

More philosophically, I'm of the school of thought that if a piece of software can be executed on a part at all, that part should be able to execute that software, 24/7, for the entire duration of its useful life.

If power densities make this impractical uncapped (as they generally have with the more demanding permutations of things like OCCT or FurMark for the last decade or so), then current limiters are an unfortunate but acceptable way to address the issue while ensuring clocks high enough to be competitive in most real-world tasks. And that's what ATi/AMD and NVIDIA have been doing since the Radeon HD 6000 and GTX 500 series, when they started introducing power limiters (something that irked me to no end at first, but which I've come to accept as a practical necessity).

A VRM that cannot keep transients under control, at any frame rate, either has a design flaw, has a manufacturing defect, is well past its design lifespan, or is being run way out of spec in some other way. Most of these XTXH parts have massively overbuilt VRMs; if any test that can be cooled acceptably damages them through power delivery, the card was extremely broken.

Regarding New World: that title has mostly been killing parts acknowledged to have defects--with early EVGA 3090 FTW3 boards being notable examples. Few other models, or revisions of that model, seem to have unusually high failure rates...QA is not perfect, some parts die, and if New World didn't kill them, something else almost certainly would have.

Personally, when I'm spending this much on a part, I don't have much tolerance for fundamental design flaws or aggressive cost cutting. I'm already annoyed at the general lack of encapsulated chokes on these parts. Encapsulating them and properly securing them to the board would make them dead silent. Not doing so saves these AIBs what, five dollars? The whine was so bad on my 6800 XT that I filled in the chokes myself; I wasn't able to do a perfect job of it, though it was a major improvement. I can excuse that oversight as it won't actually cause harm to the card. Filtering capacitors are another matter...if it doesn't have enough filtering to survive worst-case real-world transients, I will bin for a sample that does at the AIB's expense, or until they cut me off. There is no excuse for a part in this class to have crappy electrical design. I don't personally expect this to be an issue with the ASRock OCF, as the VRM (aside from not using fully encapsulated chokes) seems to be among the best available, if not the best.

_All that said, you do have a point_. My standards are likely not typical and there is a very real possibility that someone with these parts is more interested in mainstream gaming than making sure the part they are gaming on doesn't have any hidden defects that may never show up in their daily use.

@Skinnered regarding my prior post, be aware that Night Raid will reach low four figure frame rates and that this could conceivably reveal issues with weak samples.



IIISLIDEIII said:


> hi everyone, i'm considering the purchase of 6900xt, i'm undecided between gigabyte aorus xtreme waterforce wb and powercolor liquid devil ultimate, has anyone here had these models and can you kindly give me an opinion?
> thank you







Anecdotally, I'd also prefer the PowerColor because they tend to use better thermal pads, in my experience.


----------



## CS9K

Blameless said:


> As far as practicality goes, I expect to be able to run any title in my library on my $1600 video card. Some of these are old. Some of them need wrappers, which have the side effect of increasing the number of draw calls that can be issued on modern hardware, to run correctly on modern OSes. Some of these titles can see mid four-digit frame rates (I can see six-thousand fps in UT2003 at points) and I shouldn't feel compelled to cap them to run them on modern hardware.
> 
> For testing purposes, I find Night Raid to be quite useful, as--once I've dialed in rock solid memory clocks with other tests (usually mining)--it's proving to be the only GPU test I need for Navi21. If it will pass an hour of Night Raid, it will run any game I've yet encountered (that doesn't have a software bug) and any of the benchmarks targeting higher performance parts, without fail, for as long as I care to run them. Something about it means it crashes 50-150MHz below anything else I can find.
> 
> More philosophically, I'm of the school of thought that if a piece of software can be executed on a part at all, that part should be able to execute that software, 24/7, for the entire duration of its useful life.


Keep in mind that GDDR6 has a basic, but effective, version of ECC running, so merely passing a test isn't a good measure of memory speed, performance, or stability.

It's a _massive_ time-sink, but Unigine Superposition hammers the absolute crap out of video memory, and thus far it's the best objective _measure_ of GPU memory performance and stability given how consistent the results are, assuming one sets a fixed core clock speed and is not power limited at all during the test run.

Test until you find a plateau, then use the setting one step _before_ the plateau, to ensure you're not running into ECC-caused performance degradation.

That said, Superposition sucks for core-clock stability, as it can usually pass runs 30-50MHz faster than Time Spy 1 and 2 can, so use a clock speed that you _know_ is stable for your particular GPU.
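The "test until you find a plateau, then back off one step" procedure above can be sketched roughly in code. This is only an illustration: `run_benchmark` is a hypothetical stand-in for launching a fixed-core-clock Superposition run and reading back the score, and all the numbers are invented.

```python
# Sketch of the plateau-search procedure for GDDR6 memory tuning.
# `run_benchmark(mem_clock)` is a hypothetical stand-in for running
# the benchmark at that memory clock and returning a score.

def pick_memory_clock(clocks, run_benchmark, tolerance=0.005):
    """Return the highest clock that still produced a real score gain.

    Once error-correction retries kick in, raising the clock stops
    helping (or hurts), so stop at the last step before the plateau.
    """
    best_clock = clocks[0]
    best_score = run_benchmark(best_clock)
    for clock in clocks[1:]:
        score = run_benchmark(clock)
        # Require a meaningful improvement, not just run-to-run noise.
        if score > best_score * (1 + tolerance):
            best_clock, best_score = clock, score
        else:
            break  # plateau (or regression): keep the previous step
    return best_clock

# Example with a made-up score curve that plateaus past 2100 MHz:
scores = {2000: 9500, 2050: 9700, 2100: 9850, 2150: 9855, 2200: 9700}
chosen = pick_memory_clock(sorted(scores), scores.get)  # -> 2100
```

The `tolerance` knob is the judgment call: it decides how big a gain counts as "still scaling" rather than noise.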


----------



## Azyrion

Hi guys,
I managed to snag myself a Sapphire RX 6900 XT Nitro+ SE at a "reasonable" price, but I have an issue with it: it randomly freezes and reboots. It doesn't matter if it is during gaming or just idling on the desktop/browsing the net. Do you think an 850W PSU is too weak for the spikes of this GPU? Btw, I am not daisy-chaining the sockets.

My specs:
CPU: Ryzen 7 3700X (stock clock)
GPU: Sapphire Nitro+ RX 6900 XT SE
MB: MSI X570 Tomahawk Wifi
RAM: GSkill 16GB CL14 3200 TridentZ (XMP)
PSU: Corsair RM850x (Black and white label)


----------



## lawson67

LtMatt said:


> Seems like you don't have the greatest sample, so have another roll of the dice.
> 
> The Toxic I am currently using is the best XTXH I've had yet.
> 
> I think it would be very good under proper water cooling seeing as it can run almost 100Mhz higher core clocks than my last Toxic (which I sold).
> 
> It definitely runs hotter than my last one, so perhaps a leakier chip too.


I had to send my Toxic Extreme back as it was artifacting all the time. At first it was only when overclocked, then it became constant, even with Radeon Chill engaged and FPS limited in World of Tanks, a DirectX 9 game! It was even artifacting at stock clocks. Anyhow, the replacement they sent is as fast as my old PowerColor: stock clock sits at 2619mhz, the PowerColor sat at 2634mhz, and both these cards can bench TS at 2870mhz most of the time, but both are stable @2850 in TS @1130mv. I've seen some people say that a higher stock clock doesn't mean anything, but after having four RX 6900 XTs I have to disagree. Maybe not everyone knows how to get the best out of these cards, but in my experience, the higher the stock clock, the better the silicon is!

Matt, the reason your new card is running slightly hotter than your old card is because it clocks higher; the higher it clocks, the more power it draws.


----------



## lawson67

Azyrion said:


> Hi guys,
> I managed to snag myself a sapphire rx 6900 xt nitro se at a "reasonable" price but I have an issue with it as it randomly freezes and reboots. It doesn't matter if it is during gaming or just idling in desktop/browsing the net. Do you think a 850W PSU is too weak for the spikes of this gpu? Btw. I am not daisy chaining the sockets.
> 
> My specs:
> CPU: Ryzen 7 3700X (stock clock)
> GPU: Sapphire Nitro+ RX 6900 XT SE
> MB: MSI X570 Tomahawk Wifi
> RAM: GSkill 16GB CL14 3200 TridentZ (XMP)
> PSU: Corsair RM850x (Black and white label)


DDU the driver and reinstall in safe mode. If it's still doing it, format the PC. If it's still doing it and you bought it brand new from a store, send it back. An 850W PSU should just about be OK as long as you don't add more wattage via MPT, though I would sooner go with a 1000W PSU. However, if you're crashing on the desktop, that is not your PSU, as the card is sleeping most of the time you're browsing the web or sat on the desktop.


----------



## LtMatt

lawson67 said:


> I had to send my Toxic Extreme back as it was artifacting all the time. At first it was only when overclocked, then it became constant, even with Radeon Chill engaged and FPS limited in World of Tanks, a DirectX 9 game! It was even artifacting at stock clocks. Anyhow, the replacement they sent is as fast as my old PowerColor: stock clock sits at 2619mhz, the PowerColor sat at 2634mhz, and both these cards can bench TS at 2870mhz most of the time, but both are stable @2850 in TS @1130mv. I've seen some people say that a higher stock clock doesn't mean anything, but after having four RX 6900 XTs I have to disagree. Maybe not everyone knows how to get the best out of these cards, but in my experience, the higher the stock clock, the better the silicon is!
> 
> Matt, the reason your new card is running slightly hotter than your old card is because it clocks higher; the higher it clocks, the more power it draws.


That's probably true if you water cool with a higher stock clock. For stock cooling I've found the opposite is true, which makes sense. 

Congrats on the new sample, sounds great. Is that using the stock cooler or under water?

I've found that undervolting stops working once you go past 2600Mhz (ish) core clock. Have to undervolt using MPT at high core clocks. 

Is that what you are doing?


----------



## Blameless

CS9K said:


> Keep in mind that GDDR6 has a basic, but effective, version of ECC running, so merely passing a test isn't a good measure of memory speed, performance, or stability.
> 
> It's a _massive_ time-sink, but Unigine Superposition hammers the absolute crap out of video memory, and thus far is the best objective _measure_ of gpu memory and gpu-memory-stability given how consistent the results are, assuming one sets a core clock speed, and is not power limited at all during the test run.


I find that any of the Ethash/Dagger-Hashimoto based miners are even better than Superposition. Instability is revealed via hashrates that fail to climb with clock speed, or that spontaneously collapse to much lower levels. Generally, I can pass loops of Superposition 4K with memory clocks that are still scaling up in performance, yet are often immediately unstable in Ethash.

Normally, I'll use whatever cooling settings I need to keep the GDDR6 around 85C (which is not practical in Superposition because it overheats the core before the memory gets that hot), then turn up memory clocks until the hashrate stops improving, and back off slightly. I'll then reset Wattman settings and reload the OC several times during the test, sometimes stopping mining to run another test, to make sure the hash rate always returns to elevated levels.

Sometimes even mining can take a while to diagnose issues (days), as there will be rare errors, seemingly out of nowhere, that either cause hashrates to drop, or can even reboot the system (especially if ReBAR is enabled). I don't really mind though, as using mining as a test doesn't seem like as much lost time as most other tools that I don't get paid to run.
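The hashrate-based check described above (unstable memory shows up as a rate that fails to climb with the clock bump, or that collapses mid-run) can be sketched as follows. The readings, thresholds, and function names are invented for illustration; no real miner API is being used.

```python
# Toy check for the mining-based memory stability test described above.
# samples: periodic MH/s readings; expected: the rate this clock should give.

def hashrate_looks_stable(samples, expected, collapse_ratio=0.9):
    """Flag a run as unstable if the average never reaches ~expected,
    or if any individual reading collapses well below the running level."""
    if not samples:
        return False
    avg = sum(samples) / len(samples)
    if avg < expected * collapse_ratio:
        return False  # hashrate failed to climb with the clock bump
    # A single collapsed reading also indicates silent memory errors.
    return all(s >= avg * collapse_ratio for s in samples)

# A healthy run vs. one that collapses partway through:
good = [64.1, 64.3, 64.0, 64.2]
bad = [64.1, 64.2, 41.0, 40.8]   # rate collapsed mid-run
```

In practice one would feed this from the miner's periodic log lines; the point is that the *trend* of the rate, not a single pass/fail, is the signal.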




CS9K said:


> Test until you find a plateau, then use the setting one step _before_ the plateau, to ensure you're not running into ECC-caused performance degradation.
> 
> That said, Superposition sucks for core-clock stability, as it can usually pass runs 30-50MHz faster than Time Spy 1 and 2 can, so use a clock speed that you _know_ is stable for your particular GPU.


Also a phenomenon I've noticed. Things like Superposition and Heaven will pass at absurd clocks. Time Spy 2 is significantly worse, but Night Raid has proven to be about 100MHz below Time Spy, which is why it's become my primary core clock test.


----------



## lawson67

LtMatt said:


> That's probably true if you water cool with a higher stock clock. For stock cooling I've found the opposite is true, which makes sense.
> 
> Congrats on the new sample, sounds great. Is that using the stock cooler or under water?
> 
> I've found that undervolting stops working once you go past 2600Mhz (ish) core clock. Have to undervolt using MPT at high core clocks.
> 
> Is that what you are doing?


The first Sapphire Extreme I had, the one I sent back due to artifacting, was never on the waterblock, as I believed right from the get-go that it had a problem and didn't want to take it apart while I was considering an RMA. The new one, once tested, was put on the waterblock: on the stock cooler, stock clocks were 2615mhz, and putting it on the waterblock saw the stock clock go to 2619mhz, so a slight bump up. As for the undervolting, I've found with both the PowerColor and my new Toxic that the Wattman offset does work over 2600mhz. If you add too much voltage in Wattman you can see crashes in GT1, but GT2 needs more voltage, so there is a balance to find: not so much voltage that GT1 crashes from over-boosting, but enough to complete GT2 consistently. The better silicon samples will be able to run less voltage and complete both GT1 and GT2 consistently once you find this spot; for both the PowerColor and the new Sapphire this was around the 1130mv area. As for MPT, I am simply using it to bump up wattage and FCLK for my daily use.


----------



## LtMatt

lawson67 said:


> The first Sapphire Extreme I had, the one I sent back due to artifacting, was never on the waterblock, as I believed right from the get-go that it had a problem and didn't want to take it apart while I was considering an RMA. The new one, once tested, was put on the waterblock: on the stock cooler, stock clocks were 2615mhz, and putting it on the waterblock saw the stock clock go to 2619mhz, so a slight bump up. As for the undervolting, I've found with both the PowerColor and my new Toxic that the Wattman offset does work over 2600mhz. If you add too much voltage in Wattman you can see crashes in GT1, but GT2 needs more voltage, so there is a balance to find: not so much voltage that GT1 crashes from over-boosting, but enough to complete GT2 consistently. The better silicon samples will be able to run less voltage and complete both GT1 and GT2 consistently once you find this spot; for both the PowerColor and the new Sapphire this was around the 1130mv area. As for MPT, I am simply using it to bump up wattage and FCLK for my daily use.


Interesting. On all my Toxic samples, whenever my core clock is around 2650Mhz+, certainly 2700Mhz and higher, undervolting does nothing. I can move the slider down to 800 and it still runs at maximum voltage. I have to force it with MPT if I want the voltage to reduce. That is solved by reducing the core clock: as soon as I go down to 2600Mhz ish or lower, undervolting starts working again with the voltage slider. Voltage levels are tracked via the HWINFO64 overlay, so I know when it is and is not reducing voltage.


----------



## lawson67

LtMatt said:


> Interesting. On all my Toxic samples, whenever my core clock is around 2650Mhz+, certainly 2700Mhz and higher, undervolting does nothing. I can move the slider down to 800 and it still runs at maximum voltage. I have to force it with MPT if I want the voltage to reduce. That is solved by reducing the core clock: as soon as I go down to 2600Mhz ish or lower, undervolting starts working again with the voltage slider. Voltage levels are tracked via the HWINFO64 overlay, so I know when it is and is not reducing voltage.


Yes, even at 1130mv it can and will still draw more voltage when it needs it; the voltage never stays the same, depending on workload. For example, yesterday I played Metro Exodus Enhanced Edition for 2 hours (that game is great for testing the stability of your card's overclock), and HWINFO reported a max core draw of 1180mv after 2 hours of playing @2870mhz. But the offset is working: if I put the Wattman slider all the way to 1.2v, HWINFO shows a max core draw of 1.2v. The Wattman slider is just an undervolt offset, so I guess if you want strict voltage limits, use MPT; however, if you have no overheating problems, in my mind there is no need.


----------



## LtMatt

lawson67 said:


> Yes, even at 1130mv it can and will still draw more voltage when it needs it; the voltage never stays the same, depending on workload. For example, yesterday I played Metro Exodus Enhanced Edition for 2 hours (that game is great for testing the stability of your card's overclock), and HWINFO reported a max core draw of 1180mv after 2 hours of playing @2870mhz. But the offset is working: if I put the Wattman slider all the way to 1.2v, HWINFO shows a max core draw of 1.2v. The Wattman slider is just an undervolt offset, so I guess if you want strict voltage limits, use MPT; however, if you have no overheating problems, in my mind there is no need.


I think you misunderstood what I was saying. The undervolt offset does not change when I use the RS slider; only when the core clock speed is reduced to a certain point does the voltage reduce/the offset adjust. I track voltage changes via the HWINFO64 overlay, so I know what the voltage is under different types of load.

For example, I can run a 2700Mhz core clock in games with 1125mv set in MPT rather than 1.2. This gives an in-game voltage of 1.060-1.080v or thereabouts. Here the undervolt/offset works, as I am not using RS and used MPT to adjust it.

If I leave MPT at the default 1.2v and try to use the voltage slider in RS, it does not reduce the voltage/adjust the offset if my core clock is set too high. Even with the voltage slider at 800, or 1000, etc., it just does nothing once my core clock goes past a certain point. Obviously it's tied to the frequency/voltage curve somehow. That is what I am trying to say. It was the same on both my Toxics, and the Merc (regular XTX) I had before that.

I guess most people don't run 2700Mhz and higher core clocks and undervolt, so perhaps it's not very commonly seen or tried. Either that, or people don't verify with additional software to track what the actual voltage is, and just assume the slider is working.


----------



## lestatdk

Azyrion said:


> Hi guys,
> I managed to snag myself a sapphire rx 6900 xt nitro se at a "reasonable" price but I have an issue with it as it randomly freezes and reboots. It doesn't matter if it is during gaming or just idling in desktop/browsing the net. Do you think a 850W PSU is too weak for the spikes of this gpu? Btw. I am not daisy chaining the sockets.
> 
> My specs:
> CPU: Ryzen 7 3700X (stock clock)
> GPU: Sapphire Nitro+ RX 6900 XT SE
> MB: MSI X570 Tomahawk Wifi
> RAM: GSkill 16GB CL14 3200 TridentZ (XMP)
> PSU: Corsair RM850x (Black and white label)


I have the same PSU and motherboard as you. My 5800X CPU is overclocked, and even when using MPT to set 400W on my GPU (6900XT on water), I have no issues at all. The PSU should not be the issue.

If you crash on the desktop the GPU is idling , so not using much power anyway.


----------



## 99belle99

Azyrion said:


> Hi guys,
> I managed to snag myself a sapphire rx 6900 xt nitro se at a "reasonable" price but I have an issue with it as it randomly freezes and reboots. It doesn't matter if it is during gaming or just idling in desktop/browsing the net. Do you think a 850W PSU is too weak for the spikes of this gpu? Btw. I am not daisy chaining the sockets.
> 
> My specs:
> CPU: Ryzen 7 3700X (stock clock)
> GPU: Sapphire Nitro+ RX 6900 XT SE
> MB: MSI X570 Tomahawk Wifi
> RAM: GSkill 16GB CL14 3200 TridentZ (XMP)
> PSU: Corsair RM850x (Black and white label)


I also have a 3700X and 6900 XT with the same 850W PSU and everything is fine for many months now.


----------



## ZealotKi11er

LtMatt said:


> I think you misunderstood what I was saying. The undervolt offset does not change when i use RS slider, only when the core clock speed is reduced to a certain point does it reduce voltage/adjust the offset. I track voltage changes via HWINFO64 overlay so I know what voltage is under different types of load etc.
> 
> For example, I can run a 2700Mhz core clock in games with 1125mv set in MPT rather than 1.2. This gives an in-game voltage of 1.060-1.080v or thereabouts. Here the undervolt/offset works, as I am not using RS and used MPT to adjust it.
> 
> If i leave MPT at default 1.2v and try to use the voltage slider in RS, it does not reduce the voltage/adjust the offset if my core clock is set too high. Even with the voltage slider at 800, or 1000, etc. It just does nothing once my core clock goes past a certain point. Obviously it's tied to the frequency/voltage curve somehow. That is what I am trying to say. It was the same on both my Toxics, and the Merc (regular XTX) I had before that.
> 
> I guess most people don't run 2700Mhz and higher core clocks and undervolt so perhaps not very commonly seen or tried. Either that, or people don't verify with additional software to track what the actual voltage is, and just assume the slider is working.


With Radeon Settings you can either overclock or undervolt; you cannot do both. I posted a table a while back for the 6800XT showing how it works.


----------



## J7SC

Blameless said:


> As far as practicality goes, I expect to be able to run any title in my library on my $1600 video card.
> 
> 
> 
> Spoiler
> 
> 
> 
> Some of these are old. Some of them need wrappers, that have the side effect of increasing the number draw calls that can be issued on modern hardare, to run correctly on modern OSes. Some of these titles can see mid four-digit frame rates (I can see six-thousand fps in UT2003 at points) and I shouldn't feel compelled to cap them to run them on modern hardware.
> 
> For testing purposes, I find Night Raid to be quite useful, as--once I've dialed in rock solid memory clocks with other tests (usually mining)--it's proving to be the only GPU test I need for Navi21. If it will pass an hour of Night Raid, it will run any game I've yet encountered (that doesn't have a software bug) and any of the benchmarks targeting higher performance parts, without fail, for as long as I care to run them. Something about it means it crashes 50-150MHz below anything else I can find.
> 
> More philosophically, I'm of the school of thought that if a piece of software can be executed on a part at all, that part should be able to execute that software, 24/7, for the entire duration of it's useful life.
> 
> If power densities make this impractical uncapped (as it generally has with the more demanding permutations of things like OCCT or FurMark for the last decade or so), then current limiters are an unfortunate, but acceptable way to address this issue, while ensuring clocks high enough to be competitive in most real-world tasks. And that's what ATi/AMD and NVIDIA have been doing since the Radeon HD 6000 and GTX 500 series, when they started introducing power limiters (something that irked me to no end at first, but which I've come to accept as a practical necessity).
> 
> A VRM that cannot keep transients under control, at any frame rate, either has a design flaw, has a manufacturing defect, is well past its design lifespan, or is being run way out of spec in some other way. Most of these XTXH parts have massively overbuilt VRMs; if any test that can be cooled acceptably damages them through power delivery, the card was extremely broken.
> 
> Regarding New World; that title has mostly been killing parts acknowledged to have defects--with early EVGA 3090 FTW3 boards being notable examples. Few other models, nor revisions of that model, seem to have unusually high failure rates...QA is not perfect, some parts die, and if New World didn't kill them, something else almost certainly would have.
> 
> Personally, when I'm spending this much on a part, I don't have much tolerance for fundamental design flaws or aggressive cost cutting. I'm already annoyed at the general lack of encapsulated chokes on these parts. Encapsulating them and properly securing them to the board would make them dead silent. Not doing so saves these AIBs what, five dollars? The whine was so bad on my 6800 XT that I filled in the chokes myself, and wasn't able to do a perfect job of it, though it was a major improvement. I can excuse that oversight as it won't actually cause harm to the card. Filtering capacitors are another matter...if it doesn't have enough filtering to survive worst case real-world transients, I will bin for a sample that does at the AIB's expense, or until they cut me off. There is no excuse for part in this class to have crappy electrical design. I don't personally expect this to be an issue with the ASRock OCF as the VRM (aside from not using fully encapsulated chokes) seems to be among, if not, the best available.
> 
> 
> 
> _All that said, you do have a point_. My standards are likely not typical and there is a very real possibility that someone with these parts is more interested in mainstream gaming than making sure the part they are gaming on doesn't have any hidden defects that may never show up in their daily use.
> 
> @Skinnered regarding my prior post, be aware that Night Raid will reach low four figure frame rates and that this could conceivably reveal issues with weak samples.(...)


Your points are well taken...in fact, I usually make the argument myself that I paid a lot of money for cards such as my 6900XT, so I should not be restricted in certain ways, i.e. when it comes to the '2150' VRAM cap on the XTX, which is one of my recurring nags.

It is worth pointing the risks out, though, as a 'public service message' with certain apps such as Night Raid, as there are new users out there who don't have the same understanding.


Spoiler



We all start somewhere, and my first PC (a long time ago) wouldn't start anymore within one hour of unpacking it and after I 'adjusted' all the bios settings to my oc liking...I was a young computer-inexperienced lad then and in those days, there was no 'clear CMOS' or dual bios, just a plug-in IC that had to be removed and re-flashed / exchanged at the shop. How embarrassing...

I have seen posts in the RTX3k threads whereby folks wanted to shunt mod their brand-new card, followed by 'I've never soldered before, what's flux?' Others slam a 1000W XOC vbios on an air-cooled card for their long gaming sessions...even if said XOC bios typically 'only' pulls between 600W and 700W w/o a power limit, that's still a heck of a lot on air.

Also, there are expensive cards out there which do seem to have problems, i.e. with power balancing. The EVGA 3090 FTW comes to mind: even before 'New World', there was already a recall / 'come-RMA-with-no-questions-asked' situation early in 2021. And a friend of mine just told me that his FTW3 seems to have degraded to the point that it won't hold any OC anymore, though the final analysis of that is still pending. FYI, he's very experienced with hardware and does a lot of folding. Another case would be certain RTX 2080 Ti failures which were initially blamed on Micron GDDR6, until NVidia published an explanation re. 'test escapes' which shouldn't have made it to the production line. The way the GPU market still is these days (i.e. replacements), especially at the top end, should also play somewhat of a role when pushing the limits w/ 1000+ fps.

Then, there's buggy software...I've been in the software business for a long time now, and there are multiple examples of either badly-written and/or bloated code, quite apart from 'New World'...for example, when Cyberpunk 2077 came out, I don't know how many folks despaired when others insisted it was their 'oc' not passing stability tests...I can't remember how many patches and hot-fixes it took for C2077.

The initial release of Win 11 with its Ryzen L3 cache issue is another one, and the list is long. I also still wonder, as discussed on occasion before with @CS9K , about TimeSpy GT2...granted, if it passes at stock or a lower 'oc' but not a higher 'oc', then that could be the end of that story. Still, no matter whether we're talking about older or newer AMD or NVidia cards, every time GT2 does crash and I immediately run GT2 again as a custom test with the same settings and w/o cool-down, it passes - very interesting.

With my latest builds all geared towards 4K, Time Spy has less meaning, and I tend to look to Time Spy Extreme and Superposition 4K and especially 8K...even then, those only use a fraction of the 16 GB of the 6900XT or the 24 GB of the 3090. I actually use FS2020 at 4K (or more) and Ultra settings w/HDR...that produces some of the highest VRAM 'allotment and usage' of any non-work apps I have.


----------



## LtMatt

ZealotKi11er said:


> With Radeon Settings you can either overclock or undervolt. You can not do both. I posted a while back a table for 6800XT and how it works.


Can you link to that post?


----------



## CS9K

LtMatt said:


> I think you misunderstood what I was saying. The undervolt offset does not change when i use RS slider, only when the core clock speed is reduced to a certain point does it reduce voltage/adjust the offset. I track voltage changes via HWINFO64 overlay so I know what voltage is under different types of load etc.
> 
> For example, I can run a 2700Mhz core clock in games with 1125mv set in MPT rather than 1.2. This gives an in-game voltage of 1.060-1.080v or thereabouts. Here the undervolt/offset works, as I am not using RS and used MPT to adjust it.
> 
> If i leave MPT at default 1.2v and try to use the voltage slider in RS, it does not reduce the voltage/adjust the offset if my core clock is set too high. Even with the voltage slider at 800, or 1000, etc. It just does nothing once my core clock goes past a certain point. Obviously it's tied to the frequency/voltage curve somehow. That is what I am trying to say. It was the same on both my Toxics, and the Merc (regular XTX) I had before that.
> 
> I guess most people don't run 2700Mhz and higher core clocks and undervolt so perhaps not very commonly seen or tried. Either that, or people don't verify with additional software to track what the actual voltage is, and just assume the slider is working.





ZealotKi11er said:


> With Radeon Settings you can either overclock or undervolt. You can not do both. I posted a while back a table for 6800XT and how it works.


You can absolutely do both. What both of y'all have observed is expected behavior from the firmware/drivers. One _can_, in fact, both have an undervolt applied to the curve at partial loads _and_ an overclock at the top end; you just can't change the voltage at the top end once the firmware/driver's LLC kicks in.

This curve is from an RX 480, I believe, so ignore the absolute values of the voltages and core clocks; just know that the shape and behavior of the curve is accurate for RDNA2 GPUs.

I borrowed this image from Igor's Lab.
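The clamped-offset behavior described above (offset honored over most of the curve, ignored once the firmware pins the top of the curve) can be illustrated with a toy model. The curve shape, the knee frequency, and every number here are made up for illustration; this is not real RDNA2 telemetry or any driver API.

```python
# Toy model: an undervolt offset applies along the V/F curve, but above a
# "clamp" frequency the firmware requests its own maximum voltage and the
# slider does nothing. All values are illustrative, not real hardware data.

def requested_voltage(freq_mhz, offset_mv=0, clamp_freq=2600, vmax_mv=1200):
    """Very rough stock V/F curve: linear ramp, capped at vmax_mv."""
    base = min(vmax_mv, 700 + (freq_mhz - 500) * 0.24)
    if freq_mhz >= clamp_freq:
        return vmax_mv                     # top of curve: offset ignored
    return max(700, base + offset_mv)      # offset works at partial load

# With a -75mv offset, mid-curve clocks run lower voltage...
mid = requested_voltage(2000, offset_mv=-75)   # below stock at 2000 MHz
# ...but past the clamp point the slider "does nothing":
top = requested_voltage(2700, offset_mv=-75)   # same as with no offset
```

This matches the reports in the thread: the Wattman slider appears dead above ~2600-2700Mhz, while an MPT voltage change (which moves `vmax_mv` itself in this analogy) still takes effect.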


----------



## ZealotKi11er

CS9K said:


> You can absolutely do both. What both of yall have observed is expected behavior from the firmware/drivers. One _can_ in fact, both have an undervolt to the curve at partial loads, _and_ overclock the top end, you just can't change the voltage at the top end once the firmware/driver's LLC kicks in.
> 
> This curve is from an RX 480 I believe, so ignore the absolute values of the voltage and core clocks, just know that the shape and behavior of the curve is accurate for RDNA2 GPU's
> 
> 
> View attachment 2537340
> 
> I borrowed this image from Igor's Lab.


You can, but it's different for every card, and it gets complicated if you are power limited.


----------



## Skinnered

Godhand007 said:


> I have seen lower stable memory timings on my XTXH cards compared to reference 6900XTs. I and many others can run 2150 (Fast Timings) without any issue on reference 6900XTs.
> On a side note, at which particular areas/locations do you find BF 2042 crashing? It's a bad game but I might just buy it for OC testing purposes.


Saw the same thing; the reference I got reached higher memory frequencies, ~2150, with ease.

BF2042 just freezes when loading the game menu already with too high an OC.


----------



## Skinnered

Blameless said:


> Leakier is expected, but leakier _plus_ lower clocking is certainly undesirable.
> 
> 
> 
> 2124 fast should almost always be faster than anything in the 2130s, due to where the timing straps are. Pretty small difference though.
> 
> 
> 
> Out of curiosity, does 3DMark Night Raid pass test 2 at 2820?


Hmm, I will test again, but I thought I saw the highest performance with 2132. Tested this in-game.

Have not tested Night Raid yet.
But will take a look.


----------



## LtMatt

CS9K said:


> You can absolutely do both. What both of yall have observed is expected behavior from the firmware/drivers. One _can_ in fact, both have an undervolt to the curve at partial loads, _and_ overclock the top end, you just can't change the voltage at the top end once the firmware/driver's LLC kicks in.
> 
> This curve is from an RX 480 I believe, so ignore the absolute values of the voltage and core clocks, just know that the shape and behavior of the curve is accurate for RDNA2 GPU's
> 
> 
> View attachment 2537340
> 
> I borrowed this image from Igor's Lab.


Thanks for providing a graph, very helpful. This would explain what I saw over a few different 6900 XTs.


----------



## bloot

24 Gbps Samsung GDDR6? That would be great for RDNA 3.


----------



## tolis626

bloot said:


> 24 Gbps Samsung GDDR6 that would be great for RDNA 3


Well, that's very interesting. We're talking about a 50% increase in bandwidth given the same bus width. This would most certainly alleviate the bandwidth constraints of RDNA2. Couple that with a probably much bigger Infinity Cache, and RDNA3 could be the high-resolution beast that RDNA2 just barely couldn't be. And that's before we account for any increases in IPC, clocks, hardware density... Interesting indeed!
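The 50% figure checks out with simple arithmetic. GDDR6 bandwidth is bus width times per-pin rate divided by 8; the 6900 XT's 256-bit bus at 16 Gbps is the baseline assumed here:

```python
def bandwidth_gbs(bus_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s: bus width (bits) * per-pin rate / 8."""
    return bus_bits * gbps_per_pin / 8

current = bandwidth_gbs(256, 16)   # 6900 XT today: 512 GB/s
future = bandwidth_gbs(256, 24)    # same bus at 24 Gbps: 768 GB/s
print(future / current)            # 1.5, i.e. the 50% uplift
```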


----------



## bunnybutt

Hi Guys, glad to be joining you all.

I have a Toxic 6900xt Extreme Edition.

I have no idea where to start with this thing. Any suggestions what to expect out of this thing/how far it can go?

Currently (for gaming) I seem to be running a boost of 2830 MHz at 1150 mV. Memory is at 2150 MHz with fast timings. The power limit is all the way up. Using Radeon Software. (Using Metro Exodus as my main game to test stability, and Timespy as well, but Timespy stability doesn't match actual gaming.)

(Also, they try to sell these as a run of only 500 made... is that true? I seem to be seeing a lot of them popping up for sale. Does anyone know the actual MSRP? Is it closer to $1300?)


----------



## Blameless

bloot said:


> 24 Gbps Samsung GDDR6 that would be great for RDNA 3


I honestly wasn't expecting more than 18-20Gbps out of GDDR6, given that NVIDIA and Micron had to come up with their own GDDR6X standard to reach 19Gbps+, so the announcement of 24Gbps entering sampling came as a surprise.

The timing would put mass production on track for inclusion in RDNA3.


----------



## J7SC

I know it is a cost issue, but HBM3 would be nice for select RDNA3 'prosumer' cards. Also, I wonder if the Samsung announcement above is in fact the 'GDDR6+' referenced below, and when GDDR7 will come out.


----------



## LtMatt

bunnybutt said:


> Hi Guys, glad to be joining you all.
> 
> I have a Toxic 6900xt Extreme Edition.
> 
> I have no idea where to start with this thing. Any suggestions what to expect out of this thing/how far it can go?
> 
> Currently (for gaming) I seem to be running a boost of 2830mhz at 1150v. Memory is Fast Timing 2150mhz. Power % is all the way up. Using Radeon Software. (using Metro Exodus as my main game to test stability. Also using Timespy as well, but timespy reliability does not match actual gaming.)
> 
> (Also, They try to sell these things as only 500 made... Is that True? I seem to be seeing a lot of them popping up for sale. Does anyone know the actual MSRP? Is it closer to $1300?)


Welcome to the Toxic Extreme club!

What is your stock max frequency core clock as listed in RS? And what clock speed do you have it set to in order to achieve 2830 MHz? Is that the actual core clock under gaming load?

I'm not sure there were only 500 made; that sounds false to me. Where did you hear that?

Use the middle (quiet) BIOS if you want to lower fan speed below 38%. If you do that, you'll want to use MPT to raise the power limit; 400W is a good limit for the stock cooler under gaming load.

Check the Timespy graphics score to see whether your memory overclock is actually adding points or not.


----------



## jonRock1992

Skinnered said:


> Saw the same thing, the reference I got reached higher mem. frequencies ~2150 with ease.
> 
> 
> 
> BF2042 just freezes when loading the gamemenu allready with to high oc.


There's something wrong with the way BF2042 is coded; it's way too sensitive to clock changes. I was also getting DXGI crashes during loading screens, and the fix was to lower the core clock well below where it sits in most other games.


----------



## bunnybutt

LtMatt said:


> Welcome to the Toxic Extreme club!
> 
> What is your stock max frequency core clock as listed in RS? And what clock speed do you have it set to, to achieve 2830mhz? S that actual core clock under gaming load?
> 
> Im not sure if there were only 500 made, sounds false to me. Where did you hear that?
> 
> Use the middle quiet BIOS if you want to lower fan speed below 38%. If you do that you’ll want to use MPT to up the power limit. 400W is a good limit for the stock cooler under gaming load.
> 
> Check Timespy graphics score to see if your memory overclocking is actually adding more points or not.



The stock frequencies are supposed to be 2350 "gaming", 2525 "boost", and 2730 "toxic boost". Sometimes when I load up RS after startup, it says the clock is 2660?

Reviewers keep saying only a limited 250-500 were made, and I've seen the same on forums; I'm starting to feel like that's a "technically there's 500 because of the shortage" situation. I paid $2000 shipped for mine.

Also, I have definitely seen a Timespy score change if my memory goes over 1250 (I have it set at 1260 in RS, and that seems to hit 1248-1250).


----------



## LtMatt

bunnybutt said:


> the Stock frequency is supposed to be 2350 "gaming", 2525 "boost", and 2730 for "toxic boost" Sometimes when I load up RS after startup, it says the clock is "2660" ?
> 
> Reviewers keep saying only limited 250-500 were made, and ive seen the same from forums and such... im starting to feel like thats a "technically theres 500 cause of the shortage". I paid $2000 shipped for mine.
> 
> Also, I have def seen a timespy score change if my memory goes over 1250 (I have it set at 1260) in RS and that seems to hit 1248-1250.


Can you link to said reviewer? First I've heard of it; they seem too readily available for only 500 to have been made worldwide.

That's a high stock core clock. Could you clarify what you meant when you said it runs at 2830? Is that what you set in RS, or the actual clock speed in game?


----------



## bunnybutt

Of course, when I try to find all the old links and forum posts I was reading, I can't find them... I was honestly led to believe that these were actual collectors' items with limited production, or I wouldn't have jumped at it so fast.
I freaked out and bought one because of that reasoning, but it's not looking that way anymore... considering returning it and buying again at a cheaper price, since they keep playing games.

This is what I own:








TOXIC AMD Radeon™ RX 6900 XT Extreme Edition


TOXIC AIO Cooling Technology - One Click TOXIC BOOST Up to 2730 MHz




www.sapphiretech.com





Yes, I currently have it set at 2830, which lets it boost to 2750-2790, up to an actual 2830 MHz.

They advertise a 2730 boost, but even with a cold GPU running a first-time stress test at factory settings, the card would only hit 2670 MHz. It won't hit the stated 2730 MHz or higher unless I turn the clock slider up to 2830 MHz.

I emailed Sapphire to ask about it, and that's when I realized how heavily they lean on the phrase "up to" when advertising the boost clock.

Just curious whether I could cause harm by turning the clock up so high. I just came from Nvidia; the last AMD/ATi card I had was when I was 12.


----------



## LtMatt

bunnybutt said:


> Of course when I try to find all the old links I was reading, or forum posts... I cant find them... I was honestly led to believe that these things were actual "collectors Items" with limited production, or I wouldn't have jumped at it so fast.
> I freaked out and bought one cause of this reasoning, but its not looking like that anymore.... considering getting a return and buying again to get a cheaper price since they keep playing games.
> 
> This is what I own:
> 
> 
> 
> 
> 
> 
> 
> 
> TOXIC AMD Radeon™ RX 6900 XT Extreme Edition
> 
> 
> TOXIC AIO Cooling Technology - One Click TOXIC BOOST Up to 2730 MHz
> 
> 
> 
> 
> www.sapphiretech.com
> 
> 
> 
> 
> 
> Yes, I currently have it set at 2830, and that allows it to boost to 2750-2790, up to actual 2830mhz.
> 
> They advertise 2730 boost, but even with a COLD GPU running a first time stress test, in factory settings the card would only hit 2670mhz. It wont hit the stated "2730mhz" or higher, unless I turn up the clock speed to "2830"mhz
> 
> I emailed Sapphire to ask about it, and thats when I realized they are heavily using the advertising phrase "up to" to discuss the Boost clock speed.
> 
> Just curious if I could cause harm by turning up the clock so high. I just came from Nvidia. the last AMD/ATi card I had was when I was 12


That’s perfectly fine. Did you set the power limit to +15%? If not you are probably bouncing off the power limit and that would explain not hitting the advertised Toxic Boost clock.


----------



## bunnybutt

I'll do more testing, but maybe you're right. All I did was hit the "Toxic Boost" button, and the fans didn't even change pitch unless I modified things in RS.

So maybe I need to put it back to stock, raise the power limit to +15%, and see what it does.


----------



## bunnybutt

Same thing.

* I loaded completely default settings, and even with the full +15% power limit I still averaged a 2660 MHz clock. Timespy score: 20820.
* I lowered the voltage from 1200 to 1150 and averaged 2670 MHz. Timespy score: 21083.
* Then, adding fast timings and setting memory to 2150 MHz (actual was 2138-2140), although my clock was now hitting ~2650 (tests were back to back), the Timespy score was 21289.
* Finally, setting the clock to 2830 resulted in wild swings from 25xx-2777 MHz, but a Timespy score of 21413.

The highest temps I saw in 3DMark were 55-57C.

The only things not adjusted in this small test: I left the minimum clock at 1000 and the fans at default. I know there's much more nuance to this, like long-term gaming reliability... That's why I was kind of expecting a smooth 2730 MHz as advertised when I bought this card. As you can see from the above, I'm having to do quite a bit of tinkering to match what was advertised on the side of the box. That's not exactly what I was hoping for, but I know it's not terrible. I think it's a pretty amazing card.
That's my opinion, and I'm always open to being educated otherwise, if warranted.


I think my question is: since the technology is so new, how do you tell whether you won the silicon lottery or not?


----------



## mrjayviper

Does anyone have PCB info (power phases/pics/etc.) for the AMD 6900XT LC? Just trying to find more info on the card and wondering what 3rd-party water blocks are available. Thank you.


----------



## LtMatt

mrjayviper said:


> PCB info (power phases/pics/etc) for AMD 6900XT LC? just trying to find more info on the card and wondering what 3rd party water block are available. Thank you
> 
> View attachment 2537586


I think it's the same as the stock 6900 XT MBA PCB.


----------



## Blameless

bunnybutt said:


> I think my question is, since the technology is so new.... How do you tell if you have good silicon lottery or not?


It's not that new anymore. We have a year of end-user documentation and precedent to compare and contrast with. The trick is finding similar tests done in similar conditions, to minimize the number of variables involved.



LtMatt said:


> I think it's the same as the stock 6900 XT MBA PCB.


Should be.


----------



## J7SC

...managed to break 18,000 in Superposition 4K and also added a good run for Port Royal. The Port Royal result would have been tops for a 6900XT / 3950X combo at 3DM, noting that this is actually the daily home-office grinder with 24/7 settings and 24 C ambient for this regular XTX, albeit with very good water-cooling. These are without 'TempDep vmin' voltage, so 1.175v max.

Weirdly, before I updated 3DMark per their prompts today, I could upload results to my account there, but since updating I can't; it says something about a 'professional license' being required for uploading results?! I hope it's not another corporate money-grab, as I have paid for the advanced edition plus extra tests. It would be annoying if they disabled that feature for users who could do it before.


----------



## Enzarch

mrjayviper said:


> PCB info (power phases/pics/etc) for AMD 6900XT LC? just trying to find more info on the card and wondering what 3rd party water block are available. Thank you





LtMatt said:


> I think it's the same as the stock 6900 XT MBA PCB.


Yes, I can confirm this; I have a reference EK block on my 6900 XT LC.
The only PCB differences I noticed were an additional header for the cooler, the RAM chips of course, and different chokes. (It's possible it has better power stages, but I forgot to look in my rush.)


----------



## J7SC

Enzarch said:


> Yes, I can confirm this, I have a reference EK block on my 6900XTLC
> The only PCB differences I noticed were; an additional header for the cooler, RAM chips of course, and different chokes (Its possible it has better power stages, but I forgot to look in my rush)


That EK block makes a lot of sense for the 6900 XT LC. Why they gave such a rare and beautiful card such a puny rad is beyond me... a 280mm or 360mm would have been more appropriate, as offered on other XTXH AIOs.


----------



## mrjayviper

More feedback please:

Deciding between the ASRock Formula OC and the Strix 6900 XT. Looking at various reviews (mainly TechPowerUp and one der8auer Strix 6900 review), I feel the ASRock has a lot more potential.

Same price for both, which translates to $1625 shipped (local currency converted to USD).


----------



## LtMatt

J7SC said:


> That EK block makes a lot of sense for the 6900 XTLC. Why they gave such a rare and beautiful card such a puny rad is beyond me...a 280mm or 360mm would have been more appropriate as offered on other XTXH AIO.


Easier to fit in a case for OEMs, but I would have preferred a larger rad too if I was able to buy one.


----------



## ZealotKi11er

LtMatt said:


> Easier to fit in a case for OEMs, but I would have preferred a larger rad too if I was able to buy one.


The rad can handle the card at stock limits, 289W + 15%. At 100% fan it can still be used for 400-450W quick benchmark runs. What is crazy is that the rad is smaller than on any previous AIO card from AMD; my Fury X, which pulls more power, runs much cooler because of the bigger rad.


----------



## J7SC

mrjayviper said:


> more feedback please:
> 
> deciding between Asrock formula OC vs strix 6900xt. Looking at various reviews (mainly techpowerup and 1 derbauer strix 6900 review), I feel the asrock has a lot more potential.
> 
> same price for both which translate to $1625 shipped (local currency conversion to USD)


FYI, if you do go for the Strix, there are two identical-looking models with near-identical model names; the XTXH one, I believe, has 'TOP' as part of the name.

In terms of choice between the Strix and the ASRock, I figure that at that level, it comes down more to the chip quality in a given sample than PCB differences.


----------



## mrjayviper

J7SC said:


> FYI, if you do go for the Strix, there are two identical-looking models with near-identical model names, but the XTXH I believe has 'TOP' as part of the name.
> 
> In terms of choice between the Strix and the ASRock, I figure that at that level, it comes down more to the chip quality in a given sample than PCB differences.


Thanks for the tip.


I believe the Strix I'm looking at is definitely the XTXH one (boost clock is 2525).
According to the TechPowerUp review, the ASRock is also an XTXH.


----------



## Balsagna

So is there a clear winner for the fastest XTXH non-reference card, or at least a general idea that card X seems to be the fastest of the bunch?


----------



## tolis626

I'm trying out the mess that is BF 2042 now that it has a free weekend (I refuse to buy it in this state, even though I'm a BF fan), and boy oh boy does it not like overclocks. Although, regardless of overclocking, the thing crashed on me two times in the menu. Then I got to play about 15 minutes before it crashed again. What a fustercluck.


----------



## Godhand007

Question for everyone: I have received TDR errors twice (in two weeks) while watching videos in the browser with a game running in the background. This never happens during stress testing. Is this due to clock instability or something else? And if it is clock instability, how can I reproduce it consistently?


----------



## jonRock1992

tolis626 said:


> I'm trying out the mess that is BF 2042 now that it has a free weekend (I refuse to buy it in this state, even though I'm a BF fan), and boy oh boy does it not like overclocks. Although, regardless of overclocking, the thing crashed on me two times in the menu. Then I got to play about 15 minutes before it crashed again. What a fustercluck.


I had a similar experience in BF2042. I could only do 2700 min / 2800 max at 1287 mV without crashing. It's way too sensitive to overclocks; I just don't think it's programmed right.


----------



## Memmento Mori

Hi guys,
I have an XFX Radeon RX 6900 XT 16GB Speedster MERC 319 Black (RX-69XTACBD9).

Has anyone tried changing the thermal pads and repasting? Is there any benefit? Worth the trouble?

I'm fine with the temps, and I also have it downclocked, as I found out I lose just 10 FPS but the power draw is 100-130W lower.

I'd be grateful for any experience shared.


----------



## Godhand007

tolis626 said:


> I'm trying out the mess that is BF 2042 now that it has a free weekend (I refuse to buy it in this state, even though I'm a BF fan), and boy oh boy does it not like overclocks. Although, regardless of overclocking, the thing crashed on me two times in the menu. Then I got to play about 15 minutes before it crashed again. What a fustercluck.


Surprisingly, clocks are pretty stable for me: 2730 MHz without an issue on my reference card. The game is a CPU hog, though, and ugly in many places.


----------



## ZealotKi11er

jonRock1992 said:


> I had a similar experience in BF2042. I could only do 2700min/2800max at 1287mV without crashing. It's way too sensitive to overclocks. I just don't think it's programmed right.


My XTXH does 2800/2700 @ 1.2v in BF2042.


----------



## 99belle99

Godhand007 said:


> Surprisingly clocks are pretty stable for me. 2730 Mhz without an issue on my reference card. The game is a CPU hog though and ugly in many places.


You got a good card; my reference card is bad and crashes at those clocks in TS. Maybe it needs more voltage, but I don't want to ruin the card. I did try 1250 mV and it still crashed; maybe it needs even more, but I'm not willing to try.


----------



## Godhand007

99belle99 said:


> You got a good card my reference is bad. Crashes at those clocks in TS. Maybe I need more voltage but I do not want to ruin the card so didn't try it. I did try 1250mV and it still crashed. Maybe it needs more but not willing to try.


Not sure if it's a good card or just the absurd voltage (1281 mV) working in my favor. It might also be my relatively weak CPU not letting my 6900 XT go full throttle.


----------



## mrjayviper

More feedback/info please 😁

So I found a Gigabyte Aorus Waterforce model that's cheaper than the ASRock and Strix LC.

Looking for information on (but not limited to) power phases, quality of the block, etc.

I intend to watercool anyway, so getting this model would be advantageous/cheaper. Thanks.


----------



## J7SC

mrjayviper said:


> More feedback/info please 😁
> 
> So I found a gigabyte aorus waterforce model that's cheaper the the ASRock and strix LC.
> 
> Looking for information on (but not limited to) power phases, quality of the block and etc.
> 
> I intend to watercool anyway so getting this model would be advantageous/cheaper. Thanks


...I can add some partial feedback, as I run the Gigabyte 6900XT Gaming OC, which has the same 3x8-pin, dual-BIOS PCB as the XTXH Waterforce, according to, among others, Buildzoid. It comes with a regular XTX chip though, which can still exceed 2800 MHz in some of my apps, btw. I bought this card primarily for work but am still having some fun with it via water-cooling and an ample MPT power limit. When Buildzoid looked at this PCB, his major gripe was the (in his opinion) excessive input filtering, which I don't mind at all.

Others here with the XTXH Waterforce version have reported both very good and only average OC potential. As I stated before, it seems to really come down to the specific chip sample you end up with.

PCB quality-wise, I give it two thumbs up. Also, I've had two 2080 Ti Gigabyte Aorus Waterforce Extremes since December '18 with no issues in water-block quality, performance, or anything else. I don't know, though, whether Gigabyte uses the same OEM for the 6900XT Waterforce block.


Spoiler


----------



## ZealotKi11er

Godhand007 said:


> Not sure if it is good or just the absurd voltage (1281mv) working in my favor. It might also be my relatively weak CPU that does not allow my 6900XT to go full throttle.


What resolution do you play?


----------



## Godhand007

ZealotKi11er said:


> What resolution do you play?


1440p


----------



## ptt1982

Comment deleted, found a solution!  Enjoy your day!


----------



## Balsagna

Hey all.

Looking at getting a 6900 XT reference card or spending $400 more on an XTXH card, preferably the Red Devil Ultimate or a water-cooled one.
What are the reference cards clocking to under water vs. on air?

I'm leaning towards an XTXH card, but if the reference cards can get close, I can get the block and other parts and still have money to spare.


----------



## jonRock1992

Balsagna said:


> Hey all.
> 
> looking at getting a 6900 xt reference card or spending $400 more for an xtxh card, preferably the red devil ultimate, or a water cooled one.
> What are the reference ones clocking too if you’re under water, vs air cooled?
> 
> I’m leaning towards an xtxh card, but if the reference cards can get close, I can literally get the block and other parts and still spare some money.


From personal experience, and from the horror stories of others, I'd say steer clear of the PowerColor XTXH GPUs; there are better GPUs for the price. My Red Devil was basically unusable with the stock cooler: extremely loud and throttling all the time. To be fair, the thermal paste application on the heatsink was decent; it's just the cooler design that's bad. It looks like it would be good, but it's not. My Red Devil only started to shine with a water block and MPT; otherwise it's not that great.


----------



## Balsagna

jonRock1992 said:


> From personal experience, and from the horror stories of others, I'd say steer clear of the Powercolor XTXH gpu's. There are better GPU's for the price. My Red Devil was basically unusable with the stock cooler. It was extremely loud and throttled all the time. And the thermal paste application on the heatsink was decent. It's just the cooler design that is bad. It looks like it would be good, but it's not. My Red Devil only started to shine with a water block and MPT, otherwise it's not that great.


Do you think a reference card under water would be able to hit what the XTXH cards do?


----------



## jonRock1992

Balsagna said:


> you think a reference card under water would be able to hit what xtxh cards do?


I wouldn't count on it. But it's possible. I'd go with some XTXH card.


----------



## Balsagna

I decided on the reference card. I don't really run benchmarks much, and the cost difference lets me get a water block and fittings and still have money left over.

From my limited research, a water-cooled reference card can be pretty close to an air-cooled XTXH in the worst case.


----------



## Memmento Mori

If you're not a score hunter or FPS chaser, then you made a wise decision... I'd rather invest the leftover money in better airflow.


----------



## CS9K

Memmento Mori said:


> If you are not an score hunter or FPS hunter than you made a wise decision... rather invest the left over in a better airflow


This is about how I feel. If you were able to get a reference RX 6900 XT at a sane cost, @Balsagna , you won't be disappointed in the daily-driver performance, especially on water.


----------



## J7SC

Benchmarking can be a double-edged sword sometimes. Besides, I've posted a custom-PCB, plain-vanilla 6900XT exceeding 2800 MHz effective clocks; certainly not in everything, but it usually gets up there... I put the money I saved into water-cooling the card...

Trying out my new GoPro Hero 10; no idea how to work focus zones etc. yet.


----------



## gecko991

Noice.


----------



## J7SC

Thanks, gents ...don't want to flood the 6900XT zone with more pics, so spoiler for some daytime / glass-panels-on views. I had to discard half of them because they were either full of reflections or the USB cable was accidentally dangling in front of the shot. I'll learn some day.



Spoiler


----------



## mrjayviper

J7SC said:


> Spoiler
> 
> 
> 
> 
> View attachment 2538130


What's that finned heatsink? Thanks a lot for the reply


----------



## tolis626

I have a sort of stupid question for you guys. From your experience, what are the chances that my card is so much of a dud that it can't even run 2600 MHz stable in games (2550-2580 MHz actual, at 1.175 V or 1.15 V; for memory I've tried 2100 all the way to 2150 MHz with no difference), versus the chance that something else is causing my crashes?

For some details: I've been playing games for months with the core at 2600 MHz at 1.15 V and memory at 2150 MHz, and only changed the PL to 330W in MPT (+0% in the driver). These settings have seen a lot of hours with no problems. I did try higher overclocks with no success, but I always came back to these settings, which were known good (tested rather extensively with games and benchmarks).

Lately, however, I've been having problems. The first obvious culprit is BF2042, but that's obviously the game's fault, at least partially, because most of the time I'll crash in the menu or in loading screens when there's no load on the GPU. Then there's Warzone. Even after the map update, when everyone was having problems, I didn't (at least with my GPU). But these past few days it keeps crashing again and again. Without warning, it will freeze, then crash to desktop with an AMD message saying the driver was reset.

I'd appreciate some insight on what else could be wrong, because I'm pulling my hair out and don't have the luxury of doing that much longer (no male pattern baldness, but I'm losing hair density from excessive stress).

Thanks in advance!


----------



## ZealotKi11er

Simple: your OC was never stable. Remember, AMD advertises the 6900 XT at a 2015 MHz game clock and a 2250 MHz boost clock; you're already at 2600 MHz. That is a big OC. Personally, I would just raise the power limit in MPT to the max your cooling allows and not touch the Radeon settings; let the card boost as high as it can naturally.


----------



## tolis626

ZealotKi11er said:


> Simple. Your OC was never stable. Remember AMD advertises 6900 XT as 2015MHz game clk and 2250MHz boost clock. You are already 2600MHz. That is a big OC. Personally what I would do is just increase MPT to the max your cooling allows and dont touch radeon settings. Let the card boost as high as it can natully.


DON'T SAY THAT! YOU'RE HURTING MY FEELINGS!

I know that could very well be the case, but I can't figure out why it's happening now. I mean, I know for a fact that 2650 MHz is unstable in some games (not all), but I used to run it in Warzone to experiment and it would only occasionally crash. 2600 MHz never crashed, and now it's crashing constantly. Hell, even 2700 MHz works, but again, it will crash eventually (although there's stuff like Witcher 3 and Unigine Superposition that I can run at up to that speed, mostly without problems). It's the sudden change from rock-solid stable to crashing every few matches that's strange to me. Even if it's an unstable overclock, why was it working fine all this time across multiple games and benchmarks, but now it isn't?

I swear, this particular card is infuriating. I regret not shelling out the extra 250€ for the Toxic Extreme.


----------



## ZealotKi11er

Just use TimeSpy for stability. Whatever clock you can pass the stress test at should be your maximum. You can't just say one game does this and another does that. 100 MHz is only going to give you 2-3% more performance, which is not worth the instability.
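As a rough sanity check on that 2-3% figure: GPU-limited games rarely scale 1:1 with core clock, and a scaling factor around 0.5-0.7 is a common rule of thumb (an assumption here, not a measurement):

```python
def perf_gain_pct(base_mhz, extra_mhz, scaling=0.6):
    """Back-of-the-envelope FPS gain from a clock bump.

    `scaling` is an assumed sublinear clock-to-FPS factor; real values
    vary per game and per resolution.
    """
    return 100.0 * (extra_mhz / base_mhz) * scaling

# 100 MHz on top of 2600 MHz: a 3.8% clock bump, ~2.3% expected FPS.
print(perf_gain_pct(2600, 100))
```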


----------



## tolis626

ZealotKi11er said:


> Just use TimeSpy for stability. What ever clk you can pass the stress test should be your maximum. You can just say one game does this and another does that. 100MHz is only going to give you 2-3% more perf which is not worth the stress and stability.


Well, that's the thing. Given enough PL, 2600 MHz is where I can reliably pass TimeSpy (when the power limit isn't high enough it sometimes crashes, possibly due to wild transients, even while clocks are lower). Although, to be fair, I can't stress test with it because, with the raised power limits it needs, my card eventually gets too hot. But TS is how I settled on 2600 MHz: 2625 MHz passes sometimes, and 2650 MHz almost never makes it through the whole thing (mostly GT2). Also, my card doesn't seem to care about memory; I dropped from 2150 MHz to 2100 MHz and it didn't change a thing for stability. If anything, it seems to crash more frequently at 2100 MHz.

I suspect something else is going on. Maybe it has to do with my CPU? Ryzens are weird sometimes. I'll try messing with it a bit and see how that goes. But again, I haven't changed anything CPU-related for months; no new BIOS, nothing.


----------



## CS9K

ZealotKi11er said:


> Just use TimeSpy for stability. What ever clk you can pass the stress test should be your maximum. You can just say one game does this and another does that. 100MHz is only going to give you 2-3% more perf which is not worth the stress and stability.





tolis626 said:


> Well, that's the thing. *Given enough PL, 2600MHz is where I can reliably pass TimeSpy* (when power isn't high enough it sometimes crashes possibly due to wild transients, while clocks are lower).* Although, to be fair, I can't stress test with it because with the raised power limits it needs, my card eventually gets too hot.* But TS is how I settled on 2600MHz. 2625MHz passes sometimes and 2650MHz almost never makes it through the whole thing (mostly GT2). Also, my card doesn't seem to care about memory. I dropped from 2150MHz to 2100MHz and it didn't change a thing for stability. If anything, it seems to be crashing more frequently with 2100MHz.
> 
> I'm suspecting there's something else going on. Maybe it has to do with my CPU? Sometimes Ryzens are weird. I'll try messing with it a bit and see how that goes. But again, I haven't changed anything CPU related for months, no new BIOS, nothing.


You answered your own question here. You _can't_ pass Time Spy because you run out of thermal headroom. Your card isn't actually staying clocked-up to the MHz speed you set because of thermals.

Set a lower clock speed, raise the setting on your voltage slider to the max (which is where it should be anyway when testing max-clock-speed overclocks), and find a speed where Time Spy hits neither thermal nor power limits, and can pass both TS 1 and 2, then the 20minute TS1 benchmark. Once you find that, you're done with your max-clock overclock testing. Go game.

If you think the GPU isn't performing like it should for the temperature vs. the wattage being used, search around Google to see what others' temps are. If your temperatures are wildly higher than others', then you may want to consider RMA'ing your GPU.


----------



## Godhand007

tolis626 said:


> I have a sort of stupid question for you guys. From your experience, what are the chances that my card is so much of a dud that it can't even run 2600MHz stable in games (2550-2580MHz actual, 1.175V or 1.15V, for memory I've tried 2100 all the way to 2150MHz with no difference made) vs the chance that something else is causing my crashes? For some details, I've been playing games for months with 2600MHz core at 1.15V and memory at 2150MHz, and only changed PL to 330W in MPT (+0% in the driver). These settings have seen a lot of hours with no problems. I did try higher overclocks with no success, but I always came back to these settings that were known to be good (tested rather extensively with games and benchmarks). However, lately I've been having problems. The first obvious culprit is BF2042, but that's obviously the game's fault, at least partially, because most of the time I'll crash while in the menu or in loading screens when there's no load on the GPU. Then there's Warzone. Even after the map update, when everyone was having problems, I didn't (at least with my GPU). But these past few days it keeps crashing again and again and again. Without warning, it will freeze then crash to desktop with an AMD driver message saying that the driver was reset. I'd appreciate some insight on what else could be potentially wrong, because I'm pulling my hair out and I don't have the luxury to keep doing that much longer (no male pattern baldness, but I'm losing hair density because of excessive stress  ).
> 
> Thanks in advance!


Are you not at default voltages? I would first try things at max default voltages. Also, one contrarian point: thermals should not cause a freeze/TDR. When thermals get too high, the card just shuts itself down; at least that's what my experience has been. I have received TDRs well below the thermal threshold of the GPU, like when running full HD playback in a browser with _Hardware Acceleration_ enabled, though to be fair I was at 2750MHz (actual clock).


----------



## tolis626

CS9K said:


> You answered your own question here. You _can't_ pass Time Spy because you run out of thermal headroom. Your card isn't actually staying clocked-up to the MHz speed you set because of thermals.
> 
> Set a lower clock speed, raise the setting on your voltage slider to the max (which is where it should be anyway when testing max-clock-speed overclocks), and find a speed where Time Spy hits neither thermal nor power limits, and can pass both TS 1 and 2, then the 20minute TS1 benchmark. Once you find that, you're done with your max-clock overclock testing. Go game.
> 
> If you think the GPU isn't performing like it should with the temperature vs the wattage being used, go search around google to see what others' temps are. If your temperatures are wildly higher than others, then you may want to consider RMA'ing your GPU.


I should've worded that properly. It's not failing the test, nor is it dropping clocks due to thermals. If I set the PL high enough, it will maintain proper clocks and complete the test. It's just that I feel uncomfortable running the same benchmark over and over again when the GPU is cooking at over 105C at some points (maybe I should stop being a wuss and let it do its thing). As for my card's temperatures being problematic, I wouldn't say that. Sure, the cooler isn't up to snuff when pushing 350W+ through the damn thing and it will heat soak (my Evolv X isn't doing it any favors either), but it's not that bad to warrant an RMA and it's totally fine at stock PL.

Having said all that, the thing seems to have fixed itself, at least for now. In my frustration I changed a lot of things, including setting the memory back to 2150MHz, pushing my CPU vSOC to 1.15V (it doesn't even need 1.1V, but whatever, it doesn't hurt to try), 1.8V PLL to 1.83V (same story as SoC, I had it run stable at 1.75V) and disabling some services, including the AMD Crash Handler (I came across this suggestion somewhere), and I also messed with the Warzone config files. If it continues to work, one or all of these fixed it and it runs stably and smoothly. My fear is that it's the memory, that setting it higher triggers ECC and it doesn't crash completely. But I should be losing performance, and I'm not. I've tested 2150MHz since the beginning, and it's always a small but measurable difference going there from 2100 or 2120 (always with fast timings). My go-to test for memory is Superposition's 4K and 8K tests, both of which showed a consistent performance increase at 2150MHz mem across multiple runs. If it turns out to be that, I'll be at a complete loss.
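On the ECC worry: GDDR6 error detection/replay doesn't crash, it silently costs performance, so the practical test is exactly what's described here: does the higher memory clock still score higher? A minimal sketch of that comparison (scores would be typed in by hand from Superposition runs; the function name is made up for illustration):

```python
# Detect silent GDDR6 error correction: if raising the memory clock does not
# raise the benchmark score, the extra MHz is being eaten by retries and the
# "overclock" is a net loss.

from statistics import mean

def memory_oc_worthwhile(baseline_scores, oc_scores, min_gain=0.005):
    """True if the OC runs average at least min_gain (0.5%) above baseline."""
    gain = mean(oc_scores) / mean(baseline_scores) - 1
    return gain >= min_gain, gain
```

Feed it a few runs at each setting; if the higher clock doesn't clear the noise floor, drop back.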


Godhand007 said:


> Are you not at default voltages? I would first try things with max default voltages. Also, one contrarian point, thermals should not cause freeze/TDR. In case of thermals getting high, the card just shuts itself down. At least that's what my experience has been. I have received TDR well below the thermal threshold of the GPU, like when running full HD playback on a browser with _Hardware Acceleration _enabled but I was at 2750Mhz (actual clock) to be fair.


Yes, for all of the above I'm talking about the full 1.175V. I had it at 1.15V but honestly saw no appreciable difference and set it back to 1.175V for peace of mind. Also, as I said above, thermals aren't great, but not to the point they're problematic. The card isn't overheating uncontrollably, it's just that with a high PL the cooler gets overwhelmed at some point. I'm sorry if it came across like the thing was thermal throttling or something, it's not, I'm just not comfortable seeing it over 100C, let alone 105C.

With that said, I did go ahead and went a little crazy. I fired up Witcher 3 and went to a spot near Beauclair where I used to stress test my 5700XT (where instabilities would take hours to show up in other games, here they would show up almost immediately). Every setting on its highest option, top of a hill so that it renders a lot of distant stuff, some texture mods, some lighting mods and the thing tortures GPUs like they did something terrible. I was getting angry and needed a sanity check, so I put the GPU at 2700MHz, PL at 375W, memory at 2150MHz FT and let it rip. Constant ~2650MHz actual clocks, card was near the PL but not bouncing off of it constantly, and it was stable at around 100-105C hotspot. It didn't budge, and I left it there quite a while. Witcher 3 might just be easy peasy for the 6900XT and not really stressing it, I dunno, but its behavior here doesn't match what I'm seeing in Warzone and Battlefield.

Whatever, I'm probably getting on your guys' nerves with all my incoherent rambling and failing to accept that my card is, after all, a dud when it comes to overclocking. I posted more in case I was missing something, like, say, someone else having similar issues with low CPU vSOC or something random like that. Seems like that's not the case.

PS : Out of curiosity, what kind of SoC voltages are you guys seeing on your GPUs under full load? I was curious so I set up a graph in HWiNFO64 and it never went above 1.08V, mostly staying around 1.05V.


----------



## Godhand007

tolis626 said:


> I should've worded that properly. It's not failing the test, nor is it dropping clocks due to thermals. If I set the PL high enough, it will maintain proper clocks and complete the test. It's just that I feel uncomfortable running the same benchmark over and over again when the GPU is cooking at over 105C at some points (maybe I should stop being a wuss and let it do its thing). As for my card's temperatures being problematic, I wouldn't say that. Sure, the cooler isn't up to snuff when pushing 350W+ through the damn thing and it will heat soak (my Evolv X isn't doing it any favors either), but it's not that bad to warrant an RMA and it's totally fine at stock PL.
> 
> Having said all that, the thing seems to have fixed itself, at least for now. In my frustration I changed a lot of things, including setting the memory back to 2150MHz, pushing my CPU vSOC to 1.15V (it doesn't even need 1.1V, but whatever, it doesn't hurt to try), 1.8V PLL to 1.83V (same story as SoC, I had it run stable at 1.75V) and disabling some services, including the AMD Crash Handler (I came across this suggestion somewhere), and I also messed with the Warzone config files. If it continues to work, one or all of these fixed it and it runs stably and smoothly. My fear is that it's the memory, that setting it higher triggers ECC and it doesn't crash completely. But I should be losing performance, and I'm not. I've tested 2150MHz since the beginning, and it's always a small but measurable difference going there from 2100 or 2120 (always with fast timings). My go-to test for memory is Superposition's 4K and 8K tests, both of which showed a consistent performance increase at 2150MHz mem across multiple runs. If it turns out to be that, I'll be at a complete loss.
> 
> Yes, for all of the above I'm talking about the full 1.175V. I had it at 1.15V but honestly saw no appreciable difference and set it back to 1.175V for peace of mind. Also, as I said above, thermals aren't great, but not to the point they're problematic. The card isn't overheating uncontrollably, it's just that with a high PL the cooler gets overwhelmed at some point. I'm sorry if it came across like the thing was thermal throttling or something, it's not, I'm just not comfortable seeing it over 100C, let alone 105C.
> 
> With that said, I did go ahead and went a little crazy. I fired up Witcher 3 and went to a spot near Beauclair where I used to stress test my 5700XT (where instabilities would take hours to show up in other games, here they would show up almost immediately). Every setting on its highest option, top of a hill so that it renders a lot of distant stuff, some texture mods, some lighting mods and the thing tortures GPUs like they did something terrible. I was getting angry and needed a sanity check, so I put the GPU at 2700MHz, PL at 375W, memory at 2150MHz FT and let it rip. Constant ~2650MHz actual clocks, card was near the PL but not bouncing off of it constantly, and it was stable at around 100-105C hotspot. It didn't budge, and I left it there quite a while. Witcher 3 might just be easy peasy for the 6900XT and not really stressing it, I dunno, but its behavior here doesn't match what I'm seeing in Warzone and Battlefield.
> 
> Whatever, I'm probably getting on your guys' nerves with all my incoherent rambling and failing to accept that my card is, after all, a dud when it comes to overclocking. I posted more in case I was missing something, like, say, someone else having similar issues with low CPU vSOC or something random like that. Seems like that's not the case.
> 
> PS : Out of curiosity, what kind of SoC voltages are you guys seeing on your GPUs under full load? I was curious so I set up a graph in HWiNFO64 and it never went above 1.08V, mostly staying around 1.05V.


Don't worry about asking questions. We all get to that point of frustration sometimes while tweaking PC components when they don't work as expected, especially if errors are random or not consistently reproducible. It is one of the things that you have to live with as a PC enthusiast. 

What are your fan speeds? I have a reference 6900XT and I can game in BF2042 with around 2750MHz clocks. The secret to these settings is a voltage increase through MPT and running the fans at full speed (I don't mind the noise).

So here is my suggestion: 
1. Reset everything to default, including all the MPT stuff and memory.
2. Put fans to max. Set clocks to 2550MHz actual.
3. Only increase power limit and TDC (proportionally) in MPT. Don't touch the voltages.
4. Run TS GT2 in a loop at least 20 times. 100% fans should keep the hotspot under 105C.
5. Run Port Royal a few times as well.

Come back with the result.
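For step 3, "proportionally" just means keeping the stock TDC-to-power-limit ratio when raising the PL. A quick sketch of that arithmetic (the 255W/230A stock figures are illustrative placeholders, not official reference-card values):

```python
# Scale TDC in proportion to a raised power limit, keeping the same
# amps-per-watt ratio as the stock settings.

def scaled_tdc(stock_pl_w, stock_tdc_a, new_pl_w):
    """Return the TDC (A) that keeps the stock TDC:PL ratio at new_pl_w."""
    ratio = stock_tdc_a / stock_pl_w  # amps per watt at stock
    return round(new_pl_w * ratio)

# With placeholder stock numbers of 255 W / 230 A, raising the PL to 330 W
# suggests a TDC of about 298 A.
print(scaled_tdc(255, 230, 330))  # → 298
```

Read your card's actual stock PL and TDC out of MPT first and plug those in; the point is only that the two move together.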


----------



## ZealotKi11er

I think people put too much emphasis on overclocks for these cards. Just know that cards these days are already pushed pretty hard out of the box in terms of what they can achieve. Power is the most beneficial thing, and then there is voltage. Technically RDNA2 is voltage locked, but you can hack it; I personally do not recommend it. As for overclocking, most people here talk about TS runs for benchmarking. My card managed 2850MHz with water cooling and cold air. Thing is, for BF I had to drop to 2800 (~2730MHz actual clock) to be stable for the 30-40 mins I usually play. This is XTXH, which runs 1.2V and is in theory binned. Trying to achieve the same thing with an air cooler, 1.175V and XTX is asking for disappointment. My 6900 XT (XTX) could only pass TS at 2600MHz (2540MHz actual clock).

Some questions for you:
What is the default clock for your card? You can see this when you try to OC; it's the clock the card is set to by default.


----------



## jonRock1992

tolis626 said:


> DON'T SAY THAT! YOU'RE HURTING MY FEELINGS!
> 
> I know that this could very well be the case, but I can't figure out why it's occurring now. I mean, I know for a fact that 2650MHz is unstable in some games (not all), but I used to run it in Warzone to experiment and it would only occasionally crash. 2600MHz never crashed, and now it is crashing constantly. Hell, even 2700MHz works, but again, it will crash eventually (although there's stuff like Witcher 3 and Unigine Superposition that I can run up to that speed, mostly without problems). It's the sudden change from rock-solid stable to crashing every few matches that's strange to me. Even if it's an unstable overclock, why was it working fine all this time across multiple games and benchmarks but now it isn't?
> 
> I swear, this particular card is infuriating. I regret not shelling out the 250€ extra for the Toxic Extreme.


If you're on 21.12.1, that's probably what's causing you to not get the same clocks stable. This driver reduced stability slightly for me.


----------



## CS9K

jonRock1992 said:


> If you're on 21.12.1, that's probably what's causing you to not get the same clocks stable. This driver reduced stability slightly for me.


This is a known phenomenon, but I'll counter this with "yes, one's peak stable clock speed varies from driver to driver, but performance often does not, for those of us not chasing benchmark numbers", @tolis626.


----------



## bunnybutt

im just tinkering around on the Toxic Extreme.

I just passed with 2750 min / 2850 max. gonna see about more later.


----------



## Thanh Nguyen

Why does my PC shut down when running TimeSpy, guys?


----------



## ZealotKi11er

Thanh Nguyen said:


> Why does my PC shut down when running TimeSpy, guys?


Overheating most likely.


----------



## gtz

Thanh Nguyen said:


> Why does my PC shut down when running TimeSpy, guys?


Like ZealotKi11er stated, or the PSU tripping.


----------



## bunnybutt

hmmm. if thats true, then I _think_ I may have tripped my new seasonic 1000w with the Extreme. hoping I dont need a 1300w, wow....

My temps for anything usually never go higher than 55-60c. I may see bursts on cpu to 77c, but thats rare.


----------






## CS9K

bunnybutt said:


> hmmm. if thats true, then I _think_ I may have tripped my new seasonic 1000w with the Extreme. hoping I dont need a 1300w, wow....
> 
> My temps for anything usually never go higher than 55-60c. I may see bursts on cpu to 77c, but thats rare.
> 
> View attachment 2538942


Double-check your CPU overclocks, too. A seemingly random instant black screen + restart is also a symptom of not enough Vcore.

On the PSU front, you should be fine; my tuned 5800x + RX 6900 XT @ 2700MHz with a 400W limit barely even registered on my Seasonic TX-750, and the TX-850 I have now basically laughs at my pathetic power-draw attempts.

Also, your PC looks _awesome_!


----------



## gtz

bunnybutt said:


> hmmm. if thats true, then I _think_ I may have tripped my new seasonic 1000w with the Extreme. hoping I dont need a 1300w, wow....
> 
> My temps for anything usually never go higher than 55-60c. I may see bursts on cpu to 77c, but thats rare.
> 
> View attachment 2538942


I think you are fine, since you are only running a 5900X.

I used to have a 1000W Seasonic, but saw power consumption at the wall hit 1100W once, so I replaced it with a 1300W unit. But I also have lots of fans and a power-hungry 9980XE; that thing can suck 500W+ alone at 5.0GHz.


----------



## bunnybutt

I'm absolutely going to look into your advice; now that I've changed out the PSU and GPU, maybe I need to make some changes. I have a 5900X.
I DO feel like I've seen a few "instant black screens + restart", or at least that may have been what I saw that one time.
I may have Vcore changed, but I thought I left it on Auto. I know for a fact I have PBO disabled and all of that stuff, plus per-core settings at around 20-15 each.


Also thank you very much for the compliment guys. When you keep to yourself and just focus on building a toy that makes you happy, you forget to show other people.

EDIT: yours is really clean looking for what you have, I know your performance has to be REALLY nice with a custom line. that looks really nice also, looks super light weight.


----------



## Godhand007

CS9K said:


> Double-check your CPU overclocks, too. A seemingly random instant black screen + restart is also a symptom *of not enough Vcore*.
> 
> On PSU, you should be fine, my tuned 5800x + RX 6900 XT @ 2700MHz 400W limit barely even registered on my Seasonic TX-750, and the TX-850 I have now basically laughs at my pathetic power-draw attempts.
> 
> Also, your PC looks _awesome_!


This. I wasted a lot of time finding this out the hard way.


----------



## CS9K

Godhand007 said:


> This. I wasted a lot of time finding this out the hard way.


Same here. I _feel_ you. :<


----------



## J7SC

I tend to go for bigger PSUs (1300W & up, two of those in the dual mobo build below) because:

a.) things can add up quickly; for the 6900XT system (top left below): 220W+ for the CPU package power, 460W nominal (+50W for effective?) for the 6900XT, multiple D5 pumps, 20+ 120mm fans etc etc. All this is before headroom for 'spikes'...

b.) I prefer to be in the efficiency sweet-spot range, rather than in the upper third of the available power...below is from Seasonic's site

c.) A high-capacity quality PSU not run near its limit all the time tends to last a bit longer, IMO

d.) I switch things around a lot, ie. from work-to-play-to-work machines, including those with HEDT CPUs and multiple GPUs...a bigger PSU gives me more options. 
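Points a.) and b.) are simple arithmetic: sum the nominal draws, allow for spikes, then size the unit so the typical load sits in the efficiency sweet spot. A back-of-envelope sketch using the wattages mentioned above (the 1.3x spike factor and 60% sweet-spot load are rules of thumb, not Seasonic's figures):

```python
# Back-of-envelope PSU sizing: nominal draw + transient headroom, then pick a
# capacity that keeps the typical load near the efficiency sweet spot.

def suggest_psu_watts(components_w, spike_factor=1.3, sweet_spot=0.6):
    """components_w: dict of name -> nominal watts. Returns (nominal, peak, capacity)."""
    nominal = sum(components_w.values())
    peak = nominal * spike_factor            # allow for transient spikes
    capacity = max(nominal / sweet_spot, peak)
    return nominal, peak, capacity

# Rough figures for the 6900XT system described above:
system = {"cpu_package": 220, "gpu": 510, "pumps_fans_misc": 100}
nominal, peak, capacity = suggest_psu_watts(system)
print(nominal, round(peak), round(capacity))  # → 830 1079 1383
```

Which lands right in "1300W & up" territory for a system like this, before even counting a second mobo.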




----------



## Trevbev

I scored 38 940 in Fire Strike


AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com




I'm quite pleased with this score. I think the graphics score is higher than some of the HOF entries, but the CPU lets it down.
Also going for the Time Spy graphics score: I scored 18 361 in Time Spy


----------



## jonRock1992

bunnybutt said:


> hmmm. if thats true, then I _think_ I may have tripped my new seasonic 1000w with the Extreme. hoping I dont need a 1300w, wow....
> 
> My temps for anything usually never go higher than 55-60c. I may see bursts on cpu to 77c, but thats rare.
> 
> View attachment 2538942


Definitely check to see if you have any WHEA errors in Event Viewer; those can cause random restarts. If you do, then try lowering the CPU FCLK or increasing the CPU SoC voltage.


----------



## Thanh Nguyen

ZealotKi11er said:


> Overheating most likely.


A 114C hotspot is overheating? I don't know the temp when it shut down.


----------



## lawson67

CS9K said:


> Double-check your CPU overclocks, too. A seemingly random instant black screen + restart is also a symptom of not enough Vcore.
> 
> On PSU, you should be fine, my tuned 5800x + RX 6900 XT @ 2700MHz 400W limit barely even registered on my Seasonic TX-750, and the TX-850 I have now basically laughs at my pathetic power-draw attempts.
> 
> Also, your PC looks _awesome_!


Surely your 750W Seasonic PSU must struggle with your card pulling 400W; my PC, with my card set only slightly above 400W, is pulling around 725W from the wall, especially in GT2 while running Time Spy.


----------



## lawson67

Thanh Nguyen said:


> A 114C hotspot is overheating? I don't know the temp when it shut down.


Most if not all RX 6900 XTs will shut down the PC once the temp hits 118C. I would have thought that applies to either edge or hotspot temp, and of course it will be the hotspot which hits it first. If you're hovering around a 114C hotspot, it's about to happen at any time; the 118C spike will come in before you even see it in your OSD and boom, shutdown.


----------



## lestatdk

lawson67 said:


> Most if not all RX 6900 XTs will shut down the PC once the temp hits 118C. I would have thought that applies to either edge or hotspot temp, and of course it will be the hotspot which hits it first. If you're hovering around a 114C hotspot, it's about to happen at any time; the 118C spike will come in before you even see it in your OSD and boom, shutdown.


I can verify this. My card with the stock cooler + MPT tweaks would hit the shutdown temp. Which was the main reason I decided to put it on water


----------



## Thanh Nguyen

lawson67 said:


> Most if not all RX 6900 XTs will shut down the PC once the temp hits 118C. I would have thought that applies to either edge or hotspot temp, and of course it will be the hotspot which hits it first. If you're hovering around a 114C hotspot, it's about to happen at any time; the 118C spike will come in before you even see it in your OSD and boom, shutdown.


My Red Devil Ultimate is on a Bykski block and has a 15C delta on the die when running TimeSpy, but I don't know why the hotspot is so high. What do you do with your card to get lower temps?


----------



## lawson67

Thanh Nguyen said:


> My Red Devil Ultimate is on a Bykski block and has a 15C delta on the die when running TimeSpy, but I don't know why the hotspot is so high. What do you do with your card to get lower temps?


A delta of 15C is not too bad at all; anything over 20-25C is not good. My card is on a water block and has been coated with LM; my delta is about 12C to edge, and I've never seen the hotspot go over 77C at 430W, which is fine. Most XTXH cards don't throttle until they reach 95C, and XTX cards throttle at 115C and shut down at 118C. Your card being a Red Devil won't throttle until it hits 115C, even if it's the XTXH Ultimate edition.
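The thresholds quoted here (115C throttle, 118C shutdown for XTX) can be turned into a quick sanity check on any hotspot/edge reading. A toy sketch — the 20C delta cutoff follows the rule of thumb above, and actually reading the sensors is left out since that part is driver-specific:

```python
# Classify a hotspot reading against the XTX throttle/shutdown points
# quoted above (XTXH variants reportedly differ).

THROTTLE_C = 115
SHUTDOWN_C = 118

def hotspot_status(hotspot_c, edge_c):
    """Return (status, hotspot-to-edge delta) for one temperature sample."""
    delta = hotspot_c - edge_c  # >20-25C hints at a block mount/paste problem
    if hotspot_c >= SHUTDOWN_C:
        return "shutdown", delta
    if hotspot_c >= THROTTLE_C:
        return "throttling", delta
    if hotspot_c >= THROTTLE_C - 5:
        return "margin nearly gone", delta
    if delta > 20:
        return "check block mount/paste", delta
    return "ok", delta
```

With the numbers from this page, `hotspot_status(77, 65)` comes back "ok" with a 12C delta, while a 114C hotspot lands in the "margin nearly gone" band, matching the "boom, shutdown" warning above.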


----------



## gtz

Thanh Nguyen said:


> My Red Devil Ultimate is on a Bykski block and has a 15C delta on the die when running TimeSpy, but I don't know why the hotspot is so high. What do you do with your card to get lower temps?


I think a 15-degree delta is OK, but hitting 114C on water is high. Are you pushing the card to its limit, or is this at regular settings?


----------



## J7SC

Thanh Nguyen said:


> My Red Devil Ultimate is on a Bykski block and has a 15C delta on the die when running TimeSpy, but I don't know why the hotspot is so high. What do you do with your card to get lower temps?


The only time in the past I have experienced Hotspot way up there while general temps and deltas were OK is when a GPU block wasn't mounted properly (or the die is uneven)...something which can be amplified by thinner paste and pump-out. It doesn't take much for one corner or side of the die not to make perfect contact with the block while the other sides are already too snug. A remount and re-paste usually do the job. Also, as already mentioned in this thread before, use a thicker thermal paste, such as Gelid GC Extreme.

Below are temps / hotspot for my Bykski blocked 6900XT w/1.175V and just over 430W (incl. 15% PL)


----------



## bunnybutt

wow your max temps. those are insane!!
I hit 87c junction quite often it seems, it stresses me out. Makes me wonder how I should be treating the card for long term use. 

Would Noctua NT-H2 be a good choice to use? Wondering about taking apart the Toxic Extreme and redoing their work.


----------



## J7SC

bunnybutt said:


> wow your max temps. those are insane!!
> I hit 87c junction quite often it seems, it stresses me out. Makes me wonder how I should be treating the card for long term use.
> 
> Would Noctua NT-H2 be a good choice to use? Wondering about taking apart the Toxic Extreme and redoing their work.


Thx; big custom loop, thermal putty for the VRAM, Gelid on the die and a heatsink on the back all help


----------



## bunnybutt

That is AMAZING. I wonder if AMD will start to be “showcased” as a brand that “needs” custom loop, but performs excellently for many years of use? They seem to run scorching and that heat is the limiting factor of great performance. (if you don’t mind pulling 1.8 jiggawatts from the wall.)


----------



## bunnybutt

With big respect, I know I cant really compete in this numbers game with you guys, but for trying to build a stable gaming rig, thats where Im interested.

Im starting to test lowering my fan curve to get some decent quiet time while I game. I managed to get this to pass, it almost looks like theres no real difference between having the fans screaming, and using something more reasonable. Ill go play around in some games and see how things turn out. 

Each day Im happier I chose an AIO version, now to learn how to take advantage of its benefits and hope I can repair it down the line if it gets hit with "typical AIO woes"


----------



## J7SC

bunnybutt said:


> With big respect, I know I cant really compete in this numbers game with you guys, but for trying to build a stable gaming rig, thats where Im interested.
> 
> Im starting to test lowering my fan curve to get some decent quiet time while I game. I managed to get this to pass, it almost looks like theres no real difference between having the fans screaming, and using something more reasonable. Ill go play around in some games and see how things turn out.
> 
> Each day Im happier I chose an AIO version, now to learn how to take advantage of its benefits and hope I can repair it down the line if it gets hit with "typical AIO woes"
> 
> View attachment 2539220
> 
> 
> View attachment 2539221


Nice ! 

Re. 'typical AIO woes', I never had an AIO fail (I have three, all for CPUs though, in more mundane office-type systems, as I prefer custom loops for my hi-po builds). The oldest AIO is from late 2012, a 240mm Thermaltake. It finally needed a flush/refill in the fall of 2020, and after carefully taking the cold plate off for access, I did just that. It's humming away happily, looking forward to its 10th birthday


----------



## CS9K

lawson67 said:


> Surely your 750W Seasonic PSU must struggle with your card pulling 400W; my PC, with my card set only slightly above 400W, is pulling around 725W from the wall, especially in GT2 while running Time Spy.


With only a 5800x CPU, nah, it barely got close to 700W at all, and the system as it sits only just reaches above 600W in games (measured @ UPS).

Seasonic's Titanium PSUs are comically over-engineered, and I'm okay with that


----------



## Trifire1

Hi, I currently have a 6900XT reference card and am wondering just how much benefit there is to changing to something like the Sapphire Toxic version. Mine seems to be a pretty good overclocker as it is hitting around 23900 on Time Spy. Can't seem to get it to the 24k mark.

My concern with swapping is that I end up with something worse, or are these higher priced ones really that much better?

Cheers!


----------



## bunnybutt

@Trifire1 I am not as educated as the rest of these PCMR dudes, but as an owner of an actual Toxic, The reason why I went for it, is because I see tech a little differently. I may build a new system every 5 or even 10 years. By "overpaying" for expensive tech, I have a much better chance of that particular piece of hardware aging very nicely long term.

Having a Toxic simply "guarantees" I will have a card that should overclock/boost at max with little to no instability. That is literally why I paid more money; I wanted guaranteed reliability. With no disrespect to anyone, I'm not 12 years old anymore, looking to scream down the Information SuperHighway in my blastpipe-exhaust, primer-paint Civic with no interior and a V8 sticking out of the hood.

I rather cruise with ample power on tap, while sitting in my leather Hellcat.

Thats how I can personally describe why I chose a more expensive (but relatively unneeded) card.
To my experience, it seems as if the current gen cards are ALL THE SAME; they all seem to hit just about the same specs. I just didn't want to scream at the bleeding edge just to make sure I could join the pack.

(TBH, years ago I bought an Asus ROG STRIX GTX1070 OC.... and I just felt like I lost the silicon lottery. That thing never did much, and if I touched it, it crashed. I still have it and I put it inside of my server. I spent so many years disappointed that I swore I would not end up with that same mistake on the new AMD cards. I chose to buy "THE BEST.")

It actually sounds like you have a better card than even I do. I can't seem to get past 21,552... It's probably my fault, but it just proves that buying the "best" may not always pay off.

Also @J7SC Do you think the Toxic is able to be refilled?


----------



## Trifire1

Makes perfect sense and thank you for the information, there are other reasons as to why I am looking at it but just want to be sure it's not going to be a worse overclocker. I have run this card for a year with a relatively high OC and not had any issues. It's been great!

My reason for getting it was I had upgraded from a 2070 to a 3070 and felt the price for the amount of performance gained was a complete joke. So I went with the idea: get the best for not much more, and maybe it will be a substantial improvement. I was right and it is a huge uplift.

EDIT: Just seen your revised post. I don't think I have done anything special to be honest. When these cards first came out most were undervolting and overclocking however since MPT I have seen a lot of posts where people say run max mV. 

Breakdown of my settings 

MPT
Powerlimit W = 335
TDC Limit A = 335
SOC = 65

Wattman 
Min Frequency = 2600
Max Frequency = 2760
Voltage = 1015

Power slider at max
VRAM Fast Timing / 2150

Running higher voltage in my experience just makes scores worse and seems to heat the card up more.

Maybe this might help you!


----------



## J7SC

bunnybutt said:


> (...)
> Also @J7SC Do you think the Toxic is able to be refilled?


I don't know that particular AIO and whether it already has more user-friendly filler plugs (such as some BeQuiet CPU AIOs have), but anything with a screwed-on cold plate can be refilled.



Trifire1 said:


> Hi, I currently have a 6900XT reference card and am wondering just how much benefit there is to changing to something like the Sapphire Toxic version. Mine seems to be a pretty good overclocker as it is hitting around 23900 on Time Spy. Can't seem to get it to the 24k mark.
> 
> My concern with swapping is that I end up with something worse, or are these higher priced ones really that much better?
> 
> Cheers!


It's kind of hard to say. If you already have a good-clocking reference model, I would probably focus a bit more on (water?) cooling to afford and sustain 'big MPT PL' increases to get the 24K mark. That said, the same reasoning can be applied to an XTXH...at the end of the day, the individual chip quality will play a big role. If benching is your thing, an XTXH may be worth it to you, though.


----------



## Trifire1

Cool, thank you. I know it probably sounds a bit pointless trying to get another 50 points or so but I find it really fun.
If these XTXH cards are better then I will probably give it a go... at this point I am just not getting anywhere further no matter what I do with this card.

I basically got this card, did a lot of benchmarking and overclocking, left it for a year, and have started looking into it again. Now I seem to be addicted!


----------



## Godhand007

Trifire1 said:


> Running higher voltage in my experience just makes scores worse and seems to heat the card up more.
> 
> Maybe this might help you!


That's true. I get worse scores due to high voltage on my reference card, but without that voltage I can't sustain ~2730MHz clocks in games. BTW, if those clocks are actually stable, as in multiple-GT2-loop stable (not just one run), then you have a golden XTX chip.


----------



## Godhand007

Trifire1 said:


> Cool, thank you. I know it probably sounds a bit pointless trying to get another 50 points or so but I find it really fun.
> If these XTXH cards are better then I will probably give it a go... at this point I am just not getting anywhere further no matter what I do with this card.
> 
> I basically got this card did a lot of benchmarking and overclocking, left it for a year and have started looking into it again. Now seem to be addicted!


I have had almost the same experience since the voltage trick for MPT came out.


----------



## kairi_zeroblade

Trifire1 said:


> Cool, thank you. I know it probably sounds a bit pointless trying to get another 50 points or so but I find it really fun.
> If these XTXH cards are better then I will probably give it a go... at this point I am just not getting anywhere further no matter what I do with this card.
> 
> I basically got this card did a lot of benchmarking and overclocking, left it for a year and have started looking into it again. Now seem to be addicted!


It's just the drivers AMD releases that will let you down (sometimes they fix things, sometimes they break them).


----------



## bunnybutt

Trifire1 said:


> Makes perfect sense and thank you for the information, there are other reasons as to why I am looking at it but just want to be sure it's not going to be a worse overclocker. I have run this card for a year with a relatively high OC and not had any issues. It's been great!
> 
> My reason for getting it was I had upgraded from a 2070 to a 3070 and felt the price for the amount of performance gained was a complete joke. So I went with the idea get the best for not much more and maybe it will be a substantial improvement, I was right and it is a huge uplift.
> 
> EDIT: Just seen your revised post. I don't think I have done anything special to be honest. When these cards first came out most were undervolting and overclocking however since MPT I have seen a lot of posts where people say run max mV.
> 
> Breakdown of my settings
> 
> MPT
> Powerlimit W = 335
> TDC Limit A = 335
> SOC = 65
> 
> Wattman
> Min Frequency = 2600
> Max Frequency = 2760
> Voltage = 1015
> 
> Power slider at max
> VRAM Fast Timing / 2150
> 
> Running higher voltage in my experience just makes scores worse and seems to heat the card up more.
> 
> Maybe this might help you!




Thank you for this. I really want to ask a million questions, as I am SO new to AMD, but I don't wanna tire people out, so I'm mostly trying to figure it out on my own.

I have a vague idea of what this "MPT" you guys talk about is... I'm just afraid to try it out; I don't want to damage anything. (And I think I have gathered from my many crashes that "Wattman" is the Radeon program. I'm just an Nvidia idiot trying to learn the new language.)

As for your higher voltage comment, I can run some pretty crazy benches at 1.120V and such. I've tried it before, but haven't gone back, since as described, my games don't like it one bit.

I do have one question... I know this is not going to be a popular opinion, however: is anyone able to point me in the right direction for overclocking AMD on Linux? I sure am having a lot of fun in Windows, but I actually specifically bought this GPU so I could game in Linux; this is supposed to be my Linux build.
I think I have to use this MPT program but I am not sure.
Anyway, if anyone has any experience with OCing AMD GPUs in Linux, I would love to learn.
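EDIT: For anyone who lands here from a search later — from what I've pieced together (grain of salt, I'm still learning this myself), MPT is Windows-only; on Linux the amdgpu kernel driver exposes overclocking through sysfs instead, and GUI tools like CoreCtrl just wrap that same interface. A minimal sketch below, assuming the 6900 XT shows up as `card0` and that you booted with `amdgpu.ppfeaturemask=0xffffffff` to unlock the OD interface. The clock/voltage numbers are placeholders, not recommendations:

```shell
# Sketch only; the numbers are placeholders, not recommendations.
# Assumes the 6900 XT is card0 and the kernel was booted with
# amdgpu.ppfeaturemask=0xffffffff to unlock the overdrive sysfs files.
GPU=/sys/class/drm/card0/device

oc_write() {
    # DRY_RUN=1 (the default here) just prints what would be written;
    # set DRY_RUN=0 and run as root to actually apply the writes.
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "echo '$1' > $2"
    else
        echo "$1" > "$2"
    fi
}

apply_oc() {
    oc_write manual     "$GPU/power_dpm_force_performance_level"
    oc_write 's 1 2600' "$GPU/pp_od_clk_voltage"   # max GFX clock, MHz
    oc_write 'm 1 1075' "$GPU/pp_od_clk_voltage"   # max memory clock, MHz
    oc_write 'vo -25'   "$GPU/pp_od_clk_voltage"   # GFX voltage offset, mV (RDNA2)
    oc_write 'c'        "$GPU/pp_od_clk_voltage"   # commit the changes
}

apply_oc
```

Reading `cat /sys/class/drm/card0/device/pp_od_clk_voltage` first shows the current values and the valid ranges for your card; the power limit lives separately under the card's `hwmon` directory (`power1_cap`). If echoing into sysfs by hand feels sketchy, CoreCtrl gives you the same knobs in a GUI.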


----------



## Trifire1

Godhand007 said:


> That's true. I get worse scores due to high voltage on my reference card but without that voltage, I can't sustain ~2730Mhz clocks in games. BTW, If those clocks are actually stable, as in multiple GT2 loops stable (not just one run), then you have a golden XTX chip.


It will go down a little sometimes but yes, for the most part it is around that. Like I say, I ran this for the best part of a year with no issues in the games I play. I entered the rabbit hole again recently and it doesn't seem I can extract anything else. When the cards first came out I held the top spot in Time Spy with my setup for a month. It was really rewarding, as I had never really got into this stuff before. Nvidia was normally just shove +150 on the core clock and +800 on the mem, then forget about it.

I am not interested in LN2 or any of the mega extreme stuff like modifying the card. It looks like great fun but I couldn't live with myself destroying this kit (I would definitely screw it up). To be honest, even using MPT at first was terrifying.

As I have played around with it more, it seems that my card won't do anything more above 340W, and even then the uplift from something like 330W is unnoticeable.

It's tempting to grab one of these higher spec ones... Aesthetically some of them look nicer than the reference, and the sound of my PC with 3000rpm Noctua fans is wearing thin.


----------



## bunnybutt

I've had a secret little stepchild for a few years now. Picked up an old mining GTX 1080 from a friend in a trade, totally rebuilt it with new thermal pads and paste, and an Arctic Accelero cooler. As you said, it was a typical "150/800" kind of card; it just sat silent around 2100MHz all day long while gaming, nothing more. But damn, it was so ugly! I was just tired of that thing and I had no attachment to it, really. Sold it, and I've been daydreaming about an AMD GPU for years... just ask my friend...


----------



## Trifire1

Haha! I was actually looking at getting one of them recently for a second PC, but prices are so nuts it's better to just buy a 3000 series new. I have always preferred AMD; don't know why, can't explain it. It's funny because I have started helping friends overclock Intel systems (RAM & CPU) and those do seem a lot easier to get what you want out of. AMD seems pretty complicated in comparison. Either way I will stick with AMD; I always thought it was better value for money.

It's frustrating watching a lot of these YouTube reviewer types, as I had my heart set on a 3080 based on various well known tech channels a year ago. Then, when I managed to get a 6900 XT for the same price and compared them, I was happy I went for it. Really underrated cards.

I will post anything I find over the next few days, but if I end up having a drink I will probably reappear with a Toxic card.


----------



## J7SC

Trifire1 said:


> It will go down a little sometimes but yes for the most part it is around that, like I say I ran this for the best part of a year with no issues in the games I play. Entered the rabbit hole again recently and doesn't seem I can extract anything else. When the cards first came out I held top stop in time spy with my setup for a month. Was really rewarding as I had never really got into this stuff before. Nvidia was normally just shove a +150 core clock and +800 mem then forget about it.
> 
> I am not interested in LN2 or any of the mega extreme stuff like modifying the card. It looks like great fun but I couldn't live with myself destroying this kit ( I would definitely screw it up ) To be honest even using MPT at first was terrifying.
> 
> As I have played around with it more it seems that my card wont do anything more above 340w and even then the uplift from something like 330w is unnoticeable.
> 
> It's tempting to grab once of these higher spec ones... Aesthetically some of them look nicer than the reference and the sound of my PC with 3000rpm noctua fans is wearing thin.


As you mentioned earlier, it can indeed be a bit addictive - there was a time when I did a fair amount of (subzero) HWBot etc. and even had some vendor sponsorship re. hardware. Alas, it got to the point where it wasn't the pure fun anymore that I had felt before. It may sound like a cliche, but it is supposed to be fun, not something which ends up dominating your free time.

I certainly still like benchmarking, but compete mostly 'against myself' to see if I can improve on my prior best results. These days I prefer the process of a complex build project even more, though after completion I make sure to max out all components, w/o hardmodding or crazy voltages. My 6900XT is in the same case as a 3090 Strix (it's a dual-mobo work-play build for my home office) and the 3090 has multiple bios available, all the way to XOC 1000W. The latter system is more for gaming etc. while the 6900XT is the primary work system. Still, of course I throw MPT PL at the 6900XT, just for the fun of it, as both systems have extensive w-cooling.

The problem w/ getting a new card is that the process just repeats...a few weeks or even months of figuring out its secrets and getting higher scores, but then what. Presumably though, it starts all over again with RDNA3 and 4090s 

All that said, if you have the budget, the time and the inclination (and may be that drink you mentioned), go for the baddest, nicest XTXH you fancy, and enjoy...


----------



## bunnybutt

Offtopic: I decided to perform a fresh install of Radeon Software (I did not use DDU this time) after all the beatings I gave it in my early days of OCing.

I noticed upon first boot it lists my card's max clock as 2609MHz. The card is supposed to be 2375MHz "game clock", 2525MHz "boost", and then the 2730MHz "toxic boost".

So where does this 2609MHz come from? And did I see a post where someone said "this number represents your actual highest clock"?


----------



## Godhand007

J7SC said:


> The problem w/ getting a new card is that the process just repeats...a few weeks or even months of figuring out its secrets and getting higher scores, but then what. Presumably though, it starts all over again with *RDNA3 and 4090s*


Same here; I just hope I am not priced out of the top-end GPU market. I mean, I can still buy them, but probably won't be able to justify ~$3000. On a side note, I am upgrading from an i7 8700 non-K (so only 4.3 GHz max clock) to a 12700K (with DDR4; can't get DDR5). Do you think I will see some meaningful benefits? I am confident that multitasking would improve, but not sure about games.


----------



## J7SC

bunnybutt said:


> Offtopic: I decided to perform a fresh install of Radeon Software (I did not use DDU this time) after all of my beatings I was giving with my early days of OCing.
> 
> I noticed upon first boot it lists my cards max clock as 2609mhz. The card is supposed to be 2375mhz "game clock", 2525mhz "boost", and then the 2730mhz "toxic boost"
> 
> So where does this 2609mhz come from and did I see a post of someone say "this number represents your actual highest clock" ?


...2609 as the 'untouched, stock' value shown is your pre-OC boost number; the highest I've seen mentioned here is 2650, but 2609 is very good, nothing to worry about.



Godhand007 said:


> ~Same here, I just hope I am not priced out of the top-end GPU market. I mean I can still buy them but probably won't be able to justify ~3000$. On a side note, I am upgrading from i7 8700 non-k (so only 4.3 GHz max clock) to 12700K (with DDR4, can't get DDR5). Do you think I will see some meaningful benefits? I am confident that multitasking would improve but not sure about games.


I would think the IPC improvements alone in a 12700K will make a sizable difference in games. If you have 32GB of decent (MHz, timings) DDR4, that will help.

I just upgraded my systems and will wait a while before looking at anything requiring DDR5.


----------



## KGV

Does anyone know if any water cooling solution exists for the ASRock 6900XT Phantom Gaming "out of the box"?
I know about the Bykski Full Coverage GPU Water Block and Backplate for ASRock RX 6800/6900 XT (A-AR6900XT-X).


----------



## Trifire1

J7SC said:


> As you mentioned earlier, it can indeed be a bit addictive - there was a time when I did a fair amount of (subzero) HWbot etc and even had some vendor sponsorship re. hardware. Alas, it got to the point where it wasn't the pure fun anymore I had felt before. It may sound like a cliche, but it is supposed to be fun, not s.th. which ends up dominating your free time.
> 
> I certainly still like benchmarking, but compete mostly 'against myself'' to see if I can improve on my prior best results. These days, I prefer the process of a complex build project even more, though after completion make sure to max out of all components, w/o hardmodding or crazy voltages. My 6900XT is in the same case as a 3090 Strix (it's a dual mobo work-play build for my home office) and the 3090 has multiple bios available, all the way to XOC 1000W. The latter system is more for gaming etc while the 6900XT is the primary work system. Still, of course I throw MPT PL at the 6900XT, just for the fun of it, as both systems have extensive w-cooling.
> 
> The problem w/ getting a new card is that the process just repeats...a few weeks or even months of figuring out its secrets and getting higher scores, but then what. Presumably though, it starts all over again with RDNA3 and 4090s
> 
> All that said, if you have the budget, the time and the inclination (and may be that drink you mentioned), go for the baddest, nicest XTXH you fancy, and enjoy...


Very true. That setup sounds really cool! Would love to see some pics. I was wondering what on earth temps would be like in that, until you said you have a decent water cooling setup.

At first I wasn't comfortable with the prices of these cards, which is why I stuck with the reference version, as I got it for retail from Scan. Is it worth £700 more for an XTXH... probably not, but I suspect this whole GPU shortage will happen again when the next line comes out, and I won't lose much on it. (Hopefully)

I disabled the sleep states in MPT today and it seems I am now hitting that 24K... In TS I was getting a huge drop at the start of the second graphics test before; now that's gone, the scores have gone up. Interesting, because I am sure I read there was no need to do this on more recent driver versions.


----------



## cfranko

KGV said:


> Anyone know, is there any water cooling system for ASROCK 6900XT Phantom gaming, exists? "From the box"
> I know about this Bykski Full Coverage GPU Water Block and Backplate for Asrock RX 6800/6900 XT (A-AR6900XT-X)


I have an ASRock Phantom Gaming 6900 XT and am using the waterblock that you linked. It's great except for the included thermal pads; they are terrible.


----------



## KGV

cfranko said:


> I have a ASrock Phantom Gaming 6900 XT and using the waterblock that you linked. It's great except the included thermal pads, they are terrible.


Is it worth buying?
How much did temps drop after installation?


----------



## cfranko

KGV said:


> Do it worth of buying?
> How T dropped after installation?


With the air cooler I used to get a 76°C edge temperature and 100°C hotspot temperature. With the waterblock I get a 50°C edge temperature and 60°C hotspot temperature. It is worth buying.


----------



## KGV

cfranko said:


> With the air cooler I used to get 76c edge temperature and 100c hotspot temperature. With the waterblock I get 50c edge temperature and 60c hotspot temperature. It is worth buying.


What components did you use to build the system? Is it separate from the CPU, or not?


----------



## cfranko

KGV said:


> What kind of componets do u use to build the system? Do it separate from cpu, or not?


I have a 5900x, the cpu is in the loop as well.


----------



## KGV

cfranko said:


> I have a 5900x, the cpu is in the loop as well.


I see.
Better to have the GPU and CPU separated.
I have an Arctic Liquid Freezer II 360 on my 12700KF.
So the 6900XT will have its own water cooling loop.
Anyway, thanks for the info.


----------



## cfranko

KGV said:


> I see.
> Better to have gpu and cpu separated.
> I have Arctic Liquid Freezer II 360 on my 12700kf.
> So 6900xt will have his own water cooling system.
> Anyway, thanks for info.


Yeah, having the CPU separate is a better idea; I had better temps on my CPU before the custom loop.


----------



## Blameless

Got my second ASRock 6900 XT OCF in a few days ago. It's slightly better than my first...less coil whine and about 25-50MHz more on the core, nearly identical potential in other clocks. Unconditionally stable clocks are still a bit disappointing, but I'm not buying another one, so it will have to suffice.


----------



## J7SC

Blameless said:


> Got my second ASRock 6900 XT OCF in a few days ago. It's slightly better than my first...less coil whine and about 25-50MHz more on the core, nearly identical potential in other clocks. Unconditionally stable clocks are still a bit disappointing, but I'm not buying another one, so it will have to suffice.


...CrossFire? I know it's not popular anymore and will take some driver trickery, but heck, why not?



Trifire1 said:


> Very true, that setup sounds really cool! *would love to see some pics*. I was thinking what on earth would temps be like in that until you said you have a decent water cooling setup(...)


...still working on pics of the build w/ a new camera for Christmas... speaking of which, everybody have a great holiday season, and stay safe!


----------



## Blameless

J7SC said:


> ...CrossFire ? I know it's not popular anymore and will take some driver trickery, but heck, why not ?


Even if I could get it to work, none of my decent systems have more than a single usable PCI-E 16x slot, there are very few games I play that support explicit multi-GPU in DX12, and AFR in DX11 was always a bit lackluster.

Anyway, I already sold off the first card...my non-hardware-enthusiast friends, who'll never know the difference, get my binning fails at a discount.


----------



## J7SC

Blameless said:


> Even if I could get it to work, none of my decent systems have more than a single usable PCI-E 16x slot and there are very few games I play that support explicit multi-GPU for DX12, and AFR in DX11 was always a bit lack luster.
> 
> Anyway, I already sold off the first card...my non-hardware-enthusiast friends, who'll never know the difference, get my binning fails at a discount.



...reminds me of a chap who picked up a used Dell from an office auction... turns out that it had one of the best-clocking 2080 Tis ever in it (other than the usual bios lock-downs)... for most of its previous life, that 2080 Ti ran word processing and accounting software, along with web browsing & email... there ought to be a law against that.


----------



## bunnybutt

So I checked my CPU vcore but it's still on Auto (I'm honestly not even sure how to touch it).
However, I went ahead and raised SOC from 1.0125 to 1.0250 or 350.
Also, as I mentioned, I have PBO disabled, the limits have been raised, and the per-core offsets range from negative 15-20, with only one core at 5.

My whole system sometimes feels like it's being held together with duct tape and bubble gum. Sometimes I can't even believe what I am accomplishing with all of my "wild west cowboy" tactics...
I've undervolted, or forced stock, all the voltages I can justify. Lots of random voltages have been locked in place, as I found them quite a bit above what factory was supposed to be (MSI mobo).

RAM is Samsung B-die 3200MHz 14-14-14, overclocked to 3600MHz 14-14-14 @ 1.44V (been in search of a stable 3800MHz).


I think this whole thing is a testament to AMD and the long, hard road they have travelled.
_booty slaps upside the tincan_


----------



## tolis626

bunnybutt said:


> So I checked my CPU vcore but its still on Auto (im honestly not even sure how to touch it.)
> However, I went ahead and raised SOC from 1.0125, to 1.0250 or 350.
> Also, as I mentioned, I have PBO disabled, the limits have been raised, and per core ranges from negative 15-20, with only one core at 5
> 
> My whole system sometimes feels like its being held together with ducttape and bubble gum. Sometimes I cant even believe what I am accomplishing with all of my "wild west cowboy" tactics..
> Ive undervolted, or forced stock, all the voltages I can justify. Lots of random voltages have been locked in place as I found them quite above what factory was supposed to be (MSI mobo)
> 
> RAM is samsung Bdie 3200mhz 14-14-14, overclocked to 3600mhz 14-14-14 @1.44v (been in search of a stable 3800mhz)
> 
> 
> I think this whole thing is a testament to AMD and the long hard road they have accomplished
> _booty slaps upside the tincan_


Well, slap that booty all you want, it won't change the fact that that CPU vSOC is really, really low. 

Seriously, that's crazy low. I'm surprised you're not getting issues. I mean, my CPU is very, very lenient when it comes to SOC, but I doubt even mine could do 3600MHz mem/1800MHz fclk at that voltage without issues. At 3800MHz memory I'm using 1.1V (1.085-ish V after droop) and, although it's been extensively tested and is rock solid stable, it's among the lowest I've seen for those speeds. People here have been pushing over 1.15V SOC to get stable or get rid of issues. So any weirdness you might have going on, USB issues, hiccups, random crashes, whatever have you, it might be related to vSOC. Bump that up to like 1.075V or, if you want peace of mind, maybe 1.1V. It's safe and it should be mostly trouble free.

Although, to be fair, my 5900x might be good with SOC voltages, but it doesn't hold a candle to my previous CPU, a 3800X. I had OC'ed the same RAM I have now to 3800MHz from its stock 3600MHz (with tighter timings too) and I would run SOC at 1.125V because that's where I tested. Long story short, at some point I updated my BIOS and it couldn't load my previous profile, so I had to input everything again. So I did, but I made a mistake and instead of 1.125V I set SOC to 1.05V. It took a month before I realized. No crashes, no USB problems, no nothing. I even did some benchmarks and memory tests and everything came out clean, no errors. So I bumped it to 1.075V for added peace of mind and it worked perfectly like that up until I sold it. Crazy little CPU, I'm kinda sad I sold it.


----------



## bunnybutt

You just HAD to say something, didn’t you, @tolis626.

I mean, for a year straight, since the launch of the 5900x, I have NEVER had any issues. 

I read your comment and said to myself “hmm ive never seen issues? but maybe I’ll listen to this guy…”

…Then I went to play some “Deep Rock Galactic”.
I spent about 30 mins just hanging out in the game lobby and finally went to host a game. Everyone joined, we started the match, and... desktop crash.

I ramped up my GPU fans and went back in, started to host, and got another crash. Then a HARD third crash which took out the system, and I decided to go into the BIOS.

Decided “ok, I have nothing else to lose, let’s try SOC 1.0750”

Of course. OF COURSE... the game is playing totally stable now. 3 online games of about an hour's worth each, no crashes.

This totally changes things, I think it’s time for a total system OverclockOverhaul. 



So thank you very much for giving me hard number examples of where I should stay. 


One last thing: your icon is Linux. Does this mean you are familiar with OCing AMD GPUs in Linux? I sure would like some pointers if you wanted to take this to private message.


----------



## bloot

Well, I've been on 1V SOC since I bought this 5900X eight months or so ago and have had absolutely no crash problems nor WHEA errors at all: 3800MHz 16-17-17-17-37 RAM at 1.35V, and memtest passed at 1000% with no errors either. I also have a Gentoo Linux drive and it compiles everything without issues.


----------



## tolis626

bunnybutt said:


> You just HAD to say something, didn’t you, @tolis626.
> 
> I mean, for a year straight, since the launch of the 5900x, I have NEVER had any issues.
> 
> I read your comment and said to myself “hmm ive never seen issues? but maybe I’ll listen to this guy…”
> 
> …Then I went to play some “Deep Rock Galactic”.
> I spent about 30 mins just hanging out in the game lobby and finally went to host a game. everyone joined, we started the match-and desktop crash.
> 
> I ramped your my gpu fans and went back in, started to host and another crash. Got a HARD third crash which took out the system and decided to go into BIOS.
> 
> Decided “ok, I have nothing else to lose, let’s try SOC 1.0750”
> 
> Of course. OF COURSE….. the game is playing totally stable now. 3 online games of about an hours worth- no crashes.
> 
> This totally changes things, I think it’s time for a total system OverclockOverhaul.
> 
> 
> 
> So thank you very much for giving me hard number examples of where I should stay.
> 
> 
> One last thing- your icon is Linux, does this mean you are familiar with linux/amd OCing in linux? I sure would like some pointers if you wanted to take this to private message.


Hahahaha, yeah man, I've been known to have that effect on the world. I'm walking and talking bad luck sometimes.

Still, at least the issue occurred during something non-critical, so I guess it's a good thing it happened the way it did. Happy to help.

As for my avatar, yes, it's a Tux. I used to work a lot with Linux when I was younger. I do love tinkering with stuff, so I had an old desktop on which I had multiple OS's for experimenting. I've posted about it before, but at some point I had Windows XP, Windows Vista, Ubuntu/Kubuntu, Fedora, Gentoo and a Hackintosh installation. And somehow everything worked. Well, it did, until I overclocked the thing, fried its PSU and that took everything else with it.

Sadly, I've been out of the game for years. Med school didn't leave much time for Linux experiments and working as a doctor after that even less so, so I focused on hardware (which I'm more interested in anyway) and gaming when I had spare time. That said, I've had an itch to dual boot Linux on my current rig for quite a while, I just never got around to doing it because I realized I've forgotten almost everything and need to read up on a lot. For the time being, I've set up a couple of VMs to try out different distros. So far, I'm liking Kubuntu a lot. I hate how Ubuntu's UI has become, I am not big into Arch, but Kubuntu is kind of the best of both worlds. I just hope it doesn't have the glaring issues it once did.


bloot said:


> Well I'm on 1V on SOC since I bought this 5900X eight months ago or so and have had absolutely no crash problems nor whea errors at all, 3800MHz 16-17-17-17-37 RAM at 1.35V and memtest passed at 1000% with no errors either. I also have a gentoo linux drive and it compiles eveything without issues.


Well, you are either extremely lucky with your silicon, or you just haven't experienced problems YET, or it's just that because of your rather loose timings and low VDIMM there isn't much stress on the IMC. If it's the first, I would definitely try going higher with your RAM.


----------



## 99belle99

tolis626 said:


> Hahahaha, yeah man, I've been known to have that effect on the world. I'm walking and talking bad luck sometimes.
> 
> Still, at least the issue occured during something non critical, so I guess it's a good thing it happened the way it did. Happy to help.
> 
> As for my avatar, yes, it's a Tux. I used to work a lot with Linux when I was younger. I do love tinkering with stuff, so I had an old desktop on which I had multiple OS's for experimenting. I've posted about it again, but at some point I had Windows XP, Windows Vista, Ubuntu/Kubuntu, Fedora, Gentoo and a Hackintosh installation. And somehow everything worked. Well, it did, until I overclocked the thing, fried its PSU and that took everything else with it.
> 
> Sadly, I've been out of the game for years. Med school didn't leave much time for Linux experiments and working as a doctor after that even less so, so I focused on hardware (which I'm more interested in anyway) and gaming when I had spare time. That said, I've had an itch to dual boot Linux on my current rig for quite a while, I just never got around to doing it because I realized I've forgotten almost everything and need to read up on a lot. For the time being, I've set up a couple of VMs to try out different distros. So far, I'm liking Kubuntu a lot. I hate how Ubuntu's UI has become, I am not big into Arch, but Kubuntu is kind of the best of both worlds. I just hope it doesn't have the glaring issues it once did.
> 
> Well, you are either extremely lucky with your silicon, or you just haven't experienced problems YET, or it's just that because of your rather loose timing and low VDIMM there isn't much stress on the IMC. If it's the first, I would definitely try going higher with your RAM.


Sorry to intrude into your personal life, but you said you're a doctor, and one time you said wages are really low in your country. Would you not move to another country, as doctors are well paid in most of the world? I would if I were you, and you have good English too, so it's not a language thing that would hold you back.


----------



## bunnybutt

I understand what you mean. Ive had to give up some things in my life as well.

So, just to give you an update: I'm not quite sure where I stand. It may NOT have been my voltage, but I also don't know what to be chasing:

I ended up playing this game non-stop all night. I did some actual stress testing. Even after making the SOC adjustment, I started having a serious problem where the game kept HARD crashing at the same "Game Event". (This is one of those games where the map is always randomly generated, but there are two main bosses you have to kill. Upon killing the last boss, wherever they are, the whole system would hard crash.)

This happened 3 times in a row at the same event, and it took A LOT of gameplay time to get to those three points.

At that point, I pretty much moved ALL my adjusted voltages, except for DRAM, back to Auto, and was STILL crashing. Finally I did some googling and read up about this "C-State". I disabled it and went back to the game... and passed that one "Game Event" without a crash, and the game continued to play without any more crashes.

So yes, now all of my voltages are back to Auto, but I'm not sure if that was the fix. Maybe I need to leave C-State disabled, but that gets into a whole lot of issues where I don't know the repercussions of doing this.

(these are not 6900xt issues, but I never know where to go for my help. the rest of this forum sometimes is a minefield.)


----------



## tolis626

99belle99 said:


> Sorry to intrude into your personal life but you said you're a doctor and then one time you said wages are really low in your country. Would you not move to another country as doctors are well paid in the majority of the world. I would if I was you and you have good English too so it's not a language thing that would hold you back.


Nah, no problem. Wages are indeed low here. And with how inflation is going, it's probably going to get ugly. My current plan is to move to Germany for work, but I've hit some hurdles on the way and it's been delayed. Also, my German is not quite perfect, so I'm also considering searching for a job in the UK, but that comes with its own set of problems. I'm in a tough spot currently, but I'll make it.

Thanks for your concern man, appreciate it. 


bunnybutt said:


> I understand what you mean. Ive had to give up some things in my life as well.
> 
> So, just to give you an update- Im not quite sure where I stand It may have NOT been my voltage, but I also dont know what to be chasing
> 
> :
> 
> I ended up playing this game non stop all night. I did some actual stress testing. Even after making the SOC adjustment, I started having a serious problem where the game kept HARD HARD crashing at the same "Game Event" (This is one of those games where the map is always generated random, but theres two main bosses you have to kill. Upon killing the last boss, wherever they are, the whole system would hard crash.)
> 
> This happened 3 times in a row at the same Event time and it took A LOT of game play time to get to these three points.
> 
> At that point, i pretty much moved ALL my adjusted voltages, except for DRAM, back to Auto, and was STILL crashing. Finally I did some googling and read up about this "C-State". I disabled it, went back to the game.... And passed that one "Game Event" without a crash, and the game continued to play without any more crashes.
> 
> So yes, now all of my voltages are back to Auto, but Im not sure if that was the fix. Maybe I need to leave C-State disabled, but then that gets into a whole lot of issue where I dont know the repercussions of doing this.
> 
> (these are not 6900xt issues, but I never know where to go for my help. the rest of this forum sometimes is a minefield.)


Hmmm... That's weird. I can't see why C-states would cause a hard crash. Slowdowns, sure, but a downright crash? Seems strange to me. Crashes are usually due to insufficient voltage or clocks set too high. I would wager that some part of your system is voltage starved. It wouldn't hurt to try increased vSOC, VDDP, VDDG, etc. Also, you might want to take a look at your PBO settings.


----------



## bunnybutt

I actually found some info that says I should force/lower the VDDG/VDDP voltages.
I used to have those voltages set lower (0.800-0.950 V), but I put them back to Auto long ago and stopped touching them.

This is the can of worms I was afraid of getting back into. I'm probably going to load my old settings and just change the C-state setting, and see what happens. It will take a whole day of gaming again.

TBH, I wonder if the game I was playing is CPU dependent and was going to idle after so much gameplay, and the act of "acknowledging the completed task" and trying to wake it up is what crashes it.
If that's the case, it's not doing any sort of GPU stress testing this whole time.


----------



## J7SC

Enjoying a White Christmas ...and entirely too much food and sweets

On SOC-v, all three AMD work-play systems in my home office run fine with lowish voltages for quad sticks, single-rank Sammy B-die. Since I also use them for work, I prefer absolute stability, no WHEAs etc.

This helps overall temps and perhaps also longevity, though a bit more heat might not be such a bad thing right about now (dipping to -10 C over the next few days).

SOC 1.0313 2950X - 32GB (4x8) Quad Channel / 3466 - 2080 Tis
SOC 1.0375 3950X - 32GB (4x8) Dual Channel / 3800 - 6900XT
SOC 1.0438 5950X - 32GB (4x8) Dual Channel / 3800 - 3090


----------



## bunnybutt

I'm going to take a step back and try not to make knee-jerk reactions. I reloaded my original BIOS settings and voltages, and instead I changed:

"Global C-State Control" set to "Disabled"
"Power Supply Idle Control" set to "Typical Current Use"

It's going to take time to stress test the same scenario, but I want to try these before I raise my voltages back up. I very much like undervolting.

my basic BIOS settings:


----------



## J7SC

bunnybutt said:


> I’m going to take a step back and try not to make knee jerk reactions. I reloaded my original BIOS settings and voltages, and instead I changed
> 
> “Global C-State Control”
> “Power Supply Idle Control”
> to
> “Disabled”
> “Typical Current Use”
> 
> It’s going to take time to stress test the same scenario, but i want to try these before I raise my voltages back up. I very much like undervolting
> 
> View attachment 2539470
> 
> 
> my basic BIOS settings:
> 
> View attachment 2539471
> 
> View attachment 2539472
> 
> View attachment 2539473


You got a nice setup there, what with the 6900XT Toxic Extreme, 64 GB of TridentZ (I'm looking at your sig rig), etc.

FYI, the way I typically go about it is to first set up the CPU w/ PBO and whatever other CPU options and do some quick stability testing on that, but with all other options at BIOS default (including other voltages, C-states etc). The trap one can fall into is trying to 'solve one equation with 20 unknowns', so step by step w/ intermittent testing ultimately helps reduce time spent on this. Once you have done the CPU setup and you're happy with it after running Cinebench R23 etc, save that profile in your BIOS as 'base-1' or something.

Then I would clock up, then tighten the RAM, but without touching related voltages yet - I'm not sure if you already used >> this tool , but it is very helpful, including for dealing with 64GB of RAM, which is a bit trickier than 32GB, even four sticks of it. Preference would be to go for '1T' Command Rate, but with 64GB you might have to settle for 2T. Btw, in the screenie you showed above, with the other primary RAM timings you set, tRAS should probably be '30' or so.
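(Side note: the '30' matches the common rule of thumb that tRAS should be at least tCL + tRCD plus a couple of cycles for read-to-precharge. A quick sketch of that guideline - it is only a guideline, not a per-IC spec minimum:)

```python
# Rule-of-thumb only (assumption: the common tRAS >= tCL + tRCD + tRTP
# guideline); the true minimum depends on the ICs and the memory controller.
def min_tras(tcl: int, trcd: int, trtp: int = 2) -> int:
    """Lowest tRAS that doesn't cut a read short before precharge."""
    return tcl + trcd + trtp

# With 14-14-14 primaries this lands on 30, matching the suggestion above.
print(min_tras(14, 14))
```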

After setting the base RAM this way, more testing, testing, testing. When you have a basic RAM setting that works, save that profile as 'base-2'. Now you can go back and play more with RAM voltage and timings (more tests); then, after you have something you like saved as a 'base-3' profile, you can start lowering SOC-v until you can't get any lower w/o crashing in your fav apps and/or failing memtests.

This is certainly not a detailed step-by-step - some RAM parameters are interrelated, so what worked / didn't work before might have to be revisited - nor do I know how much experience you have with this; RAM tuning is a rabbit hole you can lose yourself in rather quickly. It's just about creating 'steps' for yourself via the base-1/2/3 profiles to work from for further optimization, or later re-visits.


----------



## Skinnered

J7SC said:


> ...CrossFire ? I know it's not popular anymore and will take some driver trickery,


You mean there is some hack or trick to enable it in games other than those with explicit mGPU support?!


----------



## crastakippers

tolis626 said:


> I'm also considering searching for a job in the UK, but that comes with its own set of problems. I'm in a tough spot currently, but I'll make it.
> .


I recommend the North East. The people are great and the coastline is particularly beautiful.
And the North Sea is the same temperature all year round: freezing 😀


----------



## crastakippers

J7SC said:


> Enjoying a White Christmas ...and entirely too much food and sweets
> View attachment 2539462


Is that your back yard?
I was just happily thinking, we ain't had much snow yet and then I see this. 😀


----------



## J7SC

Skinnered said:


> You mean there is some hack or trick to enable it on other then expliciet mgpu supporting games?!


...there are some registry changes you can do to enable and try out m-GPU for a game / benchmark, but it has been years since I used them re. enabling Cross/QuadFire (might want to check YouTube). I only have one AMD 6900XT so I haven't looked at it recently.

More generally, there are lists of games such as > this one which tell you what works re. AMD m-GPU...often, other games with the same underlying game engine will also work. And according to 3DM HoF, AMD CrossFire will work with 3DM Timespy, TimeSpy Ex, Nightraid etc.



crastakippers said:


> Is that your back yard?
> I was just happily thinking, we ain't had much snow yet and then I see this. 😀


...no, but it is where I visit most mornings and what I'm looking at from my home office...that said, we have a 'White Christmas' all over here on the West Coast, currently -5C and dropping, and more snow to come...


----------



## Skinnered

J7SC said:


> ...there are some registry changes you can do to enable and try out m-GPU for a game / benchmark, but it has been years since I used them re. enabling Cross/QuadFire (might want to check YouTube). I only have one AMD 6900XT so I haven't looked at it recently.
> 
> More generally, there are lists of games such as > this one which tell you what works re. AMD m-GPU...often, other games with the same underlying game engine will also work. And according to 3DM HoF, AMD CrossFire will work with 3DM Timespy, TimeSpy Ex, Nightraid etc.
> 
> 
> 
> ...no, but it is where I visit most mornings and what I'm looking at from my home office...that said, we have a 'White Christmas' all over here on the West Coast, currently -5C and dropping, and more snow to come...


Wow, can you maybe point me to a website or something else so I can learn more about it?
Could this work for 6900/6800 series cards?
I will google further, but haven't found anything yet.
I have a Sapphire Toxic Extreme; a second GPU should have the same BIOS for problem-free performance in DX9-11 games, I mean?

Nice view. It's La Niña time, so that often results in a strong Aleutian Islands high, with cold air streaming south from the Arctic to the Northwest area.
Here in the Netherlands, after two cold days, we're going into an extremely mild period, but winter comes closer again later on (MJO8).


----------



## Skinnered

These ones?

amdkmdag.sys
REG_SZ EnableCrossfireForNonProfileApps_DEF (set to 1 to activate)
REG_SZ EnableCrossfireForNonProfileApps (set to 1 to activate)
REG_DWORD EnableCrossFireAutoLink (set to 1 to activate)
REG_DWORD EnableULPS (set to 0 to disable)

and these, which I'm not sure of the registry type for:
DisableCrossFire
Proxy_ExpandCrossFireSupport
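If anyone does experiment with these, one safer route than hand-editing is to generate a .reg file you can eyeball first (and export the existing key as a backup before importing). A hypothetical sketch; the key path here is an assumption - the instance number ('0000', '0001', ...) under the display-adapter class varies per system, so check yours in regedit first:

```python
# Sketch: emit a .reg file for the CrossFire toggles listed above, so it can
# be reviewed and the key backed up before importing. ASSUMED_KEY is a guess;
# the display-adapter instance ("0000") differs between systems.
ASSUMED_KEY = (r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class"
               r"\{4d36e968-e325-11ce-bfc1-08002be10318}\0000")

STRING_VALUES = {  # REG_SZ entries
    "EnableCrossfireForNonProfileApps_DEF": "1",
    "EnableCrossfireForNonProfileApps": "1",
}
DWORD_VALUES = {  # REG_DWORD entries
    "EnableCrossFireAutoLink": 1,
    "EnableULPS": 0,
}

def make_reg(key: str) -> str:
    """Build the text of a Registry Editor 5.00 import file."""
    lines = ["Windows Registry Editor Version 5.00", "", f"[{key}]"]
    lines += [f'"{name}"="{val}"' for name, val in STRING_VALUES.items()]
    lines += [f'"{name}"=dword:{val:08x}' for name, val in DWORD_VALUES.items()]
    return "\n".join(lines) + "\n"

print(make_reg(ASSUMED_KEY))
```

Save the output as `crossfire.reg`, read it over, then import via regedit; whether the toggles still do anything on RDNA2 is exactly the open question above.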


----------



## bunnybutt

J7SC said:


> You got a nice setup there, what with the 6900XT Toxic Extreme, 64 GB of TidentZ (I'm looking at your sig rig) etc.
> 
> Fyi, the way I typically go about it is to first set the CPU w/ PBO and whatever other CPU options and do some quick stability testing on that, but with all other options on bios default (including other voltage, C-states etc). The trap one can fall into is to try to 'solve one equation with 20 unknowns', so step by step w/ intermittent testing ultimately helps reduce time spent on this. Once you have done the CPU setup and you're happy with it after running Cinebench R23 etc, then save that profile in your bios as 'base-1' or something.
> 
> Then I would clock-up, then tighten the RAM, but without touching related voltages yet - I'm not sure if you already used >> this tool , but it is very helpful, including in dealing with 64GB of RAM which is a bit more tricky than 32GB, even four sticks of it. Preference would be to go for '1T' Command Rate, but with 64GB, you might have to settle for 2T. Btw, in the screenie you showed above, with the other primary RAM timings you set, tRAS should probably be '30' or so.
> 
> After setting the base RAM this way, more testing testing testing. When you have a basic RAM setting that works, save that profile as 'base-2'. Now you can go back and play more with RAM voltage and timing (more tests), then, after you have s.th. that you like and saved as 'base-3' profile, you can start lowering SOC-v until you can't get any lower w/o crashing in your fav apps and/or failing memtests.
> 
> This is certainly not a detailed step-by-step, and some RAM parameters are related so what worked / didn't work before might have to be revisited, nor do I know how much experience you have with this - 'this'= RAM tuning being a rabbit hole you can lose yourself in rather quickly...it's just about creating 'steps' for yourself via base-1,2,3 profiles to work from for further optimization, or later re-visits.



Thank you VERY much for the compliments. I do appreciate it. 
I was really striving to try and build a full system that can last quite a while, as well.

I’m going to sit down with your info and check out my settings. 
I honestly do NOT know much about RAM OCing. I know that's kind of a common response. I went ahead and changed my tRAS as you said, but I don't know much about it. I thought I found some info about "doing math" and that's the answer I came to.

Also - I think I discovered my hard crashing/black screen + restart problem.
I spent the day playing that same game, and kept getting those hard crashes. AGAIN, I put my SOC to 1.0750, but still the crashes.
Until.
I remembered I changed the bottom fans to "PCIE" over "CPU".
I think I was overheating the RAM, since I no longer had rising fan speeds based on CPU work.

Once I put the fans back to CPU, I've been able to play non-stop without the crashes.

So is my RAM OC unstable?
Probably.
Will I change it?
Probably Not. Just mitigate the problem with cooling like I was doing before. 

Thanks for letting me know these were not GPU issues.


----------



## J7SC

Skinnered said:


> These ones?
> 
> amdkmdag.sys
> REG_SZ EnableCrossfireForNonProfileApps_DEF (set 1 to activate)
> REG_SZ EnableCrossfireForNonProfileApps (set 1 to activate)
> REG_DWORD EnableCrossFireAutoLink (set 1 to activate)
> REG_DWORD EnableULPS (set 0 to disable)
> 
> and these which i not sure what registry type:
> DisableCrossFire
> Proxy_ExpandCrossFireSupport


Sorry that I can't add much...the last time I used Cross/QuadFire was when HD7970s were new(ish), and I vaguely remember a piece of Radeon software (by an independent source) that acted not unlike NVidia Inspector does today for RTX cards...with that you could apply one game profile onto another, force CrossFire and mod the registry re. AMD etc...but I checked, and I don't even know what it was called or have a copy anymore. If I find it later, I will post it, though it may not work with Radeon 6800/6900XTs anyhow as this is (many) years ago.

...whatever you do re. the registry, make a back-up copy first before attempting to mod (you probably already know that). Also try the game link I posted above - as mentioned, sometimes different games use the same or slightly modified game engine and at times that can enable unofficial CrossFire.


----------



## bunnybutt

I've also come to realize I've been testing an experimental DX12 game… Let's talk about 6900 XT stuff! I wanna see what most excellent things someone's getting away with. Right now my daily settings seem to be:
2700/2800 - 1175 mV - 2164 MHz mem - +15% PL
I've seen it stable at 1155 mV in Metro, so soon I want to test that again. I got carried away with issues in a DX12 game…


----------



## CS9K

bunnybutt said:


> I’ve also come to realize I’ve been testing an experimental dx12 game…. Let’s talk about 6900xt stuff! I wanna see what most excellent things someone’s getting away with. Right now my daily settings seem to be
> 2700/2800-1175v-2164mhzMem-15%PL
> I’ve seen stable at 1155v in metro, so soon I want to test that again. I got carried away with issues in a dx12 game…


2700MHz; 2150MHz + Fast Timings; "1145mV"; 400W Power Limit

Simple, effective, I'll take it for a reference model RX 6900 XT.


----------



## Thanh Nguyen

Does anyone have a Formula OC with a Bykski block? What is your delta? Don't know why my card has such a high delta.


----------



## EastCoast

Thanh Nguyen said:


> Anyone has a formula oc with bykski block? What is your delta? Dont know why my card has so high delta.


can you post the delta?


----------



## J7SC

CS9K said:


> 2700MHZ; 2150Mhz + Fast Timings; "1145mV"; 400W Power Limit
> 
> Simple, effective, I'll take it for a reference model RX 6900 XT.


Per earlier discussions also, does the 400W power limit include a.) the 15% PL slider and b.) the extra 50W or so not shown for 6900XTs compared to RTX3K etc ?


----------



## CS9K

J7SC said:


> Per earlier discussions also, does the 400W power limit include a.) the 15% PL slider and b.) the extra 50W or so not shown for 6900XTs compared to RTX3K etc ?


Whoops, I almost included that and didn't. My bad.

I get to 400W by setting 348W in MPT, then setting the slider to +15%
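For anyone following along, the MPT base limit and the Wattman slider combine multiplicatively, which is how 348 W lands at roughly 400 W (slider behaviour assumed multiplicative, as described above):

```python
# The +15% Wattman slider scales the board power limit set in MorePowerTool.
def effective_limit(mpt_watts: float, slider_pct: float) -> float:
    return mpt_watts * (1 + slider_pct / 100)

print(round(effective_limit(348, 15)))  # ~400 W
```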


----------



## Skinnered

J7SC said:


> Sorry that I can't add much...the last time I used Cross/QuadFire was when HD7970s were new(ish), and I vaguely remember a piece of Radeon software (by an independent source) that acted not unlike NVidia Inspector does today for RTX cards...with that you could apply one game profile onto another, force CrossFire and mod the registry re. AMD etc...but I checked, and I don't even know what it was called or have a copy anymore. If I find it later, I will post it, though it may not work with Radeon 6800/6900XTs anyhow as this is (many) years ago.
> 
> ...whatever you do re. the registry, make a back-up copy first before attempting to mod (you probably already know that). Also try the game link I posted above - as mentioned, sometimes different games use the same or slightly modified game engine and at times that can enable unofficial CrossFire.


Oh, it's so long ago; it probably won't work for the RX 6000 series.
Does Shadow of the Tomb Raider support explicit multi-GPU?


----------



## bunnybutt

CS9K said:


> Woops, I almost included that and didn't. My bad.
> 
> I get to 400W by setting 348W in MPT, then I set the slider to +15%


I don't feel ready to give that program a try, but I will happily read how you guys are using it. I guess you're kind of forcing the card to use the full limit? What does that offer, though? Sustained boost clocks? Does this mean I could use it to push 430W?


----------



## J7SC

CS9K said:


> Woops, I almost included that and didn't. My bad.
> 
> I get to 400W by setting 348W in MPT, then I set the slider to +15%


Thanks - that's where I'm at (well, 360W for 3x 8-pin PCIe, but 12W = close enough). What amperage are you setting below the wattage in MPT / Power?


----------



## bunnybutt

Testing a completely different game that is heavily GPU dependent.
Starting to settle down into real-world use, and I'm realizing that the Sapphire fans REALLY ARE kinda bad... really loud. At this point I'm tuning based off of fan noise. Here are some in-game specs of real-world use. Wish I could get it cooler, but I guess it's considered alright? The temps actually do not change, even if I crank the fans up. This is such a weird video card.

There's so much performance, I don't know what else I could ask for, honestly.
Other than an actual 1440p/144Hz monitor to take advantage of this power.

I'm really curious to see if I can push more memory, but the way everyone talks about the software lock, I don't even want to try.


----------



## Skinnered

I also find the fans very loud. I would like to change them, but then I need to find a fitting connector or attach the fans to the motherboard.


----------



## bunnybutt

I'm constantly getting black screen + restart, in any game. I'm not sure what it could be: low SOC, or something with RAM.
A new GPU (with SAM, probably) is throwing off my CPU OC that used to be stable. I think I'm gonna reset the whole system and start over, or go back to stock for a while.

It's time to take a look at those "OC guide" settings listed earlier.


----------



## Skinnered

^ Are you sure it's not overheating? I discovered that with those 2700+ clocks, in some games I really have to push the fan speed.


----------



## bunnybutt

Would an overheating GPU push the PC to hard restart to BIOS?

I think I'm gonna minimize my RAM OC first and keep the GPU the same. This is my first time having a powerhouse GPU, and it heats up the rest of the system like never before, and I don't want vacuum fans on the PC just to survive. I'll try turning down the RAM OC first and go from there.


----------



## Skinnered

Never mind, I overlooked the BIOS part. Hard shutdown/restart yes, BIOS no; that is something with CPU or memory settings indeed.


----------



## bunnybutt

I decided to start fresh. BIOS reset (might update), and I also used DDU to remove and reinstall the GPU drivers.

With a totally stock (w/ XMP) system, my Time Spy score dropped to 20,462.

Before, my "stable" OC system had hit 21,791.

I think my RAM wants a little more voltage than stock to hit XMP. Maybe I gave it a good wallop with all of my testing.
I might just stop poking around and leave it alone.


----------



## ZealotKi11er

bunnybutt said:


> Would an overheating gpu push the PC to hard restart to bios?
> 
> I think i’m gonna minimize my RAM OC first, and keep the gpu the same. This is my first time to have a powerhouse gpu and it heats up the rest of the system like not before, and I don’t want vacuum fans on the PC just to survive. I’ll try first turning down ram OC and go from there.


Overheating GPU will shut down the system.


----------



## bunnybutt

ZealotKi11er said:


> Overheating GPU will shut down the system.


I manually entered my XMP settings into the BIOS, as I've done before, and I kept crashing on the desktop when starting up. I think it was a voltage issue; I added some and it didn't crash… but I also saw it happening after Radeon loaded up, so maybe my drivers were corrupt. That's why I did a fresh install.

What is considered overheating though?? I've been testing a very light game and temps were around 50°C. I tested a demanding game and junction temps were 81°C, with 57°C edge.



EDIT: You know what I think the trouble was? Trying to force Command Rate 1T.
In all my past, I've always left it on Auto, even with manual XMP settings.
Voltages start to drop if I leave CR on Auto.
So it looks like I just realized my RAM simply needs more voltage to actually run 1T.
So maybe my RAM is not bad, and maybe the GPU is not overheating.


----------



## CS9K

bunnybutt said:


> EDIT: You know what I think the trouble was? Trying to force Command Rate 1T.


Can confirm: true 1T (GDM _off_) at anything over DDR4-3500, even with B-die, takes a buttload of voltage to make happen


----------



## frankangelillo01

I don't think it's a limited-edition card. More like it's so expensive that only the elite few can buy it, maybe, haha. I have a Nitro+ SE and I've never even come close to boosting to 2800. The most I've seen so far WITHOUT MPT is close to 2700 in game. In Time Spy, though, it will crash sometimes. Still fine-tuning it..




bunnybutt said:


> Of course when I try to find all the old links I was reading, or forum posts... I cant find them... I was honestly led to believe that these things were actual "collectors Items" with limited production, or I wouldn't have jumped at it so fast.
> I freaked out and bought one cause of this reasoning, but its not looking like that anymore.... considering getting a return and buying again to get a cheaper price since they keep playing games.
> 
> This is what I own:
> 
> 
> 
> 
> 
> 
> 
> 
> TOXIC AMD Radeon™ RX 6900 XT Extreme Edition
> 
> 
> TOXIC AIO Cooling Technology - One Click TOXIC BOOST Up to 2730 MHz
> 
> 
> 
> 
> www.sapphiretech.com
> 
> 
> 
> 
> 
> Yes, I currently have it set at 2830, and that allows it to boost to 2750-2790, up to actual 2830mhz.
> 
> They advertise 2730 boost, but even with a COLD GPU running a first time stress test, in factory settings the card would only hit 2670mhz. It wont hit the stated "2730mhz" or higher, unless I turn up the clock speed to "2830"mhz
> 
> I emailed Sapphire to ask about it, and thats when I realized they are heavily using the advertising phrase "up to" to discuss the Boost clock speed.
> 
> Just curious if I could cause harm by turning up the clock so high. I just came from Nvidia. the last AMD/ATi card I had was when I was 12


----------



## frankangelillo01

It seems, and I may be wrong here, but as a whole people generally like the XFX card; it's a long card though, so check your space. I personally have a Nitro+ 6900xt for cosmetic reasons, and I love it.



Balsagna said:


> So is there a clear winner on what is the fastest XTXH non reference cards, rather a general idea that X card seems to be the faster of the bunch?


----------



## J7SC

CS9K said:


> Can confirm, true 1T (GDM _off_) at anything over DDR4 3500, even with B-die, takes a buttload of voltage to make happen


...At 3800+ for Ryzen DDR4, I much prefer GDM on and a slight undervolt on the RAM, especially with 4 sticks of B-die SR. At DDR4-3200 or so, it can be a different matter. Btw, I also have direct RAM fans to keep it below 40 C under heavy loads.

...here is an oldie but still relevant > article which also looks at frequency vs timing and other parameters


----------



## jonRock1992

bunnybutt said:


> I manually entered my XMP settings into bios, as i’ve done before, and I kept crashing on desktop, when starting up. I think it was a voltage issue, I added some and it didn’t crash… but I also saw it happening after Radeon loaded up- so maybe my drivers were corrupt. That’s why I did a fresh install.
> 
> What is considered overheating though?? I’ve been testing a very light game and temps were around 50. I tested a demanding game and Junction temps were 81, with 57 edge
> 
> 
> 
> EDIT: You know what I think the trouble was? Trying to force Command Rate 1T.
> In all my past, I’ve always left on Auto, even in manual XMP settings.
> Voltages are starting to drop if I leave on CR on auto.
> So then it looks like I just realized my ram simply needs more voltage to actually run 1T.
> So maybe my RAM is not bad, and maybe the GPU is not overheating.


Try playing Red Dead Redemption 2 with the Vulkan renderer, max settings, and an uncapped frame rate. This exposed an instability in my memory OC that even 4 passes of MemTest86 didn't detect. I had to dig through Event Viewer to figure out that the crashes were related to RAM.


----------



## Blameless

bunnybutt said:


> Would an overheating gpu push the PC to hard restart to bios?


GPU memory/cache errors can induce WHEA cache hierarchy errors for the CPU and cause the system to reset, especially if SAM is enabled.


----------



## CS9K

J7SC said:


> ...At 3800 + for Ryzen DDR4, I much prefer GDM on and a slight undervolt on the RAM, especially w/ 4 stick of b-die SR. At DDR4 3200 or so, it can be a different matter. Btw, I also have direct RAM fans to keep it below 40 C under heavy loads.
> 
> ...here is an oldie but still relevant > article which also looks at frequency vs timing and other parameters


I prefer to not fiddle with GDM, but to each their own. I have excellent case airflow, so running 3733 @ 1T is fine, I'm under 1.5V Vdimm and 45C under daily use; I'll take it.


----------



## bunnybutt

J7SC said:


> ...At 3800 + for Ryzen DDR4, I much prefer GDM on and a slight undervolt on the RAM, especially w/ 4 stick of b-die SR. At DDR4 3200 or so, it can be a different matter. Btw, I also have direct RAM fans to keep it below 40 C under heavy loads.
> 
> ...here is an oldie but still relevant > article which also looks at frequency vs timing and other parameters


Thank you for the info and the link. I love reading all the RAM info I can ever find. I haven't seen this one yet.

@Blameless I'll start checking Event Viewer next time something happens.
Currently I put all my RAM settings to "manual XMP/auto", with the typical undervolt I used to run. Going to REALLY take my time now with RAM OCing.
Ever since I did this, although it's only been a few days, all my troubles have stopped. The GPU is still at the same settings, except 1155 mV instead of 1200.

Sucks. I've seen 4000MHz, ran 3800, then stepped down to 3600 for better stability, but it looks like I'm about to stick with XMP 3200 and focus on super-tight racecar settings. Now I see why so many people in the past were trying to help and warn me that people don't usually get high RAM OCs.


----------



## 99belle99

bunnybutt said:


> Thank you for the info and the link. I love reading all the RAM info I can ever find. I haven’t seen this one yet.
> View attachment 2539929
> 
> 
> @Blameless Ill start checking event viewer next time something happens.
> Currently I put all my RAM settings to "manual XMP/auto", with the typical undervolt I used to run. Going to REALLY take my time now with RAM OCing.
> Ever since I did this, although it’s only been a few days- all my troubles have stopped. GPU is still at the same settings, except 1155v instead of 1200.
> 
> 
> Sucks. I've seen 4000MHz, ran 3800, then stepped to 3600 for better stability, but looks like I'm about to stick with XMP 3200, and focus on super tight racecar settings. Now I see why so many people in the past were trying to help and warn me that people don't usually get high RAM OCs.


Isn't 3600MHz the sweet spot, as you can have Infinity Fabric at 1800MHz for 1:1:1? Ryzen runs better that way, since at 3200MHz you'd be leaving Infinity Fabric clock on the table.
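The arithmetic behind that: DDR4 transfers twice per memory clock, so the rated speed halves to give the memory clock (MCLK), and 1:1 operation means FCLK (Infinity Fabric) matches it:

```python
# DDR rate (MT/s) -> memory clock (MHz); 1:1 operation means FCLK equals this.
def fclk_for_1to1(ddr_rate: int) -> int:
    return ddr_rate // 2

print(fclk_for_1to1(3600))  # 1800 MHz FCLK
print(fclk_for_1to1(3200))  # 1600 MHz FCLK
```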


----------



## bunnybutt

99belle99 said:


> Isn't 3600MHz the sweet spot as you can have infinity fabric at 1800MHz for a 1:1:1. Ryzen runs better that way as at 3200Mhz you will not be in 1:1:1 mode.


That's why I was trying my hardest to get 3600 to run, but I was already at 1.44-1.48V, 14-14-14, and sometimes saw temps upwards of 70°C.

I used to have a fan stuck to the back of the case to blow air directly on the back of the mobo, but I removed it when upgrading to NF-A12x25s because it was too loud. Maybe I need to put the fan back, but use the old Noctua I had there before.


----------



## lestatdk

My RAM is 4000 MHz with kind of crappy CL16-19-19-39 stock timings. It can run CL14-16-17-34 @ 3600, 1T with GDM disabled, at around 1.46V if I remember correctly. I used the Ryzen DRAM Calculator to get the timings. So far no issues with games or benches. It outperforms the stock timings at 4000, probably mostly due to being able to run 1:1:1.
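A rough way to see why the tighter 3600 profile can win even before the 1:1 fabric benefit: first-word latency in nanoseconds is the CAS count over the actual memory clock (half the DDR rate):

```python
# First-word latency: CAS cycles * cycle time. Cycle time in ns is
# 2000 / ddr_rate, since the memory clock is half the transfer rate.
def first_word_ns(ddr_rate: int, cas: int) -> float:
    return cas * 2000 / ddr_rate

print(round(first_word_ns(4000, 16), 2))  # 8.0 ns
print(round(first_word_ns(3600, 14), 2))  # 7.78 ns
```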


----------



## J7SC

bunnybutt said:


> that’s why I was trying my hardest to get 3600 to run, but I was already at 1.44-1.48v, 14-14-14, and sometimes saw temps upwards of the 70c.
> 
> I used to have a fan stuck to the back case to blow air directly on the back of the mobo, but i removed it when upgrading to nfa12x25 cause it was too loud. Maybe I need to put the fan back, but use the old noctua I had there before


I run 'helper fans' above the VRM and RAM on every single system I have, be they for work or play. I find that once RAM gets to the upper 40 C / low 50 C stage, random funny things start happening.



lestatdk said:


> My RAM is 4000 MHz with kind of crappy CL16-19-19-39 stock timings. Can run CL14-16-17-34 @ 3600 . 1T and GDM disabled. Around 1.46V if I remember correctly. Used the Ryzen DRAM calculator to get the timings. So far no issues with games or benches. Outperforms compared to the stock timings at 4000. Probably mostly related to being able to run 1:1:1


I tried w/ GDM on and off and settled for 'on' re. finer control at upper speeds - which is not to say GDM off is 'bad' if you can get it working w/o much extra voltage at 3800+, though my AIDA results were within margin of error anyhow.

The 5950X does have a decent IF controller, and for 3800 and 4000 (1:1:1), GDM made more sense. I still have to revisit IF2000 / DDR4000 (RAM is 4000 CL15 nominal, 4x8 Sam B-die, SR), but early tests showed the odd WHEA error before other adjustments such as raising SoC just a tad; besides, it becomes a real trade-off between latency and bandwidth. For now, I undervolt the RAM at 3800 (1:1:1) 'tight' w/ GDM on and enjoy a completely trouble-free system. FYI, I could lower tFAW down to 16 w/o RAM test failures, but in some GPU apps it can hurt low 1% / 10% frame times.
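On the tFAW note: 16 is about as low as it usefully goes, because tFAW windows four consecutive row activations and is normally held at or above 4 x tRRD_S. A sketch of that constraint (tRRD_S = 4 is an assumed illustrative value, not quoted from the post):

```python
# tFAW must cover four activates spaced tRRD_S apart (common guideline).
def min_tfaw(trrd_s: int) -> int:
    return 4 * trrd_s

print(min_tfaw(4))  # 16
```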


----------



## 99belle99

J7SC said:


> I run 'helper fans' above the VRM and RAM on every single system I have, be they for work or play. I find that once RAM gets to the upper 40 C / low 50 C stage, random funny things start happening.
> 
> 
> 
> I tried w/ GDM on and off and settled for 'on' re. finer control at upper speeds - which is not to say the GDM off is 'bad' if you can get it working w/o much extra voltage 3800+ though my Aida results were within margin of error anyhow.
> 
> The. 5950X does have a decent IF controller and for 3800 and 4000 (1:1:1), GDM made more sense. I still have to revisit IF2000 / DDR4000 (RAM is 4000 CL15 nominal 4x8 Sam-B-die, SR) below, but early tests showed the odd WHEA error before other adjustments such as raising SoC just a tad, besides, it becomes a real trade-off between latency and bandwidth. For now, I undervolt the RAM at 3800 (1:1:1) 'tight' w/GDM on and enjoy a completely trouble-free system. FYI, I could lower tFAW down to 16 w/o RAM test failures, but in some GPU apps, it can hurt low 1% / 10% frame times.\
> View attachment 2539937


Wow that's pretty good RAM you got there. Especially the 3800MHz as I can run 3800MHz myself but with looser timings than you. Never tried any higher than 3800MHz as I only have a 3700X and I doubt I could get 2000MHz IF to work.


----------



## J7SC

99belle99 said:


> Wow that's pretty good RAM you got there. Especially the 3800MHz as I can run 3800MHz myself but with looser timings than you. Never tried any higher than 3800MHz as I only have a 3700X and I doubt I could get 2000MHz IF to work.


Thanks. IF2000/4000 'looks good' but it is a lot trickier though not impossible to get error-free, compared to 3800, especially at tight timings. As I said, it's a trade-off. But no doubt the 4000 kit is a good one, even slightly better than a not dissimilar kit (3866) I use on the 3950X below (which is primarily a home-office work machine).

You might as well try '2000 / 4000' on your 3700X, after saving your current best BIOS profile. Who knows, it might do it. First, just try out something like CL18-18-18 to see if the IF can handle 4000. If it does, time to tighten things until it's no longer error-free.

All that said, for now I'm happy that I'm not chasing near-nonexistent, super-pricey DDR5. I have friends w/ new Z690 builds who are basically going mad re. DDR5.


----------



## CS9K

bunnybutt said:


> Thank you for the info and the link. I love reading all the RAM info I can ever find. I haven’t seen this one yet.
> View attachment 2539929


Careful what you ask for there, Johnny Five, once you crack open JEDEC's JESD79-4a, you'll be having nightmares about memory timings 



J7SC said:


> I run 'helper fans' above the VRM and RAM on every single system I have, be they for work or play. I find that once RAM gets to the upper 40 C / low 50 C stage, random funny things start happening.


tRFC/tREFI are the main temperature sensitive settings. Even with B-die, the closer to 50C you get (and the tighter you have tRFC), the more risk one runs of losing data. It's a fine line to walk.

TL;DR for all: That 1:1:1 ratio is the goal. It can be met at 3733 or 3800, but that's about as high as your average Ryzen 5000 chip will go on fclk before it catches a case of the WHEAs. I actually had to back mine down off of 3800 because I kept having 1 or 2 wheas per day, and I can't live with my setup throwing _any_ errors.
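For reference, the 1:1:1 ratio just means fclk = uclk = mclk = half the DDR transfer rate, and tRFC's wall-clock value (the temperature-sensitive part) scales with the same clock. A small sketch (Python; the tRFC value below is only an example, not a recommendation):

```python
# 1:1:1 means infinity fabric (fclk), memory controller (uclk) and
# memory (mclk) clocks all run at half the DDR transfer rate.

def one_to_one_clock(ddr_mt_s):
    """fclk/uclk/mclk in MHz for a 1:1:1 setup at the given DDR rate."""
    return ddr_mt_s / 2

def trfc_ns(trfc_cycles, ddr_mt_s):
    """tRFC converted from memory-clock cycles to nanoseconds."""
    return trfc_cycles / (ddr_mt_s / 2) * 1000

print(one_to_one_clock(3800))        # 1900.0 MHz - around the Ryzen 5000 fclk ceiling
print(round(trfc_ns(560, 3800)))     # ~295 ns for an example tRFC of 560
```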

Here's my current ZenTimings, for anyone curious:










With Intel, meanwhile, the sky is the limit for memory. I had a Z390 Aorus Pro, 9700K @ 5.1; 1.35V; no avx offset, 4x8GB of G.Skill 3200C14 B-die, and little baby Noctuas to keep my memory cool. I got that setup running at DDR4 4133 flat 16's on 1.48V. A bit spicy for the original 3200C14 B-die kit, but it was rock stable once I tuned in the secondaries. I miss that system, it was ridiculous for what it was 💗

Behold, the most adorable hackjob-engineered memory cooling fans ever


----------



## jonRock1992

Blameless said:


> GPU memory/cache errors can induce WHEA cache hierarchy errors for the CPU and cause the system to reset, especially if SAM is enabled.


Holy ****. I wonder if that's what's been going on with my system. I recently started using 2300 GPU FCLK in games, and I've gotten random hard crashes related to CPU cache hierarchy errors. I thought it was related to the BIOS update that I did. I did 40 passes of Time Spy GT2 with these settings, though. Still skeptical that it's related to the BIOS update. Probably gonna go back to BIOS 3801 on my Dark Hero.


----------



## CS9K

jonRock1992 said:


> Holy ****. I wonder if that's what's been going on with my system. I recently started using 2300 GPU FCLK in games, and I've gotten random hard crashes related to CPU cache hierarchy errors. I thought it was related to the BIOS update that I did. I did 40 passes of Time Spy GT2 with these settings, though. Still skeptical that it's related to the BIOS update.


Oh yeah dude, naw. With 1150mV as the max SOC voltage, 2100 fclk is about all an RX 6900 XT can do. Watch your SOC Voltage carefully in hardware info at 1940MHz fclk and go run Superposition "4k Optimized". Then 2000MHz fclk, then 2050MHz fclk. You'll see that you bonk up against your SOC max voltage _right_ quick.


----------



## jonRock1992

CS9K said:


> Oh yeah dude, naw. With 1150mV as the max SOC voltage, 2100 fclk is about all an RX 6900 XT can do. Watch your SOC Voltage carefully in hardware info at 1940MHz fclk and go run Superposition "4k Optimized". Then 2000MHz fclk, then 2050MHz fclk. You'll see that you bonk up against your SOC max voltage _right_ quick.


My soc voltage is set a lot higher though. I'll probably just drop that down to 2100MHz and decrease voltage. There isn't much performance gained from doing it.


----------



## CS9K

jonRock1992 said:


> My soc voltage is set a lot higher though. I'll probably just drop that down to 2100MHz and decrease voltage. There isn't much performance gained from doing it.


Ah okay, my bad. Ideally, the SoC voltage reading should never bonk up against whatever maximum SoC voltage is set; I've found instability and ever-so-slight performance regression when SoC voltage reading does meet the max and stay there.


----------



## bunnybutt

I’m kinda hoping for a long term purchase. I’m envisioning 64gb DDR4 5000mhz cl14 for maybe $300-500 right before DDR4 EOL. They advertise the numbers, so I don’t see why I can’t get a good DDR4 upgrade in the future, as they stress everyone out that they “need” DDR5.

I still have a system with overclocked DDR2, guys. Those 3T-3T-3T timings are killer, son!


Offtopic: I’m sure by this point, we ALL have a free copy of FarCry6, one way or another, from all the AMD stuff we have been buying…
I’m surprised that my “ultimate AMD” setup can barely run the game… only “High” settings and I’m averaging like 120fps.
It’s MORE than playable, but… eeehh… why such a performance hit when I feel like I’m just playing a new flavor of FC3?
I’m definitely just bored of the Far Cry series at this point. It’s blatant cookie-cutter mechanics/gameplay but with newly created characters. I can’t believe they won’t stop pushing this.

Done 1, 2, 3, Blood Dragon, 4 (this is where I started to say “no thanks”). Skipped 5, got 6 for free… Meh. I’d rather max out the graphics on 1 or Blood Dragon and live in nostalgia.


----------



## bunnybutt

CS9K said:


> Careful what you ask for there, Johnny Five, once you crack open JEDEC's JESD79-4a, you'll be having nightmares about memory timings
> 
> 
> 
> tRFC/tREFI are the main temperature sensitive settings. Even with B-die, the closer to 50C you get (and the tighter you have tRFC), the more risk one runs of losing data. It's a fine line to walk.
> 
> TL;DR for all: That 1:1:1 ratio is the goal. It can be met at 3733 or 3800, but that's about as high as your average Ryzen 5000 chip will go on fclk before it catches a case of the WHEAs. I actually had to back mine down off of 3800 because I kept having 1 or 2 wheas per day, and I can't live with my setup throwing _any_ errors.
> 
> Here's my current ZenTimings, for anyone curious:
> View attachment 2539950
> 
> 
> 
> With Intel, meanwhile, the sky is the limit for memory. I had a Z390 Aorus Pro, 9700K @ 5.1; 1.35V; no avx offset, 4x8GB of G.Skill 3200C14 B-die, and little baby Noctuas to keep my memory cool. I got that setup running at DDR4 4133 flat 16's on 1.48V. A bit spicy for the original 3200C14 B-die kit, but it was rock stable once I tuned in the secondaries. I miss that system, it was ridiculous for what it was 💗
> 
> Behold, the most adorable hackjob-engineered memory cooling fans ever
> View attachment 2539951



I’ve been having a hard time finding info about VDDP and VDDG… can’t be lower than this, can’t exceed that… Can you do real damage by setting the VDDs too high?
I’ve been really trying to learn for years… I wish I could sit down in real time with you guys; I get so confused by how to work through RAM OCs.

Also: I actually have one of those fans. My buddy has a 3D printer but he asked me to bring my entire rig over to his house. He says he can make and design a small bracket that could mount the fan just above the ram and blow air through from the top. That way it won’t mess up my “pretty” setup. Trying to actually stick with Vapid principles and I’m trying to focus on form>function.


----------



## J7SC

CS9K said:


> (...)
> 
> Behold, the most adorable hackjob-engineered memory cooling fans ever
> View attachment 2539951


Nice, but what about the duct tape edition  ?

I still have those age-old GSkill RAM clip-on coolers which were meant for DDR3...for DDR4, the metal clamps came dangerously close to touching the back of the GPU (which has no backplate) and shorting out some solder points, soooo...go big or go home on this Intel 5960X (8c/16t) setup with the 32GB kit the 3950X is now running (above). Also note the GPU box foam strip underneath the fans on top of the GPU - worked great w/ double-sided sticky tape


----------



## CS9K

bunnybutt said:


> I’ve been having a hard time finding info about VDDP and VDDG… can’t be lower than this, can’t exceed that… Can you do real damage by setting the VDDs too high?
> I’ve been really trying to learn for years… I wish I could sit down in real time with you guys; I get so confused by how to work through RAM OCs.


I struggled with VDDP and VDDG voltages for a while, but in the end, I found no combination of VDDP and VDDG voltages that would prevent the WHEAs at DDR4 3800 and above. I just left them on Auto at 3733, and life has been good. That said, I _did_ have to manually set SoC voltage and other things, but yeah, I wouldn't worry about them all too much.



J7SC said:


> Nice, but what about the duct tape edition  ?
> 
> I still have those age-old GSkill RAM clip-on coolers which were meant for DDR3...for DDR4, the metal clamps came dangerously close to touching the back of the GPU (which has no backplate) and shorting out some solder points, soooo...go big or go home on this Intel 5960X (8c/16t) setup with the 32GB kit the 3950X is now running (above). Also note the GPU box foam strip underneath the fans on top of the GPU - worked great w/ double-sided sticky tape


SO proud 💗


----------



## 99belle99

J7SC said:


> Nice, but what about the duct tape edition  ?
> 
> I still have those age-old GSkill RAM clip-on coolers which were meant for DDR3...for DDR4, the metal clamps came dangerously close to touching the back of the GPU (which has no backplate) and shorting out some solder points, soooo...go big or go home on this Intel 5960X (8c/16t) setup with the 32GB kit the 3950X is now running (above). Also note the GPU box foam strip underneath the fans on top of the GPU - worked great w/ double-sided sticky tape
> View attachment 2539955


I'm the same; I have a double-fan RAM cooler I got many years ago for X58 CL7 DDR3. Just reading through this thread, the talk and pictures of RAM coolers have me thinking of trying it on my current system.


----------



## LtMatt

CS9K said:


> Oh yeah dude, naw. With 1150mV as the max SOC voltage, 2100 fclk is about all an RX 6900 XT can do. Watch your SOC Voltage carefully in hardware info at 1940MHz fclk and go run Superposition "4k Optimized". Then 2000MHz fclk, then 2050MHz fclk. You'll see that you bonk up against your SOC max voltage _right_ quick.


2200 FCLK daily 24/7 gaming overclock club checking in. 2225Mhz eventually causes a crash though.

Pretty sure it makes bugger all difference in gaming for the most part.


----------



## IIISLIDEIII

I was directed to buy the PowerColor Liquid Devil Ultimate, but today I found a reference 6900 with an EK waterblock already mounted for half the price. I know the reference card is not close to the Liquid Devil, but even half the price is important to me.
The card arrives on January 3. To figure out whether it will be a lucky reference or not, what overclocking levels should I expect if the card is good?
Thank you


----------



## Thanh Nguyen

LtMatt said:


> 2200 FCLK daily 24/7 gaming overclock club checking in. 2225Mhz eventually causes a crash though.
> 
> Pretty sure it makes bugger all difference in gaming for the most part.


Able to run bf2042 with that setting?


----------



## geriatricpollywog

Microcenter Tustin is fully stocked with 6800XT cards. Are any of these XTX-H?


----------



## ZealotKi11er

Ultimate is most likely XTXH


----------



## 99belle99

IIISLIDEIII said:


> I was directed to buy the PowerColor Liquid Devil Ultimate, but today I found a reference 6900 with an EK waterblock already mounted for half the price. I know the reference card is not close to the Liquid Devil, but even half the price is important to me.
> The card arrives on January 3. To figure out whether it will be a lucky reference or not, what overclocking levels should I expect if the card is good?
> Thank you


Mid 2600-2700MHz is a decent card for reference.


----------



## LtMatt

Thanh Nguyen said:


> Able to run bf2042 with that setting?


I've not installed that game so can't comment, but I've heard it can cause stability problems with big overclocks.

What I will say is that it is stable running Warzone, Halo Infinite, Vanguard, Far Cry 6, etc for hours on end with power draw up to 400W.


----------



## CS9K

99belle99 said:


> Mid 2600-2700MHz is a decent card for reference.


Concur. 2700MHz is my daily-driver setting on water with a 400W power limit.


----------



## J7SC

geriatricpollywog said:


> Microcenter Tustin is fully stocked with 6800XT cards. Are any of these XTX-H?
> 
> View attachment 2540060
> 
> View attachment 2540062
> 
> View attachment 2540061


I also think the 'ultimate' series might be XTX'H', but to be certain, check Powercolor's site by comparing boost and game clocks of all their 6900XT models.

-----

Elsewhere, it's cold outside so a bit of open-window bench shenanigans...I was trying to break my own PortRoyal high mark for the 6900XT and used TDVmin / 1,200v...got a new personal best score for the 6900XT (XTX), but maybe add a bit more voltage to get past 2800 MHz from 2780 MHz for the PortRoyal run? That's the problem w/ TDVmin...just one more voltage boost 

Superposition4K usually allows for another 60 MHz or so but also could use some more MPT PL...but it's getting too cold in here (was -8C outside this morning).


----------



## jonRock1992

J7SC said:


> I also think the 'ultimate' series might be XTX'H', but to be certain, check Powercolor's site by comparing boost and game clocks of all their 6900XT models.
> 
> -----
> 
> Elsewhere, it's cold outside so a bit of open-window bench shenanigans...I was trying to break my own PortRoyal high mark for the 6900XT and used TDVmin / 1,200v...got a new personal best score for the 6900XT (XTX), but maybe add a bit more voltage to get past 2800 MHz from 2780 MHz for the PortRoyal run? That's the problem w/ TDVmin...just one more voltage boost
> 
> Superposition4K usually allows for another 60 MHz or so but also could use some more MPT PL...but it's getting too cold in here (was -8C outside this morning).
> View attachment 2540070


Can confirm that the red devil ultimate is XTXH. It is terrible on the stock cooler though. Definitely recommend water-cooling with that GPU.


----------



## 99belle99

99belle99 said:


> I'm the same I have a double fan RAM cooler I got many years ago for x58 CL7 DDR3. Just reading through this thread and the talk and pictures of RAM coolers have me thinking of trying it on my current system.


Well, I hooked up the RAM cooler. It sits a bit high off the top of the RAM and I may just take it off again. I turned the fan speed down, stuck my finger in, and cannot even feel air being pushed onto the RAM sticks; if I turn the fan speed up, air is being pushed, but it is really noisy. My PC is normally silent while browsing the net, so this new cooler has to go.


----------



## bunnybutt

Could you guys give me a bit more detail about what you mean when you speak of “SOC & FCLK” for the GPU?

I had no idea these things existed or how to find them, but I am curious what the voltage ranges are for these inputs.


----------



## ZealotKi11er

bunnybutt said:


> Could you guys give me a bit more detail about what you mean when you speak of “SOC & FCLK” for the GPU?
> 
> I had no idea these things existed or how to find them, but I am curious what the voltage ranges are for these inputs.


SOC does nothing. FCLK benefit is minimal so in general not worth the trouble to increase them.


----------



## nyk20z3

What's the market for a 6900XT Strix lightly used?


----------



## IIISLIDEIII

CS9K said:


> Concur. 2700MHz is my daily-driver setting on water with a 400W power limit.


Could you please tell me how you set the MPT power limit and TDP to get 400W? Thank you



99belle99 said:


> Mid 2600-2700MHz is a decent card for reference.


Thanks. Out of curiosity, how much can a custom XTXH reach instead?


----------



## ZealotKi11er

IIISLIDEIII said:


> Could you please tell me how you set the MPT power limit and TDP to get 400W? Thank you
> 
> 
> Thanks. Out of curiosity, how much can a custom XTXH reach instead?


XTXH will clock 2700-2800 (Set Clk). XTX will clock 2500-2750 (Set Clk).


----------



## lawson67

ZealotKi11er said:


> XTXH will clock 2700-2800 (Set Clk). XTX will clock 2500-2750 (Set Clk).


Not all. I've had four XTXH RX 6900 XT cards now, and it's luck of the draw how well even your XTXH is binned. My first card was a PowerColor Ultimate which could loop Time Spy GT2 at 2870MHz and was stable in all games using that clock ("that's a really good XTXH"). Then I had a PowerColor Liquid Devil Ultimate that was NOT stable at 2600-2700MHz, so I sent it back. Then I had a Sapphire Extreme that was stable at 2700-2800MHz but artifacted at anything over 2800MHz, so I sent that back. Now I have another Sapphire Extreme that is stable at 2750-2850MHz. So it's luck of the draw for anything over 2700MHz, even with so-called hand-picked XTXH silicon. It's also luck of the draw whether your VRAM will go any faster than 2100MHz FT2 without errors, and that's the same for XTX and XTXH cards.


----------



## CS9K

lawson67 said:


> Not all. I've had four XTXH RX 6900 XT cards now, and it's luck of the draw how well even your XTXH is binned. My first card was a PowerColor Ultimate which could loop Time Spy GT2 at 2870MHz and was stable in all games using that clock ("that's a really good XTXH"). Then I had a PowerColor Liquid Devil Ultimate that was NOT stable at 2600-2700MHz, so I sent it back. Then I had a Sapphire Extreme that was stable at 2700-2800MHz but artifacted at anything over 2800MHz, so I sent that back. Now I have another Sapphire Extreme that is stable at 2750-2850MHz. So it's luck of the draw for anything over 2700MHz, even with so-called hand-picked XTXH silicon. It's also luck of the draw whether your VRAM will go any faster than 2100MHz FT2 without errors, and that's the same for XTX and XTXH cards.


The truth be spoken. Hear hear! 

The silicon lottery™ is real, even with binned GPU's.


----------



## IIISLIDEIII

lawson67 said:


> Not all. I've had four XTXH RX 6900 XT cards now, and it's luck of the draw how well even your XTXH is binned. My first card was a PowerColor Ultimate which could loop Time Spy GT2 at 2870MHz and was stable in all games using that clock ("that's a really good XTXH"). Then I had a PowerColor Liquid Devil Ultimate that was NOT stable at 2600-2700MHz, so I sent it back. Then I had a Sapphire Extreme that was stable at 2700-2800MHz but artifacted at anything over 2800MHz, so I sent that back. Now I have another Sapphire Extreme that is stable at 2750-2850MHz. So it's luck of the draw for anything over 2700MHz, even with so-called hand-picked XTXH silicon. It's also luck of the draw whether your VRAM will go any faster than 2100MHz FT2 without errors, and that's the same for XTX and XTXH cards.


In practice you're saying there's no guarantee that spending more for a custom XTXH gets you a higher clock than a reference XTX; do I understand correctly?


----------



## Neoki

IIISLIDEIII said:


> In practice you're saying there's no guarantee that spending more for a custom XTXH gets you a higher clock than a reference XTX; do I understand correctly?


Had a Gigabyte 6900xt Waterforce (XTXH, $2200) that ran at only 2710mhz core / but 2120mhz memory stable. But then had to RMA Return due to LED's not syncing. Got a XFX 6900xt Zero (XTXH, $1800) that runs 2780mhz Core / 2040mhz memory (yes ONLY 2040mhz) stable. So yes, very much luck of the draw even on the XTXH cards that run 1500+ USD. My Zero is a complete memory dud.


----------



## IIISLIDEIII

Neoki said:


> Had a Gigabyte 6900xt Waterforce (XTXH, $2200) that ran at only 2710mhz core / but 2120mhz memory stable. But then had to RMA Return due to LED's not syncing. Got a XFX 6900xt Zero (XTXH, $1800) that runs 2780mhz Core / 2040mhz memory (yes ONLY 2040mhz) stable. So yes, very much luck of the draw even on the XTXH cards that run 1500+ USD. My Zero is a complete memory dud.


Sorry to hear this. I was indecisive for a few days and came close to pushing the button on a Liquid Devil Ultimate. Maybe I was lucky or maybe not, but in the end I chose a very normal reference XTX with an EK waterblock. I saved some money; it will definitely clock lower than an XTXH, but I accept it.

Sorry if I ask, but this is my first AMD card and I'm no expert. When you say your XFX can only go to 2040MHz on the memory, do you mean this adjustment on the VRAM?:


Does it make a lot of difference in-game to be able to raise the VRAM clock?
What is a good level to set it at?


----------



## _AntLionBR_

Hello everyone!

I have a question about the voltage.

What's the use of setting the voltage of my RX 6900XT Toxic EE in the SR to 1.15V and in the Time Spy/Fire Strike test in many moments the video card pulls the 1.20V? I would like to find the minimum voltage for 2730MHz, but apparently it doesn't work.

In the photo it was running Time Spy, I set 1.15V, but I was pulling 1.20V at that time.


----------



## ZealotKi11er

_AntLionBR_ said:


> Hello everyone!
> 
> I have a question about the voltage.
> 
> What's the use of setting the voltage of my RX 6900XT Toxic EE in the SR to 1.15V and in the Time Spy/Fire Strike test in many moments the video card pulls the 1.20V? I would like to find the minimum voltage for 2730MHz, but apparently it doesn't work.
> 
> In the photo it was running Time Spy, I set 1.15V, but I was pulling 1.20V at that time.


You have to change the voltage with MPT.

What that voltage slider does is adjust the voltage used at the stock clock.

For example, if the stock clock for your card is 2550MHz, it needs 1.2V to hit that clock. If you set the slider to 1.15V, then it will run at 1.15V. Now if you increase the clock to 2650MHz, you are moving up from that voltage/frequency point, so 2650MHz gets set at a higher voltage.
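The slider behaviour described here can be illustrated with a toy model (assuming, purely for illustration, a linear V/F curve; the real curve is a fused, nonlinear table, and every number below is made up):

```python
# Toy model of the Wattman voltage/clock sliders. Assumes a linear V/F
# curve purely for illustration; real cards use a fused, nonlinear
# table, and all numbers here are made up.

def required_voltage(clock_mhz, ref_clock=2550, top_voltage=1.200,
                     slope_v_per_mhz=0.0005):
    """Voltage the curve asks for at clock_mhz.

    ref_clock / top_voltage: the top point of the V/F curve, which is
    what the voltage slider moves. slope_v_per_mhz: assumed curve slope.
    """
    return top_voltage + slope_v_per_mhz * (clock_mhz - ref_clock)

# Slider dropped to 1.15V at the stock 2550MHz point:
print(round(required_voltage(2550, top_voltage=1.150), 3))  # 1.15

# Raising the max clock to 2650MHz walks up the curve, so the card asks
# for more than the slider value - which is what MPT's hard cap avoids:
print(round(required_voltage(2650, top_voltage=1.150), 3))  # 1.2
```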


----------



## _AntLionBR_

ZealotKi11er said:


> You have to change the voltage with MPT.
> 
> What that voltage slider does is adjust the voltage used at the stock clock.
> 
> For example, if the stock clock for your card is 2550MHz, it needs 1.2V to hit that clock. If you set the slider to 1.15V, then it will run at 1.15V. Now if you increase the clock to 2650MHz, you are moving up from that voltage/frequency point, so 2650MHz gets set at a higher voltage.


Thanks for replying and sorry for my english.

I don't know much about AMD as it's my first Radeon card I have. So if I want to lower the voltage of my video card is it only via MPT that will work for real?
What is the "clk" you mention, and what should I do with it?

Here is the photo of the MPT with the Toxic EE.


----------



## ZealotKi11er

Lower GFX voltage to 1150. This will not allow the voltage to go over 1150.


----------



## LtMatt

For undervolting I set my Toxic Extreme to 1125mV in MPT; this allows me to set 2760MHz in Radeon Software, which ensures 2700MHz+ in games.

Power usage generally around 300-350W peak at this voltage too so jobs a good un!


----------



## _AntLionBR_

ZealotKi11er said:


> Lower GFX voltage to 1150. This will not allow the voltage to go over 1150.


Thank you ZealotKi11er. I will do this!



LtMatt said:


> For undervolting I set my Toxic Extreme to 1125mV in MPT; this allows me to set 2760MHz in Radeon Software, which ensures 2700MHz+ in games.
> 
> Power usage generally around 300-350W peak at this voltage too so jobs a good un!


Could you please if possible give me all the configuration you use in MPT and Radeon software on your Toxic?


----------



## Skinnered

LtMatt said:


> For undervolting I set my Toxic Extreme to 1125mV in MPT; this allows me to set 2760MHz in Radeon Software, which ensures 2700MHz+ in games.
> 
> Power usage generally around 300-350W peak at this voltage too so jobs a good un!


Did it lower your temp much?
Gonna try 1150, to see if it will be enough for ~2750 ,in games.


----------



## Neoki

IIISLIDEIII said:


> sorry to hear this, i was a few days indecisive and went close to push button for a liquid devil ultimate, maybe i was lucky or maybe not but in the end i chose a very normal reference xtx with wb ek, i saved some money, definitely will go softer than a xtxh but I accept it.
> 
> Sorry if I ask but this is my first AMD, I'm no expert, when you say your xfx can only go to 2040mhz of memory, do you mean this adjustment on the vram ?:
> 
> 
> does it make a lot of difference in game to be able to go up with the adjustment of the vram?
> what is a good level of regulation?


Yes, it's in reference to the VRAM. You have to find your card's sweet spot; every card is unique. Most of the well-binned XTXH's see performance degradation above 2120MHz VRAM, even if it's "stable" at 2150. You can test this via Time Spy, or mining if you're into that. The same logic applies to XTX's, I imagine; just try and see what works for you.

Most of us run the "Fast Timing" option on the VRAM and then tune the overclock based on what's stable, and not performance degrading. Unless you have a memory dud like mine I'd expect most 6900xt's to hit 2100mhz safely. So you can start there and go up 10mhz at a time until you see a loss of performance, and then dial it back 10mhz. Then I'd recommend running something like OCCT VRAM test for a good 1hr to see if any errors pop up.
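That tuning loop can be sketched roughly like this (Python; `score_at` is a stand-in for whatever benchmark you run and record by hand):

```python
# Sketch of the VRAM sweet-spot sweep described above. `score_at` is a
# stand-in: in practice you would run Time Spy (or a game benchmark) at
# each VRAM clock and record the score yourself. score_at must return a
# score for every clock tested up to the first regression.

def find_vram_sweetspot(score_at, start=2100, stop=2200, step=10):
    """Raise the VRAM clock in `step` MHz increments; when the score
    drops (error correction kicking in before outright instability),
    dial back one step and stop."""
    best_clock, best_score = start, score_at(start)
    clock = start + step
    while clock <= stop:
        score = score_at(clock)
        if score < best_score:
            return best_clock  # last step before the regression
        best_clock, best_score = clock, score
        clock += step
    return best_clock

# Fake score curve that peaks at 2140MHz, then quietly degrades:
fake_scores = {2100: 100, 2110: 101, 2120: 102, 2130: 103,
               2140: 104, 2150: 103, 2160: 101}
print(find_vram_sweetspot(fake_scores.get))  # 2140
```

After landing on a clock, a long OCCT VRAM run is still the real confirmation.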

Good luck!


----------



## LtMatt

_AntLionBR_ said:


> Thank you ZealotKi11er. I will do this!
> 
> 
> Could you please if possible give me all the configuration you use in MPT and Radeon software on your Toxic?


I can, but no guarantee these settings will be stable for you.


----------



## IIISLIDEIII

Neoki said:


> Yes it's in reference to VRAM. And you have to find your cards sweetspot, every card is unique. Most of the good binned XTXH's get performance degradation above 2120mhz VRAM, even if it's "stable" at 2150. You can test this via timespy or mining if you're into that. Same logic applies to XTX's I imagine, just try and see what works for you.
> 
> Most of us run the "Fast Timing" option on the VRAM and then tune the overclock based on what's stable, and not performance degrading. Unless you have a memory dud like mine I'd expect most 6900xt's to hit 2100mhz safely. So you can start there and go up 10mhz at a time until you see a loss of performance, and then dial it back 10mhz. Then I'd recommend running something like OCCT VRAM test for a good 1hr to see if any errors pop up.
> 
> Good luck!


Thank you, very kind. As soon as I get the card I will do the tests.


----------



## bunnybutt

I read these comments and it makes me realize I'm not actually accomplishing anything, if I even am.

Using only Radeon Software, (it says) I'm doing 2700/2800, 2164 mem, 1145mV.
I have problems passing Time Spy with this voltage (need a minimum of 1175 for pure TS stability), but I haven't had any problems in games, so take that as you will.
I saw Time Spy scoring drop as I started to set Radeon over 2150MHz (actual is 2148MHz), but actual gaming performance has shown some gains at up to 2164MHz (2152MHz actual), so I leave it.
I have had it stable running at 2870 (with a higher voltage, obviously), but have more testing to do, because it can sometimes be Time Spy unstable. I've been chasing 2900, but it's just not TS stable, so I don't know how game-stable it would be. Have yet to really test.



All crashing issues have stopped since I set the RAM at (manual) XMP. Going to either start doing a heavy XMP undervolt, or try to test the waters on a new RAM OC since I've added the 6900XT.


----------



## Blameless

Still playing with Night Raid, which I'm finding to be very useful for teasing out the difference between core and SoC voltage issues.

I've come to realize that for optimal stability, SoC voltage cannot be too low, but it also cannot be too high, relative to core voltage. Incorrect SoC voltage will cause Night Raid to eventually blackscreen crash, requiring the system to be reset. Incorrect core voltage will just halt the test with 3DMark reporting an error.

Right now I've got baseline settings (2450-2550 core, 2124 fast memory, 2052 FCLK, 1269 SoC clock, 1137mV core, 1075mV SoC, 800mV VDDCI, 1300mV GDDR6) that I know will pass absurd amounts of 1600p Night Raid (31 hours, or ~1800 loops) without crashing and am in the process of seeing how far I can reasonably go on this second (also rather mediocre) 6900 OCF sample and still get the (mostly) unconditional stability I'm after.

I've also verified the general SoC voltage sweet-spot trend with my 6800 XT...it also becomes unstable in protracted runs if SoC is too high, though it's less picky than my 6900 XT.

With regard to DcBtc and DcTol, manipulating them has no apparent effect on this new 6900 XT sample, but a small and repeatable benefit to peak stable clocks on my heavily used 6800 XT.

I removed my PhyClk OC as it seems to induce subtle instability past 2100 mem clock or so. It's also almost meaningless as far as performance goes...it's measurable, barely, but so trivial that I'd only bother with it for benching, and only after absolutely everything else has been accounted for.


----------



## J7SC

Blameless said:


> Still playing with Night Raid, which I'm finding to be very useful for teasing out the difference between core and SoC voltage issues.
> 
> I've come to realize that for optimal stability, SoC voltage cannot be too low, but it also cannot be too high, relative to core voltage. Incorrect SoC voltage will cause Night Raid to eventually blackscreen crash, requiring the system to be reset. Incorrect core voltage will just halt the test with 3DMark reporting an error.
> 
> Right now I've got baseline settings (2450-2550 core, 2124 fast memory, 2052 FCLK, 1269 SoC clock, 1137mV core, 1075mV SoC, 800mV VDDCI, 1300mV GDDR6) that I know will pass absurd amounts of 1600p Night Raid (31 hours, or ~1800 loops) without crashing and am in the process of seeing how far I can reasonably go on this second (also rather mediocre) 6900 OCF sample and still get the (mostly) unconditional stability I'm after.
> 
> I've also verified the general SoC voltage sweet-spot trend with my 6800 XT...it also becomes unstable in protracted runs if SoC is too high, though it's less picky than my 6900 XT.
> 
> With regard to DcBtc and DcTol, manipulating them has no apparent effect on this new 6900 XT sample, but a small and repeatable benefit to peak stable clocks on my heavily used 6800 XT.
> 
> I removed my PhyClk OC as it seems to induce subtle instability past 2100 mem clock or so. It's also almost meaningless as far as performance goes...it's measurable, barely, but so trivial that I'd only bother with it for benching, and only after absolutely everything else has been accounted for.


Thanks for sharing the baseline settings above, very useful for comparative purposes, especially on some of the sub vars...Now, you may recall my opinion re. Night Raid for mobile, but I sure admire your tenacity, and your card's endurance: 31 hours, or ~1,800 loops?

Quick question on MPT Power Limits (W) and TDC Limits (A). My 6900XT's PCB is a true 3x8 pin. Stock values (air-cooled factory vbios) are 264W PL and 320A TDC. With ample custom w-cooling, thermal putty and the like, I have set MPT PL to 360W and raised TDC to 340A...how do you zero in on the best custom TDC (A) with the thermal improvements ?


----------



## Blameless

J7SC said:


> Thanks for sharing the baseline settings above, very useful for comparative purposes, especially on some of the sub vars...Now, you may recall my opinion re. Night Raid for mobile, but I sure admire your tenacity, and your card's endurance: 31 hours , or ~1,800 loops ?


That was just the end of the runs I used to dial in my baseline. Card probably has 200-250 hours of Night Raid on it at this point. I'm finding the highest resolution that fully loads the card (2560*1600 works well...quite a bit more stressful than the default 1080p, but still gets near 1200 fps at points) to be the most stressful on the SoC/FCLK stuff.

I pretty much spend the entire time I have to return a part to the seller testing the snot out of it. Since I'm generally significantly undervolting these parts, there is essentially nothing I can do to them that would damage them if they don't have manufacturing defects, and if they do have defects, I want to find them now, not later.



J7SC said:


> Quick question on MPT Power Limits (W) and TDC Limits (A). My 6900XT's PCB is a true 3x8 pin. Stock values (air-cooled factory vbios) are 264W PL and 320A TDC. With ample custom w-cooling, thermal putty and the like, I have set MPT PL to 360W and raised TDC to 340A...how do you zero in on the best custom TDC (A) with the thermal improvements ?


Personally, I'm willing to run whatever limits I can cool, unless I think there is some other limiting factor at play. 3x8pin + a PCI-E slot are good for 525w without exceeding spec. On top of that, essentially every triple 8-pin 6900 XT also has a VRM capable of delivering far more than the inputs are rated for. Since the power limit slider goes to +15%, I consider the stock limit +15% equating to near 525w to be playing it reasonably safe on these parts. So, 450w PPT, 400A core TDC, and 60A SoC TDC limits are my baseline in MPT for triple 8-pin cards. The Wattman slider can be used to tune as necessary after the fact, but I normally just leave it as 450w is plenty for any reasonable settings in almost any app.

My card is going to be semi-passively air cooled in an SFF box and I'm still using that ~450w/400A limit to ensure it never throttles, except in FurMark and OCCT (which will overheat it immediately if allowed to run uncapped at worst case settings), as that is just about the maximum I can cool with the bottom 120mm intake fans of my TU150 at 100% fan speed, pressed directly up against the card's bare heatsink.

This OCF defaults to some very lopsided limits, with a 325w PPT, but a 420A TDC. Every test I've ever seen maxes out the PPT way before the TDC, which should be obvious: 420A at 325w would imply ~774mV, and anything that could draw anywhere near that kind of current is going to need way more voltage than that to be stable...so I reduced TDC a little and jacked up PPT a lot.
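The arithmetic behind those limits is easy to sanity-check. A minimal sketch, assuming the usual PCI-E spec figures of 150W per 8-pin connector and 75W from the slot (the function names here are just for illustration):

```python
# Rough sanity checks for MPT power/current limits on a triple 8-pin card.
# Assumes the PCI-E spec figures: 150W per 8-pin connector, 75W from the slot.

def connector_budget(n_8pin: int) -> int:
    """Maximum in-spec board power in watts for a given connector count."""
    return n_8pin * 150 + 75

def implied_voltage(ppt_watts: float, tdc_amps: float) -> float:
    """Core voltage implied by hitting both limits at once (P = V * I)."""
    return ppt_watts / tdc_amps

print(connector_budget(3))                   # 525 (W) -- 3x8pin + slot
print(round(implied_voltage(325, 420), 3))   # 0.774 (V) -- why a 420A TDC
                                             # with a 325W PPT is lopsided
```

So a 450w PPT with a +15% slider still sits comfortably inside the 525w connector budget, which matches the baseline described above.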


----------



## J7SC

Blameless said:


> That was just the end of the runs I used to dial in my baseline. Card probably has 200-250 hours of Night Raid on it at this point. I'm finding the highest resolution that fully loads the card (2560*1600 works well...quite a bit more stressful than the default 1080p, but still gets near 1200 fps at points) to be the most stressful on the SoC/FCLK stuff.
> 
> I pretty much spend the entire time I have to return a part to the seller testing the snot out of it. Since I'm generally significantly undervolting these parts, there is essentially nothing I can do to them that would damage them if they don't have manufacturing defects, and if they do have defects, I want to find them now, not later.
> 
> 
> 
> Personally, I'm willing to run whatever limits I can cool, unless I think there is some other limiting factor at play. 3x8pin + a PCI-E slot are good for 525w without exceeding spec. On top of that, essentially every triple 8-pin 6900 XT also has a VRM capable of delivering far more than the inputs are rated for. Since the power limit slider goes to +15%, I consider the stock limit +15% equating to near 525w to be playing it reasonably safe on these parts. So, 450w PPT, 400A core TDC, and 60A SoC TDC limits are my baseline in MPT for triple 8-pin cards. The Wattman slider can be used to tune as necessary after the fact, but I normally just leave it as 450w is plenty for any reasonable settings in almost any app.
> 
> My card is going to be semi-passively air cooled in an SFF box and I'm still using that ~450w/400A limit to ensure it never throttles, except in FurMark and OCCT (which will overheat it immediately if allowed to run uncapped at worst case settings), as that is just about the maximum I can cool with the bottom 120mm intake fans of my TU150 at 100% fan speed, pressed directly up against the card's bare heatsink.
> 
> This OCF defaults to some very lopsided limits, with a 325w PPT, but a 420A TDC. Every test I've ever seen maxes out the PPT way before the TDC, which should be obvious: 420A at 325w would imply ~774mV, and anything that could draw anywhere near that kind of current is going to need way more voltage than that to be stable...so I reduced TDC a little and jacked up PPT a lot.


Thanks !  I spent a great deal of time re. cooling the 6900XT - beyond the 'usual' - and want to take full advantage of that (ie. 30C - 40C _lower_ Hotspot compared to prior air-cooling, VRAM in the low 40s C). Plugging various assumed voltages into the W > A formula just ended up giving me ranges so wide they weren't really useful. PL upped to 380W (before 15%) and 390A TDC will be my next test bench play.


----------



## Spawnyspawn

Guys, I need some help. After tinkering with a few different reference 6900s earlier this year and a hiatus from AMD (some playtime with a few different 3080Tis and 3090s), I'm now back with AMD. I bought a Sapphire Extreme Edition some time ago and now during the holidays I finally have some time to see what kind of performance I can get out of it.
Earlier in December I managed to get this score with the 6900XT combined with a 5800X: I scored 21 591 in Time Spy
I have since swapped to a 5950X and with some initial tinkering I managed to get this score: I scored 23 089 in Time Spy

Both scores were with PPT set to 450W, 1200mV, 2678 min and 2778 max core clocks, 2138 mem clock and fans blasting at 100% speed.
Now I have a bit more time and attempted to "properly" OC (put more time into it).

Started out with finding the safe memory speeds using superposition (4k optimized) like this:
Stock voltage and coreclocks (500 min, 2589 max) and stock powerlimit +15%.
Baseline @ 2100 fast timings:
run 1 - 17719
run 2 - 17711
run 3 - 17720
run 4 - 17708
run 5 - 17724
Mean without highest and lowest: 17717

2110 fast timings:
run 1 - 17706
run 2 - 17721
run 3 - 17717

2120 fast timings:
run 1 - 17762
run 2 - 17752

2130 fast timings:
run 1 - 17778
run 2 - 17780

2140 fast timings:
run 1 - 17781
run 2 - 17782

Safe memory settings step before the plateau: 2120.
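The trimmed-mean bookkeeping above (drop the single highest and lowest run, average the rest, then look at the gain per 10MHz step) is easy to script. A minimal sketch using the scores above; the function name is just for illustration:

```python
# Trimmed mean (drop one high, one low) of Superposition runs per memory clock,
# plus the score gained at each 10MHz step -- the plateau is where gains flatten.

def trimmed_mean(runs):
    """Mean of the runs with the single highest and lowest values dropped."""
    s = sorted(runs)
    trimmed = s[1:-1] if len(s) > 2 else s  # keep everything if only 1-2 runs
    return sum(trimmed) / len(trimmed)

scores = {
    2100: [17719, 17711, 17720, 17708, 17724],
    2110: [17706, 17721, 17717],
    2120: [17762, 17752],
    2130: [17778, 17780],
    2140: [17781, 17782],
}

means = {clk: trimmed_mean(runs) for clk, runs in scores.items()}
clocks = sorted(means)
for lo, hi in zip(clocks, clocks[1:]):
    print(hi, round(means[hi] - means[lo], 1))  # gain over the previous step
```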

With MPT, I then set the powerlimit to 450W, OC test in TimeSpy:
Run 1 - 2550 min, 2650 max, 1200mV - 23473
Run 2 - 2600 min, 2700 max, 1200mV - 23695
Run 3 - 2650 min, 2750 max, 1200mV - 23852
Run 4 - 2700 min, 2800 max, 1200mV - CRASH (within 10 seconds of starting GPU1)
Run 5 - 2675 min, 2775 max, 1200mV - CRASH (instantly when starting GPU2)
Run 6 - 2670 min, 2770 max, 1200mV - CRASH (instantly when starting GPU2)
Run 7 - 2665 min, 2765 max, 1200mV - CRASH (halfway through GPU2)
Run 8 - 2660 min, 2760 max, 1200mV - CRASH (halfway through GPU1)
Run 9 - 2655 min, 2755 max, 1200mV - CRASH (halfway through GPU1) (GPU ASIC power 413W)

For pretty much all tests, fans are at 100%, GPU temp ~60 degrees and GPU Hotspot ~97 degrees.

As you can see I'm running into 2 issues. The first is the rather large delta between GPU average temps and GPU hotspot temps. The second is that I can't even get anywhere near the 24k+ gpu scores in TimeSpy I was getting before.

Especially after reading up in this thread, others seem to be getting much better performance with similar power draws or similar scores with far less power draw.
Do you guys have any tips as to what I can try to get more performance out of the card? I'll go try a few runs with memory clocks at 2130 (the first "plateau" step in superposition) and/or an undervolt with MPT, but more tips are appreciated.


----------



## LtMatt

I've discovered that I no longer need to use VMIN 1.212v to run 2760-2860Mhz in games. I can run it on the default voltage of 1.200v just fine without any crashes or artifacts. 2875Mhz eventually crashes though, so would probably be stable with a slight voltage bump.


Spawnyspawn said:


> Guys, I need some help. After tinkering with a few different reference 6900s earlier this year and a hiatus from AMD (some playtime with a few different 3080Tis and 3090s), I'm now back with AMD. I bought a Sapphire Extreme Edition some time ago and now during the holidays I finally have some time to see what kind of performance I can get out of it.
> Earlier in December I managed to get this score with the 6900XT combined with a 5800X: I scored 21 591 in Time Spy
> I have since swapped to a 5950X and with some initial tinkering I managed to get this score: I scored 23 089 in Time Spy
> 
> Both scores were with PPT set to 450W, 1200mV, 2678 min and 2778 max core clocks, 2138 mem clock and fans blasting at 100% speed.
> Now I have a bit more time and attempted to "properly" OC (put more time into it).
> 
> Started out with finding the safe memory speeds using superposition (4k optimized) like this:
> Stock voltage and coreclocks (500 min, 2589 max) and stock powerlimit +15%.
> Baseline @ 2100 fast timings:
> run 1 - 17719
> run 2 - 17711
> run 3 - 17720
> run 4 - 17708
> run 5 - 17724
> Mean without highest and lowest: 17717
> 
> 2110 fast timings:
> run 1 - 17706
> run 2 - 17721
> run 3 - 17717
> 
> 2120 fast timings:
> run 1 - 17762
> run 2 - 17752
> 
> 2130 fast timings:
> run 1 - 17778
> run 2 - 17780
> 
> 2140 fast timings:
> run 1 - 17781
> run 2 - 17782
> 
> Safe memory settings step before the plateau: 2120.
> 
> With MPT, I then set the powerlimit to 450W, OC test in TimeSpy:
> Run 1 - 2550 min, 2650 max, 1200mV - 23473
> Run 2 - 2600 min, 2700 max, 1200mV - 23695
> Run 3 - 2650 min, 2750 max, 1200mV - 23852
> Run 4 - 2700 min, 2800 max, 1200mV - CRASH (within 10 seconds of starting GPU1)
> Run 5 - 2675 min, 2775 max, 1200mV - CRASH (instantly when starting GPU2)
> Run 6 - 2670 min, 2770 max, 1200mV - CRASH (instantly when starting GPU2)
> Run 7 - 2665 min, 2765 max, 1200mV - CRASH (halfway through GPU2)
> Run 8 - 2660 min, 2760 max, 1200mV - CRASH (halfway through GPU1)
> Run 9 - 2655 min, 2755 max, 1200mV - CRASH (halfway through GPU1) (GPU ASIC power 413W)
> 
> For pretty much all tests, fans are at 100%, GPU temp ~60 degrees and GPU Hotspot ~97 degrees.
> 
> As you can see I'm running into 2 issues. The first is the rather large delta between GPU average temps and GPU hotspot temps. The second is that I can't even get anywhere near the 24k+ gpu scores in TimeSpy I was getting before.
> 
> Especially after reading up in this thread, others seem to be getting much better performance with similar power draws or similar scores with far less power draw.
> Do you guys have any tips as to what I can try to get more performance out of the card? I'll go try a few runs with memory clocks at 2130 (the first "plateau" step in superposition) and/or an undervolt with MPT, but more tips are appreciated.


I am pretty sure there are some tweaks for Timespy that allow some folk to get higher graphics scores than others. Also, for the best results you need to use the BIOS for the 6900 XTXH MBA Liquid Model. This will unlock the fastest memory frequency which improves score a bit.

Which driver are you using for Timespy? Perhaps try the same driver you used earlier.

That said, your delta is not too bad considering the 400W+ power draw. However, I suspect the stock paste on your Extreme has dried out; I've had two of these cards and got much better results by switching to Silver King Liquid Metal. With a good application of LM you should bring your deltas down to 20c or lower, and your max hotspot temp should drop 10c+ too. Lower temps will allow you a higher core clock, better stability and a higher graphics score. By how much exactly is hard to predict, but it will help.


----------



## Blameless

Was able to resolve the black screen crashes I was getting in Night Raid as I work my way up in core clock by increasing the minimum SoC voltage. I figure that during periods of load fluctuation the SoC clock drops momentarily, taking the SoC voltage along with it, then it doesn't always recover fast enough to remain stable during the extreme transients Night Raid prompts.



LtMatt said:


> Also, for the best results you need to use the BIOS for the 6900 XTXH MBA Liquid Model. This will unlock the fastest memory frequency which improves score a bit.


My ASRock OCF wouldn't even post with this firmware flashed to it. It must be sufficiently different from the reference to be incompatible.


----------



## Spawnyspawn

LtMatt said:


> I am pretty sure there are some tweaks for Timespy that allow some folk to get higher graphics scores than others. Also, for the best results you need to use the BIOS for the 6900 XTXH MBA Liquid Model. This will unlock the fastest memory frequency which improves score a bit.
> 
> Which driver are you using for Timespy? Perhaps try the same driver you used earlier.
> 
> That said, your delta is not too bad considering the 400W+ power draw. However, I suspect the stock paste on your Extreme has dried out; I've had two of these cards and got much better results by switching to Silver King Liquid Metal. With a good application of LM you should bring your deltas down to 20c or lower, and your max hotspot temp should drop 10c+ too. Lower temps will allow you a higher core clock, better stability and a higher graphics score. By how much exactly is hard to predict, but it will help.


Thanks for the advice.
I don't think the drivers changed between runs. The 24200 gpu score run was done just before new year's I think. I don't think we've had a driver update in between. Also, now that I think about it, part of the difference in scores could come from a "stable" OC now and OC settings that finished the run once every 3-4 attempts then. Stable now, as in, runs without crashes in TS all the time and haven't had any crashes when gaming either.

I'm a bit hesitant to screw around with liquid metal. I've never used it before and I'm kind of scared I'm going to break a very expensive piece of hardware by spilling the tiniest amount of LM somewhere on the card without noticing it. Would a repaste with high-quality thermal paste make a difference?

Also, when I took my PC apart today to clean it, I noticed there's hardly any contact between the backplate and the PCB. There's a pretty big gap between most of the PCB and the backplate. It seems like the backplate is pretty much only there to be pretty. Since you've taken apart a few of these cards, does the backplate do anything to cool the PCB?

Can you point me to where I can find the BIOS you mentioned?


----------



## lawson67

Spawnyspawn said:


> Guys, I need some help. After tinkering with a few different reference 6900s earlier this year and a hiatus from AMD (some playtime with a few different 3080Tis and 3090s), I'm now back with AMD. I bought a Sapphire Extreme Edition some time ago and now during the holidays I finally have some time to see what kind of performance I can get out of it.
> Earlier in December I managed to get this score with the 6900XT combined with a 5800X: I scored 21 591 in Time Spy
> I have since swapped to a 5950X and with some initial tinkering I managed to get this score: I scored 23 089 in Time Spy
> 
> Both scores were with PPT set to 450W, 1200mV, 2678 min and 2778 max core clocks, 2138 mem clock and fans blasting at 100% speed.
> Now I have a bit more time and attempted to "properly" OC (put more time into it).
> 
> Started out with finding the safe memory speeds using superposition (4k optimized) like this:
> Stock voltage and coreclocks (500 min, 2589 max) and stock powerlimit +15%.
> Baseline @ 2100 fast timings:
> run 1 - 17719
> run 2 - 17711
> run 3 - 17720
> run 4 - 17708
> run 5 - 17724
> Mean without highest and lowest: 17717
> 
> 2110 fast timings:
> run 1 - 17706
> run 2 - 17721
> run 3 - 17717
> 
> 2120 fast timings:
> run 1 - 17762
> run 2 - 17752
> 
> 2130 fast timings:
> run 1 - 17778
> run 2 - 17780
> 
> 2140 fast timings:
> run 1 - 17781
> run 2 - 17782
> 
> Safe memory settings step before the plateau: 2120.
> 
> With MPT, I then set the powerlimit to 450W, OC test in TimeSpy:
> Run 1 - 2550 min, 2650 max, 1200mV - 23473
> Run 2 - 2600 min, 2700 max, 1200mV - 23695
> Run 3 - 2650 min, 2750 max, 1200mV - 23852
> Run 4 - 2700 min, 2800 max, 1200mV - CRASH (within 10 seconds of starting GPU1)
> Run 5 - 2675 min, 2775 max, 1200mV - CRASH (instantly when starting GPU2)
> Run 6 - 2670 min, 2770 max, 1200mV - CRASH (instantly when starting GPU2)
> Run 7 - 2665 min, 2765 max, 1200mV - CRASH (halfway through GPU2)
> Run 8 - 2660 min, 2760 max, 1200mV - CRASH (halfway through GPU1)
> Run 9 - 2655 min, 2755 max, 1200mV - CRASH (halfway through GPU1) (GPU ASIC power 413W)
> 
> For pretty much all tests, fans are at 100%, GPU temp ~60 degrees and GPU Hotspot ~97 degrees.
> 
> As you can see I'm running into 2 issues. The first is the rather large delta between GPU average temps and GPU hotspot temps. The second is that I can't even get anywhere near the 24k+ gpu scores in TimeSpy I was getting before.
> 
> Especially after reading up in this thread, others seem to be getting much better performance with similar power draws or similar scores with far less power draw.
> Do you guys have any tips as to what I can try to get more performance out of the card? I'll go try a few runs with memory clocks at 2130 (the first "plateau" step in superposition) and/or an undervolt with MPT, but more tips are appreciated.


If you're hitting 97c on the hotspot, the GPU will be downclocking; that's why you're not seeing your old graphics scores in TS. On most XTXH cards, including the Sapphire, the GPU downclocks once the hotspot hits 95c. Also, to be running 450w you really need LM, as any paste will soon dry up trying to cope with that wattage; paste simply can't transfer that kind of heat. So the first thing you need to achieve is getting your hotspot under 95c, and I would recommend using LM. Furthermore, I would set your PL to about 400w, not 450w; you don't need much more than 400w for daily use.


----------



## Spawnyspawn

lawson67 said:


> If you're hitting 97c on the hotspot, the GPU will be downclocking; that's why you're not seeing your old graphics scores in TS. On most XTXH cards, including the Sapphire, the GPU downclocks once the hotspot hits 95c. Also, to be running 450w you really need LM, as any paste will soon dry up trying to cope with that wattage; paste simply can't transfer that kind of heat. So the first thing you need to achieve is getting your hotspot under 95c, and I would recommend using LM. Furthermore, I would set your PL to about 400w, not 450w; you don't need much more than 400w for daily use.


Thanks for the response. I didn't know that the XTXH chips start to downclock at 95. I thought it was 105, coming from the reference 6900XT, but I might be mixing it up with the thermal throttle limits on the NVidia cards...

As for cooling: 
I have a 360mm AIO on the CPU with the rad top and the toxic's rad in front. Originally had all fans on them on exhaust, with a 140mm fan intake in the bottom of the case. Thermals were alright, but still in the 90-95 range. Swapping the fans on the toxic's rad to intake and adding a 140mm exhaust fan in the back of the case didn't improve temps. To the contrary, the hot air from the GPU rad was blowing into the case and heating up the air for the vrm fan on the pcb. Hotspot temps climbed to 98-99 degrees. Adding a flowguide to guide the air from the front radiator intake over the gpu to the back and top exhaust did improve idle thermals, but not much of a difference for hotspot temps during TimeSpy runs. All of the above with 100% fanspeed on all fans btw.

So with my current setup, the GPU will thermal throttle when using these limits.
So I guess I'll have to follow your and LtMatt's advice and go with LM and/or reduce max PPT to keep my gpu cool.


----------



## lawson67

Spawnyspawn said:


> Thanks for the response. I didn't know that the XTXH chips start to downclock at 95. I thought it was 105, coming from the reference 6900XT, but I might be mixing it up with the thermal throttle limits on the NVidia cards...
> 
> As for cooling:
> I have a 360mm AIO on the CPU with the rad top and the toxic's rad in front. Originally had all fans on them on exhaust, with a 140mm fan intake in the bottom of the case. Thermals were alright, but still in the 90-95 range. Swapping the fans on the toxic's rad to intake and adding a 140mm exhaust fan in the back of the case didn't improve temps. To the contrary, the hot air from the GPU rad was blowing into the case and heating up the air for the vrm fan on the pcb. Hotspot temps climbed to 98-99 degrees. Adding a flowguide to guide the air from the front radiator intake over the gpu to the back and top exhaust did improve idle thermals, but not much of a difference for hotspot temps during TimeSpy runs. All of the above with 100% fanspeed on all fans btw.
> 
> So with my current setup, the GPU will thermal throttle when using these limits.
> So I guess I'll have to follow your and LtMatt's advice and go with LM and/or reduce max PPT to keep my gpu cool.


The Techpowerup bios collection is a good place to see your card's temp limits. As you can see there under bios internals, your max hotspot temp is 95c before it starts to downclock; XTX cards' hotspot limits are generally 110c.


----------



## Spawnyspawn

lawson67 said:


> The Techpowerup bios collection is a good place to see your card's temp limits. As you can see there under bios internals, your max hotspot temp is 95c before it starts to downclock; XTX cards' hotspot limits are generally 110c.


Thanks again for the information. 
Do you happen to know which card LtMatt referred to with "6900 XTXH MBA Liquid Model"?


----------



## lawson67

Spawnyspawn said:


> Thanks again for the information.
> Do you happen to know which card LtMatt referred to with "6900 XTXH MBA Liquid Model"?


He will be referring to the AMD RX 6900 XT LC, which is also an XTXH card that runs 18 Gbps memory. Not every card will work with that bios, though.


----------



## J7SC

Spawnyspawn said:


> Thanks for the response. I didn't know that the XYXH chips start to downclock at 95. I thought it was 105 coming from the reference 6900XT, but I might be mixing it up with the thermal throttle limits on the NVidia cards...
> (...)
> So with my current setup, the GPU will thermal throttle when using these limits.
> So I guess I'll have to follow your and LtMatt's advice and go with LM and/or reduce max PPT to* keep my gpu cool.*


Keeping your GPU cool is *key*. While it can give you a few more speed bins (though not as many as equivalent NVidia models w/ the same cooling), the biggest advantage is that it allows you to run higher PL for longer.

LM is definitely an option I've used myself on several occasions, but especially with vertically mounted GPUs etc, one has to be real careful re. thorough insulation / conductivity.

Per below, I spent a lot of time on cooling my regular XTX 3x8 pin card on stock bios, what with a custom waterblock, thermal putty and Gelid GC Xtr on the die, along with an extensive custom loop. Typically, my peak temps for GPU and VRAM are in the low 40s C and Hotspot in the low 60s C w/ MPT PL. If good all-around cooling can wake up a regular XTX, I would think even more so with a good XTXH. Besides, well-cooled electronic components tend to last longer...


----------



## bunnybutt

I'm starting to feel like re-pasting is a REQUIREMENT.
You guys say "it can help" but it seems like "common sense" says it's needed.

However... is there something "different" about the 6900xt vs past GPUs? Do I HAVE to use liquid metal? I have some Noctua NT-H2 and I've always used it on all my CPUs/GPUs, but is the 6900xt just "too hot" so I have to use a different paste?

The main reason I don't want to use liquid metal is I heard it has to be reapplied at least once a year. I'm not interested in having that kind of maintenance on an expensive GPU.


----------



## Spawnyspawn

J7SC said:


> Keeping your GPU cool is *key*. While it can give you a few more speed bins (though not as many as equivalent NVidia models w/ the same cooling), the biggest advantage is that it allows you to run higher PL for longer.
> 
> LM is definitely an option I've used myself on several occasions, but especially with vertically mounted GPUs etc, one has to be real careful re. thorough insulation / conductivity.
> 
> Per below, I spent a lot of time on cooling my regular XTX 3x8 pin card on stock bios, what with a custom waterblock, thermal putty and Gelid GC Xtr on the die, along with an extensive custom loop. Typically, my peak temps for GPU and VRAM are in the low 40s C and Hotspot in the low 60s C w/ MPT PL. If good all-around cooling can wake up a regular XTX, I would think even more so with a good XTXH. Besides, well-cooled electronic components tend to last longer...
> View attachment 2541038


I'm noticing that more and more. Just ran Superposition again with my current settings (powerlimit at 400W) and hotspot was 87 degrees max. Scored 18005 points, so still below your score. And that's with my XTXH chip vs your XTX. I can't even get close to those kind of memory OCs and max frequencies.


----------



## J7SC

Spawnyspawn said:


> I'm noticing that more and more. Just ran Superposition again with my current settings (powerlimit at 400W) and hotspot was 87 degrees max. Scored 18005 points, so still below your score. And that's with my XTXH chip vs your XTX. I can't even get close to those kind of memory OCs and max frequencies.


It's not so much about a score being higher or lower, but simply about cooling s.th which puts out 400+ Watts (heat energy) focused on a tiny area. Re. VRAM, I wish I wouldn't bump into that 2150 Mhz limit via XTX, but the rest I can take care of with MPT --- and really good cooling. Effective VRAM speeds seemed to improve a bit as well when I was done with the custom cooling. I'm sure your XTXH will score much higher as you incrementally improve cooling !


----------



## CS9K

bunnybutt said:


> I'm starting to feel like re-pasting is a REQUIREMENT.
> You guys say "it can help" but it seems like "common sense" says it's needed.
> 
> However... is there something "different" about the 6900xt vs past GPUs? Do I HAVE to use liquid metal? I have some Noctua NT-H2 and I've always used it on all my CPUs/GPUs, but is the 6900xt just "too hot" so I have to use a different paste?
> 
> The main reason I don't want to use liquid metal is I heard it has to be reapplied at least once a year. I'm not interested in having that kind of maintenance on an expensive GPU.


One doesn't _have_ to use LM, but ALL pastes will pump out eventually. Unlike past generations of GPUs, where one could re-paste once or twice over a card's lifetime, a re-paste once a year is about what I would expect to do if you plan on pushing the GPU right to its thermal limits.

Stock paste isn't the best, but it's tacky and intended to last the lifetime of the GPU (2-4 years). There _are_ aftermarket pastes that will get a solid year of use if you apply them right, but only a few can survive that long under one of today's spicy a.f. 300W+ GPUs. 

I myself use Gelid GC-Extreme on my reference RX 6900 XT, and the b/f has had excellent luck with it on his waterblocked RTX 3080 (and now a 3080 Ti on the same block). Search this thread and you'll find my video and description of how to apply GC-Extreme, and why you'd want a paste like that on a GPU instead of the oil-based Noctua NT-H2, Kryonaut, MX-4, etc.


----------



## ZealotKi11er

I most likely have to LM my card. I can't run much over 400W even with a custom loop.


----------



## J7SC

ZealotKi11er said:


> I most likely have to LM my card. I can't run much over 400W even with a custom loop.


What temps are you getting now w/400 W ?


----------



## ZealotKi11er

J7SC said:


> What temps are you getting now w/400 W ?


Pretty bad. 72c / 100c+


----------



## gtz

ZealotKi11er said:


> Pretty bad. 72c / 100c+


Damn

That is hot, how many rads in the system?


----------



## ZealotKi11er

gtz said:


> Damn
> 
> That is hot, how many rads in the system?


2x 360


----------



## J7SC

ZealotKi11er said:


> Pretty bad. 72c / 100c+


For a 2x360 mm custom loop, that's toasty for sure...may be uneven die or s.th. like that ?


----------



## Blameless

J7SC said:


> View attachment 2541038


I'm curious about that minimum frame rate. Is that normal/repeatable for your setup? What else is running on that system?

My 6900 XT at 2500-2600 with 2124 fast and a very modest FCLK/SoC bump is consistently getting minimum frame rates in the 110-112 range in 4k Optimized at default driver settings. This is on my air-cooled 3900X test bench.


----------



## ZealotKi11er

J7SC said:


> For a 2x360 mm custom loop, that's toasty for sure...may be uneven die or s.th. like that ?


Could be. This card had pretty high leakage, which I used to run 2870MHz for a single TS run.


----------



## J7SC

Blameless said:


> I'm curious about that minimum frame rate. Is that normal/repeatable for your setup? What else is running on that system?
> 
> My 6900 XT at 2500-2600 with 2124 fast and a very modest FCLK/SoC bump is consistently getting minimum frame rates in the 110-112 range in 4k Optimized at default driver settings. This is on my air-cooled 3900X test bench.


...that's actually my 'work' machine and yes, there are all kinds of things running in the background. Ditto for the 'play' machine, though, in the spoiler...note the min frame rate


Spoiler


----------



## IIISLIDEIII

Neoki said:


> Yes it's in reference to VRAM. And you have to find your cards sweetspot, every card is unique. Most of the good binned XTXH's get performance degradation above 2120mhz VRAM, even if it's "stable" at 2150. You can test this via timespy or mining if you're into that. Same logic applies to XTX's I imagine, just try and see what works for you.
> 
> Most of us run the "Fast Timing" option on the VRAM and then tune the overclock based on what's stable, and not performance degrading. Unless you have a memory dud like mine I'd expect most 6900xt's to hit 2100mhz safely. So you can start there and go up 10mhz at a time until you see a loss of performance, and then dial it back 10mhz. Then I'd recommend running something like OCCT VRAM test for a good 1hr to see if any errors pop up.
> 
> Good luck!


I got the card. Tonight I did the first OC tests and also increased the power limit with MPT. I haven't found the OC limit yet; at the moment the card is up to 2600-2700.
If you tell me the settings are fine, I'll continue with the OC to find the limit.

I haven't activated Resizable BAR yet. I stopped because, before continuing with the OC, I would like to know if I am exaggerating with the power limit; I would not want to destroy the card. In Time Spy it touched 400W. Remember that it is a reference 6900.

I put below the screens:


----------



## Enzarch

ZealotKi11er said:


> Pretty bad. 72c / 100c+


That HAS to be a bad mount/TIM application.
I don't think I have ever seen above 50C / 75C on mine @ 465W with an EK ref block and GC-Extreme.
This is with coolant at ~31C at 1.1GPM.
I ended up ditching the backplate and using the stock rear retention spring bracket to theoretically get better, more consistent pressure.


----------



## ptt1982

J7SC said:


> Keeping your GPU cool is *key*. While it can give you a few more speed bins (though not as many as equivalent NVidia models w/ the same cooling), the biggest advantage is that it allows you to run higher PL for longer.
> 
> LM is definitely an option I've used myself on several occasions, but especially with vertically mounted GPUs etc, one has to be real careful re. thorough insulation / conductivity.
> 
> Per below, I spent a lot of time on cooling my regular XTX 3x8 pin card on stock bios, what with a custom waterblock, thermal putty and Gelid GC Xtr on the die, along with an extensive custom loop. Typically, my peak temps for GPU and VRAM are in the low 40s C and Hotspot in the low 60s C w/ MPT PL. If good all-around cooling can wake up a regular XTX, I would think even more so with a good XTXH. Besides, well-cooled electronic components tend to last longer...
> View attachment 2541038


Just for giggles: my Red Devil XTX 6900xt on extreme undervolt got around 14360 points. Your GPU is 26% faster than mine! My card used an average of 120W and the 5600X an average of 40W during the 4K Optimized Superposition run, so an average of 160W total system consumption including all rails. You achieved your score with three times the power (120W vs. 360W average) to gain 26% higher fps. It's nuts to think about it like that.

Also, the system uses 25% less power than a PS5 but has 2.2x the performance! Temps for CPU at 30C average and GPU at 34C average for hotspot.
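Making the points-per-watt comparison explicit (all figures are the ones reported above; the overclocked score is reconstructed from the stated 26% delta):

```python
# Back-of-envelope efficiency from the figures in the post (GPU-only averages)
score_uv, watts_uv = 14360, 120          # undervolted Red Devil run
watts_oc = 360                           # the roughly 3x-power run being compared
score_oc = score_uv * 1.26               # "26% faster" taken as given

print(f"undervolt: {score_uv / watts_uv:.1f} pts/W")   # ~119.7
print(f"overclock: {score_oc / watts_oc:.1f} pts/W")   # ~50.3
```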


----------



## Spawnyspawn

J7SC said:


> It's not so much about a score being higher or lower, but simply about cooling s.th which puts out 400+ Watts (heat energy) focused on a tiny area. Re. VRAM, I wish I wouldn't bump into that 2150 Mhz limit via XTX, but the rest I can take care of with MPT --- and really good cooling. Effective VRAM speeds seemed to improve a bit as well when I was done with the custom cooling. I'm sure your XTXH will score much higher as you incrementally improve cooling !


What have you done to increase cooling for your card?
As far as I can tell, my options seem to be limited to LM, thermal putty and better fans on the rad. Anything else involves a custom loop I think.
Or am I missing things I can try?


----------



## ZealotKi11er

Enzarch said:


> That HAS to be a bad mount/TIM application.
> I don't think i have ever seen above 50C / 75C on mine @ 465W with an EK ref block and GC-Extreme.
> This is with coolant at ~31C at 1.1GPM
> I ended up ditching the backplate and using the stock rear retention spring bracket to theoretically get better, more consistent pressure.


Yeah, I have to look at the loop again. Even 12900K runs pretty hot.


----------



## IIISLIDEIII

The card is stable at 2700-2800 with VRAM at 2120, but the Time Spy score drops slightly; I get a higher score holding 2650-2750 with VRAM at 2100. Does anyone know why?


----------



## ZealotKi11er

IIISLIDEIII said:


> The card is stable at 2700-2800 with VRAM at 2120, but the Time Spy score drops slightly; I get a higher score holding 2650-2750 with VRAM at 2100. Does anyone know why?


try 2700-2800 with 2100 memory.


----------



## IIISLIDEIII

ZealotKi11er said:


> try 2700-2800 with 2100 memory.


OK, I'll try.
I'm exceeding 400W; is there a risk of damage?
It's a reference card with two 8-pin connectors.



At 2700-2800, Time Spy now freezes in the middle of the second graphics test


----------



## M3atWad

ZealotKi11er said:


> try 2700-2800 with 2100 memory.


I've noticed the same. I've benched from stock to almost 2400, and what I've found is that at 2100-2110 performance falls off a cliff. Thinking about it now, I wonder if slow timings exhibit the same behavior. Also, anything over 2780 on max clock crashes in the second test for me as well, right at the spot with the space station thing every time.


----------



## J7SC

ptt1982 said:


> Just for giggles: my Red Devil XTX 6900xt on extreme undervolt got around 14360 points. Your GPU is 26% faster than mine! My card used an average of 120W and the 5600X an average of 40W during the 4K Optimized Superposition run, so an average of 160W total system consumption including all rails. You achieved your score with three times the power (120W vs. 360W average) to gain 26% higher fps. It's nuts to think about it like that.
> 
> Also, the system uses 25% less power than a PS5 but has 2.2x the performance! Temps for CPU at 30C average and GPU at 34C average for hotspot.


'Interesting' - but I wasn't really trying to go as slow as I could  ...still, power consumption is important, even if our power out here is sourced 97% from (mostly) clean hydro.

After reading your post, I did a few quick calculations (GPU only): the 520W+ RTX 3090 run I also posted later used 23% more power for a 7.5% improvement in the Superposition 4K score.

The absolute power mongers, though, sit in my Threadripper dual 2080 Ti (former work) system, with each of the two GPUs regularly slurping back 380W...per an earlier post, they hit 11998 in Superposition 8K, which would put them into the top 10 overall at HWBot even now...combined with the TR, that system can suck back close to 1,200W at the wall...

...came in handy recently, though, when our heat went out (pipes burst) on Christmas Eve with -12C outside...said TR / 2x 2080 Ti system was moved to our master bedroom a while back, so I just turned it on and ran some repeated Unigine benches...within 15 min, it was a nice cozy 23C +-


Spoiler

















Spawnyspawn said:


> *What have you done to increase cooling for your card?*
> As far as I can tell, my options seem to be limited to LM, thermal putty and better fans on the rad. Anything else involves a custom loop I think.
> Or am I missing things I can try?


To begin with, it sits in a custom loop with 2x D5 pumps and 1200x60+ dual and triple core rads along w/ push-pull fans. Even if you use an AIO but have some extra space in your case, adding thick, powerful push-pull fans in place of the stock ones will likely improve temps a bit.

On the card itself, I used the 'thick' Gelid GC Extreme on the die, TG-10 thermal putty for the VRAM _and the back of_ the GPU die and also mounted a big old heatsink (2x pack, Amazon, cheap). The additional heatsink on the back has a thick layer of MX5 where it joins the back-plate. I also added thermal putty to some of the power stage components, while using Thermalright 12.8 wmk thermal pads on the remaining power stage components.


----------



## Enzarch

IIISLIDEIII said:


> OK, I'll try.
> I'm exceeding 400W; is there a risk of damage?
> It's a reference card with two 8-pin connectors.


Assuming your cooling and power cables are up to the task, you should be fine.
I comfortably run a ref LC card at 420W daily and higher for bench runs, and that's with more voltage (being an XTXH chip) and even a bit more via EVC during benching.
That said, it is very well cooled in an open loop with quality 16ga power cables and supply.
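For context on why cable and PSU quality matter at these draws, here is a rough budget check against the PCIe spec ratings (75W from the slot, 150W per 8-pin connector); the 420W figure is the daily draw mentioned above:

```python
# Rough connector-budget check for a reference 2x8-pin card.
# Spec ratings: PCIe slot 75 W, each 8-pin connector 150 W.
slot_w = 75
eight_pin_w = 150
rated = slot_w + 2 * eight_pin_w     # 375 W total spec rating
draw = 420                           # example sustained draw from the post

print(f"Drawing {draw} W against a {rated} W spec rating "
      f"({draw / rated:.0%}, i.e. {draw - rated} W over spec)")
```

The spec figures are conservative minimums, which is why a well-cooled card on quality 16ga cabling can exceed them without drama.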


----------



## CfYz

Enzarch said:


> XTXH chip


Hi, can you please help me via DM here at overclock.net? Sorry to bother you, but I have a question about the OEM XTLC from AMD...


----------



## Spawnyspawn

J7SC said:


> To begin with, it sits in a custom loop with 2x D5 pumps and 1200x60+ dual and triple core rads along w/ push-pull fans. Even if you use an AIO but have some extra space in your case, adding thick, powerful push-pull fans in place of the stock ones will likely improve temps a bit.
> 
> On the card itself, I used the 'thick' Gelid GC Extreme on the die, TG-10 thermal putty for the VRAM _and the back of_ the GPU die and also mounted a big old heatsink (2x pack, Amazon, cheap). The additional heatsink on the back has a thick layer of MX5 where it joins the back-plate. I also added thermal putty to some of the power stage components, while using Thermalright 12.8 wmk thermal pads on the remaining power stage components.
> 
> View attachment 2541264
> 
> View attachment 2541265
> 
> 
> View attachment 2541267


I'm by no means an expert, but isn't that much radiator surface complete overkill? You could probably run it with the fans barely spinning and still have enough cooling capacity.
Anyway, on the Toxic LC, the backplate doesn't make contact with the PCB, save for a small part at the end of the card. So I don't think adding a heatsink to the backplate is going to do much, if anything, for the hotspot temps. There's even a hole in the backplate that leaves the back of the die exposed. Now that I think about it, would it be possible to replace the stock backplate with a Nitro+ SE waterblock backplate? The waterblock backplate would make better contact and actually do something for the card's thermals.
I don't have the space in my case to fit fans in a push-pull config, but I think I'll just replace the VERY loud stock fans with much better and quieter Noctua fans. That should help a lot, even if it's just a reduction in noise. Also, a PCI fan bracket like the one in this picture might even help.








I have the card in a vertical mount, so any fans I strap to the bracket would be blowing directly against the back of the card.
Lastly, I'll have to go the LM + thermal putty route to get even more improvements. Again, if not for max performance, then for noise. It's kind of sad, by the way, that a GPU this expensive ships with a pump this loud and fans as bad as the ones that came with the Toxic. I'm almost regretting getting the Toxic instead of a Liquid Devil...


----------



## IIISLIDEIII

Enzarch said:


> Assuming your cooling and power cables are up to the task, you should be fine.
> I comfortably run a ref LC card at 420W daily and higher for bench runs, and that's with more voltage (being an XTXH chip) and even a bit more via EVC during benching.
> That said, it is very well cooled in an open loop with quality 16ga power cables and supply.


Thank you.
Dual loop managed by two Mora 3 420s; I have two pumps in series on each loop (Koolance PMP400).
I have this PCIe cable extension that goes to the GPU:



These are the temperatures reached during the Time Spy sessions; are they good?



2650-2750 rebar off



2650-2750 rebar on


----------



## ZealotKi11er




----------



## CS9K

ZealotKi11er said:


> View attachment 2541335
> 
> View attachment 2541336


mmmmm that's the good stuff


----------



## ZealotKi11er

This was the application I had and temps were garbage.


----------



## J7SC

ZealotKi11er said:


> This was the application I had and temps were garbage.


A bit hard to tell, but it does look like there is an area on the die (and/or block) which is 'of interest', including on the outer die frame. Quick question: if this was originally an air-cooled card, how did its temps stack up against similar models prior to water-cooling? Trying to figure out whether it is potential die unevenness or a water-block issue. On some water-block models, using (extra) plastic washers or even carefully sanding down standoffs has been known to help. Ditto for the thickest hi-po paste you can find.

First, I would try a remount though w/o any of those mods


----------



## CS9K

J7SC said:


> A bit hard to tell, but it does look like there is an area on the die (and/or block) which is 'of interest', including on the outer die frame. Quick question: if this was originally an air-cooled card, how did its temps stack up against similar models prior to water-cooling? Trying to figure out whether it is potential die unevenness or a water-block issue. On some water-block models, using (extra) plastic washers or even carefully sanding down standoffs has been known to help. Ditto for the thickest hi-po paste you can find.
> 
> First, I would try a remount though w/o any of those mods
> View attachment 2541341


A few observations, @ZealotKi11er:

See how there's very little paste right in the middle of the die? See how paste on the block is evenly thin-to-thick, from center to edge, on the entire die?
Now, look at how hard your memory modules were pressing against the memory thermal pads. How thick are the pads? Are they what came with the block/card?

The indentation in the dead center of the GPU die, and the even ring around the center, tell me there was plenty of clamp force holding the block on, but perhaps the memory's thermal pads were a hair too thick and kept pressure from being applied evenly to the die itself.

The EK block for the 'reference' Radeon RX 6800 XT/6900XT's comes with 1mm thermal pads, but the pads EK includes are VERY compressible, and do so when the block is mounted. Thicker pads like Thermal Grizzly Minus Pad 8's, barely compress at all, so 1mm TG pads would do exactly as I described above.

1mm isn't 1mm between thermal pad brands/models, since some compress differently than others.
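The pad-compression point can be shown with a toy calculation; every number below is assumed purely for illustration, not measured from any real block:

```python
# Toy illustration: if a pad's compressed thickness still exceeds the
# VRAM-to-coldplate gap, the pads carry the clamp force and the die
# loses contact pressure. All numbers here are assumptions.
gap_mm = 0.6  # assumed VRAM package to cold plate gap
pads = {
    "soft 1.0 mm pad (compresses ~50%)": 1.0 * 0.5,
    "firm 1.0 mm pad (compresses ~10%)": 1.0 * 0.9,
}
for name, compressed in pads.items():
    verdict = "fine" if compressed <= gap_mm else "holds block off the die"
    print(f"{name}: {compressed:.2f} mm -> {verdict}")
```

Same nominal 1mm pad, two very different mounting outcomes once compressibility is accounted for.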


----------



## J7SC

CS9K said:


> A few observations:
> 
> See how there's very little paste right in the middle of the die? See how paste on the block is evenly thin-to-thick, from center to edge, on the entire die?
> Now, look at how hard your memory modules were pressing against your memory thermal pads. How thick are the pads? Are they what came with the block/card?
> 
> The indention in the dead-center of your GPU die, and even ring around the center, tells me you had plenty of clamp-force holding the block on, but perhaps your memory's thermal pads were a hair too thick, and kept pressure from being evenly put on the entire die itself.
> 
> The EK block for the 'reference' Radeon RX 6800 XT/6900XT's comes with 1mm thermal pads, but the pads EK includes are VERY compressible, and do so when the block is mounted. Thicker pads like Thermal Grizzly Minus Pad 8's, barely compress at all, so 1mm TG pads would do exactly as I described above.
> 
> 1mm isn't 1mm between thermal pad brands/models, since some compress differently than others.


..."your" GPU die ? Not mine > Zealotki11er


----------



## CS9K

J7SC said:


> ..."your" GPU die ? Not mine > Zealotki11er


*The GPU die, my bad. Fixed.


----------



## J7SC

CS9K said:


> *The GPU die, my bad. Fixed.


No Probs...

@ZealotKi11er ...have you considered TG PP 10 Thermal Putty for the VRAM ? Almost always the perfect fit and pressure. Others have used it longer than I have, but so far (since the summer), I am thrilled with its performance on both GPUs I mounted it on. 

Btw, some of the Thermalright 12.8 w/mk thermal pads I had saved up for GPUs have been redeployed onto the latest PCIe Gen4 NVMe drives.


----------



## ZealotKi11er

These are the pads that came with it:









126.14US $ 15% OFF|Bykski 6900 Water Block For AMD Radeon RX 6900 XT RX 6800 XT Founders Edition 6900XT GPU Block VGA Waterooling ARGB A RX6900XT X|Fans & Cooling| - AliExpress






www.aliexpress.com


----------



## CS9K

ZealotKi11er said:


> These are the pads that came with it:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 126.14US $ 15% OFF|Bykski 6900 Water Block For AMD Radeon RX 6900 XT RX 6800 XT Founders Edition 6900XT GPU Block VGA Waterooling ARGB A RX6900XT X|Fans & Cooling| - AliExpress
> 
> 
> 
> 
> 
> 
> www.aliexpress.com


I couldn't find mention of the thermal pads on that page. It was too much of a mix of stolen EK pictures and adverts for 3090 backplates inside of the detailed information about the RX 6900 XT water block.


----------



## Blameless

Got a black screen crash at my new settings after 35 hours of Night Raid, but it's so close to stable that I think I'll just bump minimum SoC voltage another 12mV and call it done.

A more pressing issue is temps...they are extremely borderline and the delta between GPU edge and hotspot reaches 33C at ~440w (what it reaches in Time Spy Extreme test 2). Was planning on doing a full re-TIM of the entire card before I move it to my primary system anyway, so hopefully that will be enough to get things where I need them to be.

I've already stripped and cleaned the card. Plan is to:


- Underfill GPU and VRAM for a small thermal advantage, as well as mechanical and moisture protection.
- Fully encapsulate all the chokes on the board to mitigate coil whine.
- Conformal coat anything in proximity to the GPU, front and back, that could conceivably come into contact with liquid metal, or air, in the event I decide to do some sub-zero benching on it at some point. These first three things will all be done with BGA Coat, which I've used with good results in the past.
- Use copper shims between the GDDR6 and cooler plate, with SYY-157 between the GDDR6 package and the shim and TG-NSP80 between the shim and the cooler (TG-PP-10 is too thick for this narrow gap, which is still a bit wide for me to be comfortable with conventional paste, and I already have a tube of NSP80). This is mostly because I have a pile of shims and it will save me some TG-PP-10; thermally it's complete overkill on plain GDDR6.
- Apply TG-PP-10 between the heatsink and all power stages and chokes.
- Use liquid metal TIM (Phobya in this case, which I have a lot of and which works within margin of error of anything else) between the GPU die and the nickel plated base plate of the cooler. I'll be using a very liberal amount here as everything on the board that gets anywhere near it will be protected, so as long as it doesn't spill onto the aluminum components of the heatsink, or drip into the slot (I'm not going too crazy with it, I just want any crystallization of the LM to stay floating on the liquid around the edge, rather than harden the actual thermal interface between the die and cold plate), I won't have to be too careful with it.
- Test everything thoroughly for issues, then re-secure all screws with small dabs of Loctite blue.


----------



## Spawnyspawn

Blameless said:


> Got a black screen crash at my new settings after 35 hours of Night Raid, but it's so close to stable that I think I'll just bump minimum SoC voltage another 12mV and call it done.
> 
> A more pressing issue is temps...they are extremely borderline and the delta between GPU edge and hotspot reaches 33C at ~440w (what it reaches in Time Spy Extreme test 2). Was planning on doing a full re-TIM of the entire card before I move it to my primary system anyway, so hopefully that will be enough to get things where I need them to be.
> 
> I've already stripped and cleaned the card. Plan is to:
> 
> 
> Underfill GPU and VRAM for a small thermal advantage, as well as mechanical and moisture protection.
> Fully encapsulate all the chokes on the board to mitigate coil whine.
> Conformal coat anything in proximity to the GPU, front and back, that could conceivably come into contact with liquid metal, or air, in the event I decide to do some sub-zero benching on it at some point. These first three things will all be done with BGA Coat, which I've used with good results in the past.
> Use copper shims between the GDDR6 and cooler plate, with SYY-157 between the GDDR6 package and the shim and TG-NSP80 between the shim and the cooler (TG-PP-10 is too thick for this narrow gap, which is still a bit wide for me to be comfortable with conventional paste, and I already have a tube of NSP80). This is mostly because I have a pile of shims and it will save me some TG-PP-10; thermally it's complete overkill on plain GDDR6.
> Apply TG-PP-10 between the heatsink and all power stages and chokes.
> Use liquid metal TIM (Phobya in this case, which I have a lot of and which works within margin of error of anything else) between the GPU die and the nickel plated base plate of the cooler. I'll be using a very liberal amount here as everything on the board that gets anywhere near it will be protected, so as long as it doesn't spill onto the aluminum components of the heatsink, or drip into the slot (I'm not going too crazy with it, I just want any crystallization of the LM to stay floating on the liquid around the edge, rather than harden the actual thermal interface between the die and cold plate), I won't have to be too careful with it.
> Test everything thoroughly for issues, then re-secure all screws with small dabs of Loctite blue.


I would be very interested in a step by step "guide" with pictures of the process 😀


----------



## J7SC

As TG PP 10 is often sold out, I put away the two remaining bottles / 100g for future consideration after doing two GPUs (front and back) with the TG PP 10 thermal putty. With other mods described earlier for the GPUs, the max temps for both cards are well controlled now so that I can probably skip LM, which I was actually originally planning on applying:










Another 'fyi tip' for cooling mods is to apply a small amount of thermal paste like MX5 on top of good thermal pads; it may sound a bit counter-intuitive, but it has worked well for me for a while now.

I just used that method on some new NVME drives which are fast but sure get very toasty. Instead of the 'stock' Asus thermal pad only for the M.2 mobo cover, I used a marginally thicker Thermalright 12.8 w/mk thermal pad, with a bit of MX5 - drive temps during benchmarking dropped by 9 C...


----------



## Spawnyspawn

J7SC said:


> As TG PP 10 is often sold out, I put away the two remaining bottles / 100g for future consideration after doing two GPUs (front and back) with the TG PP 10 thermal putty. With other mods described earlier for the GPUs, the max temps for both cards are well controlled now so that I can probably skip LM, which I was actually originally planning on applying:
> View attachment 2541371


The spec sheet of the TG PP10 lists a shelf life of just 6 months after opening and it basically tells me to discard any unused putty after 6 months of opening the container. That's why I was leaning towards K5-Pro thermal putty. That stuff supposedly never goes bad and you can even scrape it off the card and put it back in the container after using it and it's good to go for the next application.
Do you have any experience with TG PP10 "going bad" after opening a container?
Also, how much of the stuff do you typically need for 1 full GPU worth of application? I would guess about 20-40 grams based on the size of the thermal pads.


----------



## J7SC

Spawnyspawn said:


> The spec sheet of the TG PP10 lists a shelf life of just 6 months after opening and it basically tells me to discard any unused putty after 6 months of opening the container. That's why I was leaning towards K5-Pro thermal putty. That stuff supposedly never goes bad and you can even scrape it off the card and put it back in the container after using it and it's good to go for the next application.
> Do you have any experience with TG PP10 "going bad" after opening a container?
> Also, how much of the stuff do you typically need for 1 full GPU worth of application? I would guess about 20-40 grams based on the size of the thermal pads.


I used up two containers for two cards (including extra on the back VRAM for the 3090, and back-of-the-die for both cards) and that includes a remount for one card. So I don't have an opened container wasting away, just two extra new ones I haven't opened yet. That said, the container has a secondary airtight lid. If you're worried about it, open a new container, take half out and immediately put the other half into a little freezer bag, let any air out and stuff it back into the container w/ both lids tight. 

FYI, for my 6900XT w/Bykski block and custom back-plate, one 50 g container was more than enough.


----------



## Spawnyspawn

Just some information: I contacted Bykski, and the waterblock backplate can't be used with the stock cooler and isn't sold separately. Was a fun idea though.


----------



## J7SC

Spawnyspawn said:


> Just some information: I contacted Bykski, and the waterblock backplate can't be used with the stock cooler and isn't sold separately. Was a fun idea though.


Yeah, Bykski just bundled it when I picked up the block for my custom PCB. Still, trying to do something with the stock back-plate re. thermal putty and also an additional heatsink can be useful, per the posted temps above.

When I got my Gigabyte Gaming OC / 3x8pin, my initial stock temp results were along these lines.....note the 3800+ rpm fans 









...temps were very good for stock / air-cooled, but as soon as I added MPT PL, Hotspot went into the 80s, then 90s...and no matter what, the stock back-plate got way too hot to touch...unlike many other 6900XTs or for that matter most hi-po 3090s, the stock back-plate was solid (aluminum) with no opening above the rear of the die. Here are some relevant pics from TechPowerUp:








The white thermal pad is very soft and sits on / fills the whole back of the GPU die. The included Bykski back-plate was also solid (no opening) but did not have any such pads and had a slightly different height / clearance. So I added the trusty TG PP 10 - very carefully. It's a bit nerve-wracking to squish stuff onto all the tiny fragile components located at the back of an expensive GPU die, but I wanted that heat-transference contact. I also added thermal putty at the back of the VRAM modules and a combo of that and Thermalright pads on the back of the VRM areas. Then I mounted the heatsink on top of the back-plate, per earlier pics, including with a big dollop of MX5.

The temps spoke for themselves, so I applied the same process - very carefully - to the back of the 3090 Strix back-plate _which did have _the cutout above the back of the GPU die...built that up with thermal putty and some MX5 until I had good contact with the heatsink on top of the back-plate...thus the temps listed for both cards in my previous post above, combined with extensive w-loops. The whole point is to have a hi-po but almost silent system.

Adding extra cooling on the back - even on a card that doesn't have double-sided VRAM, not only aids the back temps, but the overall PCB temps on both sides.


----------



## Blameless

Spawnyspawn said:


> The spec sheet of the TG PP10 lists a shelf life of just 6 months after opening and it basically tells me to discard any unused putty after 6 months of opening the container.


The rating is very conservative and even stuff that I have that's been in use for nearly a year at average temps of 60C+ is essentially indistinguishable from new material. If it was clean, I'd reuse it.

Shelf life ratings are also for room temp, and you can double that estimate every 10C colder you keep it. In my freezer at -10C the putty should be just as good after two years as it is at room temp after 3 months.

Honestly, I'd be surprised if it was meaningfully degraded after even five years of sitting on shelf, unrefrigerated. I expect it to last forever in the freezer.



Spawnyspawn said:


> Also, how much of the stuff do you typically need for 1 full GPU worth of application? I would guess about 20-40 grams based on the size of the thermal pads.


Depends on the GPU and exactly where you're applying it. I'd recommend at least 30g for a conservative application, but 50g would give you more margin.
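The doubling-per-10C shelf-life rule of thumb works out like this; the 3-month base figure is just illustrative:

```python
def shelf_life_months(base_months, room_c=20.0, storage_c=20.0):
    # Rule of thumb from the post: life roughly doubles for every
    # 10 C colder the material is stored
    return base_months * 2 ** ((room_c - storage_c) / 10.0)

# 3 months at 20 C room temp becomes ~24 months at -10 C (2^3 = 8x)
print(shelf_life_months(3, storage_c=-10.0))
```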


----------



## ZealotKi11er

Are these any good: 








Thermalright Thermal Pad 12.8 W/mK, 85x45x1mm, Non Conductive Heat Resistance High Temperature Resistance, Silicone Thermal Pads for Laptop Heatsink/GPU/CPU/LED Cooler : Amazon.ca: Electronics





www.amazon.ca


----------



## J7SC

ZealotKi11er said:


> Are these any good:
> 
> 
> 
> 
> 
> 
> 
> 
> Thermalright Thermal Pad 12.8 W/mK, 85x45x1mm, Non Conductive Heat Resistance High Temperature Resistance, Silicone Thermal Pads for Laptop Heatsink/GPU/CPU/LED Cooler : Amazon.ca: Electronics
> 
> 
> 
> 
> 
> www.amazon.ca


...those Thermalright 12.8 w/mk are my go-to pads and I used them a lot (in varying thicknesses, 0.5mm to 2.0 mm)...only problem has been some 'fake' ones being sold at Amazon, usually distinguishable by spelling errors on the package, and slightly different colour. The Thermalright 12.8 w/mk (the real ones anyway) are fairly firm.


----------



## KGV

The BEST. Thermalright Odyssey.


----------



## Spawnyspawn

Blameless said:


> The rating is very conservative and even stuff that I have that's been in use for nearly a year at average temps of 60C+ is essentially indistinguishable from new material. If it was clean, I'd reuse it.
> 
> Shelf life ratings are also for room temp, and you can double that estimate every 10C colder you keep it. In my freezer at -10C the putty should be just as good after two years as it is at room temp after 3 months.
> 
> Honestly, I'd be surprised if it was meaningfully degraded after even five years of sitting on shelf, unrefrigerated. I expect it to last forever in the freezer.
> 
> 
> 
> Depends on the GPU and exactly where you're applying it. I'd recommend at least 30g for a conservative application, but 50g would give you more margin.


Thanks, I'll go and try it then. I can practice the whole procedure on an R9 290 I have in an old system. That card needs maintenance anyway. Might as well throw some LM and thermal putty on it. 
I was planning on using liquid black electrical tape to insulate and protect the components around the die from LM, but could you just use the thermal putty too? Or would that still leave a risk when/if the putty moves or something?


----------



## Blameless

Spawnyspawn said:


> I was planning on using liquid black electrical tape to insulate and protect the components around the die from LM, but could you just use the thermal putty too? Or would that still leave a risk when/if the putty moves or something?


I wouldn't count on the putty being completely impermeable to the liquid metal over time, it's also not something that can easily be spread into very thin layers. Some form of conformal coating is a better idea, either purpose made stuff, nail polish, or a couple layers of superglue.


----------



## Blameless

Spawnyspawn said:


> I would be very interested in a step by step "guide" with pictures of the process 😀


Was already well underway when I saw this post, but I did get a picture of the second application of BGACoat:

First coat was a proper underfill of the GPU and GDDR6, more or less according to the BGACoat instructions. This coat is more of a thick partial encapsulation/overcoat of anything that could possibly be shorted. It's still wet, but as it cures will turn translucent and contract a bit. That will take 48-72 hours. After that I'll scrape off anything that got on the actual top surface of the memory ICs or the GPU shim.

I will do a third and fourth application after this one is dry. Third will get the VRM inductors as best I can manage, which will deaden coil whine a bit. Fourth will cover much of the back of the PCB, which isn't necessary for the cooling I'm putting on it now, but will matter for sub ambient stuff later.

Application, beyond the actual underfill, is pretty straightforward. I just paint it on. If these chokes were bigger I would have used a blunt tipped syringe to actually fill them, but capillary action will have to suffice with these.


----------



## Spawnyspawn

Blameless said:


> I wouldn't count on the putty being completely impermeable to the liquid metal over time, it's also not something that can easily be spread into very thin layers. Some form of conformal coating is a better idea, either purpose made stuff, nail polish, or a couple layers of superglue.


Fair enough. Conformal coating it is.
And I checked, but I can't even buy the Thermalright Silver King around here unless I pay like triple the normal retail price, so I'll go with Conductonaut instead. I doubt that's going to make any sort of measurable difference.

Btw, does it matter if the card is in a vertical mount?



Blameless said:


> Was already well underway when I saw this post, but I did get a picture of the second application of BGACoat:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> First coat was a proper underfill of the GPU and GDDR6, more or less according to the BGACoat instructions. This coat is more of a thick partial encapsulation/overcoat of anything that could possibly be shorted. It's still wet, but as it cures will turn translucent and contract a bit. That will take 48-72 hours. After that I'll scrape off anything that got on the actual top surface of the memory ICs or the GPU shim.
> 
> I will do a third and fourth application after this one is dry. Third will get the VRM inductors as best I can manage, which will deaden coil whine a bit. Fourth will cover much of the back of the PCB, which isn't necessary for the cooling I'm putting on it now, but will matter for sub ambient stuff later.
> 
> Application, beyond the actual underfill, is pretty straightforward. I just paint it on. If these chokes were bigger I would have used a blunt tipped syringe to actually fill them, but capillary action will have to suffice with these.


Thanks for the picture and description. Looks like that's a very thorough procedure and a bit overkill for only LM on the die. Can imagine it's necessary for LN2 cooling though.


----------



## GrandestFinale

Hello, first post on the site here! I was wondering if anyone more experienced than I am could offer some OC advice to potentially get my reference XTX card over 2600 stable (suspect I got shafted on the silicon lottery). My MPT settings are 400w on the power limit and 2150 on FCLK (higher seems unstable), with an Alphacool Eiswolf 360mm for cooling, pictured in avatar.

Any help is appreciated!


----------



## Blameless

Spawnyspawn said:


> And I checked, but I can't even buy Thermalright Silver King around here unless I pay like triple normal retail price. So I'll go with conductonaut instead. I doubt that's going to make any sort of measurable difference.


Conductonaut should be fine. Almost all the indium-gallium alloy TIMs are very similar in practice. Some have different initial wettability, some seem to work better with unplated copper than others, but once you figure out the specific formula, it will last as long and perform as well as any other, within margin of error.



Spawnyspawn said:


> Btw, does it matter if the card is in a vertical mount?


Not unless you use way too much, or have too large a gap to fill. One of the advantages of putty everywhere else is that you can be sure you'll be able to get good core mounting pressure, and if you used the correct amount of LM (a thin coat on both the die and cold plate), surface tension will do the rest.



Spawnyspawn said:


> Looks like that's a very thorough procedure and a bit overkill for only LM on the die.


Complete overkill, but even if it's a year or two before I actually get around to doing anything with the card that would really need it, now, while it's still being used in my test bench, is a good time for me to get it done.


----------



## ZealotKi11er

I reapplied thermal paste to my GPU. Now it's hitting 60/80 after about 5 minutes of TS or Halo. It will probably go up another 5-10C if I play for longer. Power was 400W + 15%. Maybe I do need LM after all.


----------



## GrandestFinale

GrandestFinale said:


> Hello, first post on the site here! I was wondering if anyone more experienced than I am could offer some OC advice to potentially get my reference XTX card over 2600 stable (suspect I got shafted on the silicon lottery). My MPT settings are 400w on the power limit and 2150 on FCLK (higher seems unstable), with an Alphacool Eiswolf 360mm for cooling, pictured in avatar.
> 
> Any help is appreciated!


...I'm sorry to say that after some failed maintenance, both she and the MOBO have passed away. I've been knocked back down to my 3700x/RX 580 setup. Alas...


----------



## J7SC

GrandestFinale said:


> ...I'm sorry to say that after some failed maintenance, both she and the MOBO have passed away. I've been knocked back down to my 3700x/RX 580 setup. Alas...


Oh dear - both the GPU and the mobo? What happened, if it is not too painful to describe?


----------



## Enzarch

ZealotKi11er said:


> I reapplied thermal paste to my GPU. Now it's hitting 60/80 after about 5 minutes of TS or Halo. It will probably go up another 5-10C if I play for longer. Power was 400W + 15%. Maybe I do need LM after all.


This is far better than before, and reasonable for 460W, which is quite a lot on a ref card. Although that's just speculation.


----------



## toxick

After a year of using the RX 6900 XT BBA, I had the opportunity to buy the RX 6900 XT LC. The differences don't seem big: it brings 300 more points in Time Spy, and has no coil whine.
I'm curious what will happen after I mount a water block.









I scored 22 567 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 11




www.3dmark.com


----------



## GrandestFinale

J7SC said:


> Oh dear - both the GPU and the mobo? What happened, if it is not too painful to describe?


I was doing some routine maintenance and something seems to have happened with the PCI slots, because when I went to boot I noticed what smelled like an electrical fire. Tried my backup 580 in both slots, no signal out - tried the 69xt in another computer, still not working.


----------



## CS9K

toxick said:


> After a year of using the RX 6900 XT BBA, I had the opportunity to buy the RX 6900 XT LC, the differences don't seem big, it brings 300 points more in time spy and no coil whine.
> I'm curious what will happen after I mount water block.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 22 567 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 11
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2541891
> View attachment 2541892
> View attachment 2541895


I am quite envious of you on your opportunity to acquire one of the reference LC RX 6900 XT's. Hopefully you have a winner once the block is mounted 💗

I'm curious to see how far you can push the memory.


----------



## ZealotKi11er

CS9K said:


> I am quite envious of you on your opportunity to acquire one of the reference LC RX 6900 XT's. Hopefully you have a winner once the block is mounted 💗
> 
> I'm curious to see how far you can push the memory.


Memory will not do much more than 2380-2400. You get too many errors. I think it's more of a limitation of the memory controller than the memory itself.
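For context on those clock numbers, here is a small sketch of the usual conversion from the memory clock Radeon software reports to the effective GDDR6 data rate and resulting bandwidth. The factor of 8 and the 256-bit bus are assumptions based on the 6900 XT's published specs (a reported 2000 MHz corresponds to the stock 16 Gbps):

```python
# Convert the memory clock Radeon software reports (MHz) into the effective
# GDDR6 per-pin data rate and total bandwidth, assuming the usual RDNA2
# convention (reported 2000 MHz == 16 Gbps effective) and a 256-bit bus.

def gddr6_effective_gbps(reported_mhz: float) -> float:
    """Effective per-pin data rate in Gbps for a reported memory clock."""
    return reported_mhz * 8 / 1000

def bandwidth_gbytes_per_s(reported_mhz: float, bus_bits: int = 256) -> float:
    """Total memory bandwidth in GB/s for a given bus width."""
    return gddr6_effective_gbps(reported_mhz) * bus_bits / 8

print(gddr6_effective_gbps(2000))    # 16.0 Gbps (stock 6900 XT)
print(bandwidth_gbytes_per_s(2000))  # 512.0 GB/s
print(bandwidth_gbytes_per_s(2400))  # 614.4 GB/s at the ~2400 MHz ceiling
```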


----------



## CS9K

ZealotKi11er said:


> Memory will not do much more than 2380-2400. You get too many errors. I think it's more of a limitation of the memory controller than the memory itself.


More interested in another data point, not so much the max it is capable of 

Thank you, though!


----------



## Enzarch

toxick said:


> After a year of using the RX 6900 XT BBA, I had the opportunity to buy the RX 6900 XT LC, the differences don't seem big, it brings 300 points more in time spy and no coil whine.
> I'm curious what will happen after I mount water block.


You will see larger gains at higher res; try Time Spy Extreme, Fire Strike Ultra, etc.

Please keep us updated, I'd love to know if the reason I can't crack 2800MHz on my XT LC is due to silicon or VRM


----------



## CS9K

Enzarch said:


> You will see larger gains at higher res, try TimeSpy Extreme, FireStrike Ultra, etc.
> 
> Please keep us updated, I'd love to know if the reason I cant crack 2800mhz on my XTLC is due to silicon or VRM


Now _this_ is the kind of data I'm interested in:

- 4k performance once you take the memory above 2150MHz
- On average, how high the memory clock on the reference RX 6900 XT LC can go and be stable

On the _off chance_ that I ever have the opportunity to acquire a 69 XT LC at some point, since it would drop right into my EK water block.


----------



## Enzarch

CS9K said:


> Now _this_ is the kind of data I'm interested in:
> 
> 4k performance once you take the memory above 2150MHz
> On average, how high the memory clock on the reference RX 6900 XT LC can go and be stable
> 
> On the _offchance_ that I ever have the opportunity to acquire a 69 XT LC at some point, since it is drop-in into my EK water block.


- It's hard to A-B test the memory, as the driver minimum is 2310
- Not a large data-set, but it seems 2400 is the limit, fast timings or not. I can set mine a wee bit higher and be stable, but performance plateaus. However I have not done a lot of tweaking to the memory system yet.

Not sure if it helps, here are some of my 4k scores.
Fire Strike Ultra
Time Spy Extreme
Port Royal


----------



## J7SC

Enzarch said:


> -Its hard to A-B test the memory, as the driver minimum is 2310
> -Not a large data-set, but it seems 2400 is the limit, fast timings or not. I can set mine a wee bit higher and be stable, but performance plateaus. However I have not done a lot of tweaking to the memory system yet.
> 
> Not sure if it helps, here are some of my 4k scores.
> Fire Strike Ultra
> Time Spy Extreme
> Port Royal


At 4K and the like, the extra 250 MHz over the (dreaded) 2150 VRAM limit for regular XTX cards that otherwise clock nicely (never mind MPT tools) seems to make a difference, at least in benching - not sure whether it matters for 'every-day' gaming. Still, I'd take an extra 250MHz in VRAM as I run my monitors at 4K.


----------



## CS9K

J7SC said:


> At 4K and the like, the extra 250 MHz over the (dreaded) 2150 / VRAM limit for regular XTX that other wise clock nice (never mind MPT tools) seems to make a difference, at least in benching - not sure whether it matters for 'every-day' gaming. Still, I'd take an extra 250MHz in VRAM as I run my monitors at 4K.


Likewise, I'm on the LG CX 48" OLED train, myself. For a time, I was tempted to go 3080Ti since it _can_ hold up better at 4k, but with my card at 2700MHz/2150MHz+fast, it keeps up just the same. It'd be nice if we had bios control, because I _really_ want to try my hand at tweaking GDDR6 memory timings again. I did get to dabble in it for a week or two in 2020 with my RX 5600 XT, but the memory got so warm that I never really made much headway. That, and I know a helluva lot more about desktop memory overclocking now than I did then, so I reckon I would have better luck, especially with such cool-running memory under my water block.

_Sigh_. In time...


----------



## J7SC

CS9K said:


> Likewise, I'm on the LG CX 48" OLED train, myself. For a time, I was tempted to go 3080Ti since it _can_ hold up better at 4k, but with my card at 2700MHz/2150MHz+fast, it keeps up just the same. It'd be nice if we had bios control, because I _really_ want to try my hand at tweaking GDDR6 memory timings again. I did get to dabble in it for a week or two in 2020 with my RX 5600 XT, but the memory got so warm that I never really made much headway. That, and I know a helluva lot more about desktop memory overclocking now than I did then, so I reckon I would have better luck, especially with such cool-running memory under my water block.
> 
> _Sigh_. In time...


... as before, I agree 100% re. bios control. With watercooling and my 6900XT XTX encased in thermal putty and connected to a big loop w/external rads, the tests I've run at 2150 VRAM w/fast timings hadn't plateaued yet. I don't think it would have made it to the XTXH LC VRAM numbers above, but 2260 give or take would also be a step forward, even for what is primarily a work machine.


----------



## ZealotKi11er

CS9K said:


> Likewise, I'm on the LG CX 48" OLED train, myself. For a time, I was tempted to go 3080Ti since it _can_ hold up better at 4k, but with my card at 2700MHz/2150MHz+fast, it keeps up just the same. It'd be nice if we had bios control, because I _really_ want to try my hand at tweaking GDDR6 memory timings again. I did get to dabble in it for a week or two in 2020 with my RX 5600 XT, but the memory got so warm that I never really made much headway. That, and I know a helluva lot more about desktop memory overclocking now than I did then, so I reckon I would have better luck, especially with such cool-running memory under my water block.
> 
> _Sigh_. In time...


Infinity Cache masks a lot of the benefit that memory latency improvements would otherwise bring. Also, from the looks of it, AMD did a pretty good job considering how much you can OC it. I don't think they left much on the table.


----------



## J7SC

AMD did a great job with BigNavi, but when compared directly to the RTX 3090 _at 4K_, the 256 bit bus and 'regular' GDDR6, as opposed to the 384 bit bus and faster GDDR6X, start to show at 4K and above, even with Infinity Cache there to mask some of that...I'm still hoping to see a future BigNavi with a 512 bit bus, double the Infinity Cache and GDDR6X (an HBM3 memory setup would be even better, but would probably make it unobtanium at the consumer level). 

Or maybe AMD can release an intermediate model - they can call it the 6900 XT Ti...
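The gap described above is easy to put rough numbers on. A minimal sketch comparing raw bandwidth from each card's published per-pin rate and bus width; note this simple figure ignores Infinity Cache hit rate, which is exactly what muddies the real-world comparison:

```python
# Raw memory bandwidth in GB/s = per-pin rate (Gbps) * bus width (bits) / 8.
# Per-pin rates and bus widths are the published specs for each card.
def bandwidth_gb_s(gbps_per_pin: float, bus_bits: int) -> float:
    return gbps_per_pin * bus_bits / 8

rx_6900_xt = bandwidth_gb_s(16.0, 256)  # 512.0 GB/s (GDDR6, 256-bit)
rtx_3090 = bandwidth_gb_s(19.5, 384)    # 936.0 GB/s (GDDR6X, 384-bit)
print(rx_6900_xt, rtx_3090)
```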


----------



## ZealotKi11er

J7SC said:


> AMD did a great job with BigNavi, but when compared directly to the RTX 3090 _at 4K_, the 256 bit bus and 'regular' GDDR6, as opposed to the 384 bit bus and faster GDDR6X, start to show at 4K and above, even with Infinity Cache there to mask some of that...I'm still hoping to see a future BigNavi with a 512 bit bus, double the Infinity Cache and GDDR6X (an HBM3 memory setup would be even better, but would probably make it unobtanium at the consumer level).
> 
> Or maybe AMD can release an intermediate model - they can call it the 6900 XT Ti...


I think 128MB scales well into 4K. The reason it is not as fast as the 3090 at 4K is most likely that the 3090 just has more compute power. I do not think we will see a 512-bit card for a long time. Before AMD goes to 512, they need 384 first. That alone, even with faster G6, will be enough of a generational uplift.


----------



## CS9K

ZealotKi11er said:


> I think 128MB scales well into 4K. The reason it is not as fast as the 3090 at 4K is most likely that the 3090 just has more compute power. I do not think we will see a 512-bit card for a long time. Before AMD goes to 512, they need 384 first. That alone, even with faster G6, will be enough of a generational uplift.


I believe the word around is that 128MB is _just_ a little too small for the excellent scaling seen at 1080p and 1440p/ultrawide to keep scaling up into 4k. At 4k, increasing the memory clock speed and the "fast" timings certainly help, but I believe RDNA3 will remedy this issue with at least 256MB of Infinity Cache, if not more (based on a variety of rumors, none of which anyone should take at face value). 

In my opinion, I feel like the RX 6900 XT is a tiny bit short on compute muscle for how well it performs, but that too will be remedied with RDNA3's WGP configuration. Exciting times ahead!


----------



## ZealotKi11er

CS9K said:


> I believe the word-around is that 128MB is _just_ a little too small for the excellent scaling seen at 1080p and 1440p/ultrawide, to keep scaling up into 4k. At 4k, increasing the memory clock speed and the "fast" timings certainly help, but RDNA3 I believe will remedy this issue with at least 256MB of Infinity cache, if not more (based on a variety of rumors, none of which anyone should believe at face-value).
> 
> In my opinion, I feel like the RX 6900 XT is a tiny bit short on compute muscle for how well it performs, but that too will be remedied with RDNA3's WGP configuration. Exciting times ahead!


Personally, I do not see 256MB happening, simply because it takes too much die space.


----------



## THUMPer1

toxick said:


> After a year of using the RX 6900 XT BBA, I had the opportunity to buy the RX 6900 XT LC, the differences don't seem big, it brings 300 points more in time spy and no coil whine.
> I'm curious what will happen after I mount water block.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 22 567 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 11}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2541891
> View attachment 2541892
> View attachment 2541895


Wow. I'm trying to find a 6900 LC myself. What a beautiful card.


----------



## CS9K

THUMPer1 said:


> Wow. I'm trying to find a 6900 LC myself. What a beautiful card.


Likewise, though sadly, I've yet to see someone in the U.S. with a reference RX 6900 XT LC.


----------



## Enzarch

CS9K said:


> Likewise, though sadly, I've yet to see someone in the U.S. with a reference RX 6900 XT LC.


I have one. In Texas. Also just pulled mine apart again; I'll have some more details in a post soon.


----------



## J7SC

Enzarch said:


> I have one. In Texas. Also just pulled mine apart again, have some more details in a post soon.


Looking forward to some pics of PCB and close-up of VRAM chips


----------



## Enzarch

Alright, just pulled my ref LC card out again to do a liquid metal application and to re-inspect, since I had forgotten to check the power stages the first time around.

As I suspected from the lower VRM temps compared to my previous ref model, the LC card uses 90A TDA21490 power stages as opposed to the 70A ones on the base card.
There appear to be many component changes in the power delivery (at least caps and chokes).

The Samsung GDDR6 chips here are K4ZAF325BM-HCP2; I could not find a data sheet, so maybe they're just heavily binned?
(Normal 16gbps chips are K4ZAF325BM-HC16)

The only other change I can find is the additional header and cap in the corner for the cooler.
I will eventually disassemble and upload pics of the cooler, as it seems to be quite a nice design by Cooler Master.

As a sidenote, my very fastidious application of liquid metal didn't drop my edge temps much, but dramatically improved hotspot: single digit deltas at 420W.

You can use TechPowerUp's review to compare with the images below.


And I suppose I'll show off for once


----------



## J7SC

Enzarch said:


> Alright just pulled my ref LC card out again to do a liquid metal application and to re-inspect since I had forgotten to check the power-stages the first time around.
> 
> As I suspected due to lower VRM temps compared to my previous ref model; The LC card uses 90A TDA21490 power stages as opposed to the 70A ones on the base card.
> Appears to be many component changes on the power delivery (at least caps and chokes)
> 
> Samsung GDDR6 Chips here are K4ZAF325BM-HCP2 could not find a data sheet, maybe just heavily binned?
> (Normal 16gbps chips are K4ZAF325BM-HC16)
> 
> Only other change I can find is the additional header and cap in the corner for the cooler.
> I will eventually disassemble and upload pics of the cooler as it seems to be quite a nice design by Coolermaster
> 
> As a sidenote, my very fastidious application of liquid metal didnt drop my edge temps much, but dramatically improved hotspot: Single digit deltas at 420W
> 
> Can use TechPowerUps review to compare images below
> View attachment 2542292
> View attachment 2542293
> View attachment 2542294
> 
> 
> And I suppose Ill show off for once
> View attachment 2542295
> View attachment 2542296


 
Nice! The 90A power stages would help this card in sub-zero.

The VRAM pics remind me of that Russian fellow on YT who likes to solder different types of VRAM chips onto GPUs...maybe if I get a handful of those Samsung chips depicted above...


----------



## CS9K

Enzarch said:


> Alright just pulled my ref LC card out again to do a liquid metal application and to re-inspect since I had forgotten to check the power-stages the first time around.
> 
> As I suspected due to lower VRM temps compared to my previous ref model; The LC card uses 90A TDA21490 power stages as opposed to the 70A ones on the base card.
> Appears to be many component changes on the power delivery (at least caps and chokes)
> 
> Samsung GDDR6 Chips here are K4ZAF325BM-HCP2 could not find a data sheet, maybe just heavily binned?
> (Normal 16gbps chips are K4ZAF325BM-HC16)
> 
> Only other change I can find is the additional header and cap in the corner for the cooler.
> I will eventually disassemble and upload pics of the cooler as it seems to be quite a nice design by Coolermaster
> 
> As a sidenote, my very fastidious application of liquid metal didnt drop my edge temps much, but dramatically improved hotspot: Single digit deltas at 420W
> 
> Can use TechPowerUps review to compare images below
> View attachment 2542292
> View attachment 2542293
> View attachment 2542294
> 
> 
> And I suppose Ill show off for once
> View attachment 2542295
> View attachment 2542296


Oh right! I'm _so_ sorry for forgetting about you; for some reason I thought you were across the pond. I am a dingus; my bad!

That is one beefy system you have there! Also very interesting differences on the LC GPU vs the standard reference RX 6900 XT. I'll never complain about overbuilt power delivery, though!


----------



## J7SC

Enzarch said:


> Alright just pulled my ref LC card out again to do a liquid metal application and to re-inspect since I had forgotten to check the power-stages the first time around.
> 
> As I suspected due to lower VRM temps compared to my previous ref model; The LC card uses 90A TDA21490 power stages as opposed to the 70A ones on the base card.
> Appears to be many component changes on the power delivery (at least caps and chokes)
> 
> Samsung GDDR6 Chips here are K4ZAF325BM-HCP2 could not find a data sheet, maybe just heavily binned?
> (Normal 16gbps chips are K4ZAF325BM-HC16)
> 
> Only other change I can find is the additional header and cap in the corner for the cooler.
> I will eventually disassemble and upload pics of the cooler as it seems to be quite a nice design by Coolermaster
> 
> As a sidenote, my very fastidious application of liquid metal didnt drop my edge temps much, but dramatically improved hotspot: Single digit deltas at 420W
> 
> Can use TechPowerUps review to compare images below
> View attachment 2542292
> View attachment 2542293
> View attachment 2542294
> 
> 
> And I suppose Ill show off for once
> View attachment 2542295
> View attachment 2542296


Just to reiterate...I've done some complex builds and know how tricky some of it can be; I really like what you have done there, and that includes the gorgeous colour combo. As to the GPU die, how did you lap it? Wet sandpaper / stone with an underlying glass plate?


----------



## Enzarch

J7SC said:


> Just to reiterate...I've done some complex builds and know how tricky some of it can be, I really like what you have done there, and that includes the gorgeous colour combo. As to the GPU die, how did you lap it ? Wet sandpaper / stone with an underlying glass plate ?


No lapping, that's just streaking from preliminary cleaning. When I do lap, though, it's just wet sanding on a piece of float glass, up to about 5000 grit.


----------



## Spawnyspawn

I think I missed something somewhere. The RX6900XT can do 2400mhz mem clocks? What? I thought it was "just" a reference version of an XTXH chip card. Or are those OCs you can attempt with all XTXHs?

On a sidenote, I contacted Alphacool through their "Send it and get one cooler for free" program. They currently don't have a block that fits the Toxic PCB. They responded that they indeed do not have such a block, but that they also have no interest in creating one.
So for now, if you have a Toxic EE and want to add it to a custom loop, or want to watercool an aircooled Nitro+ SE or Toxic, Bykski is your only option (as far as I can tell).


----------



## Enzarch

Spawnyspawn said:


> I think I missed something somehwere. The RX6900XT can do 2400mhz mem clocks? What? I thought it was "just" a reference version of an XTXH chip card. Or are those OC's you can attempt with all XTXH's?
> 
> On a sidenote, I contacted Alphacool through their "Send it and get one cooler for free" program. They currently don't have a block that fits the Toxic PCB. They responded that they indeed do not have such a block, but that they also have no interest in creating one.
> So for now, if you have a Toxic EE and want to add it to a custom loop or watercool a Nitro+ SE or Toxic aircooled, Byksi is your only option (as far as I can tell).


The AMD reference liquid cooled model is the only card with 18gbps memory chips on it. (Some have loaded its vBios onto other cards and have been able to get similar memory clocks)

Or do what I have done for ages: use a core-only 'universal' water block and craft heatsinks for the rest


----------



## bunnybutt

Does anyone have any photos or details of how to disassemble the Sapphire Toxic Limited/Extreme editions?

Frankly the whole thing looks unwieldy. It sure was a bit of money... I'm apprehensive about how to take it apart... Also not sure how best to rebuild (I guess LM and this putty on the VRM you guys speak of?)


----------



## LtMatt

bunnybutt said:


> Does anyone have any photos or details of how to disassemble the Sapphire Toxic Limited/Extreme editions?
> 
> Frankly the whole thing looks unwieldy. It sure was a bit of money... I'm apprehensive about how to take it apart... Also not sure how best to rebuild (I guess LM and this putty on the VRM you guys speak of?)


Not my pictures, but was chatting with a fella on OcuK forums and he took a few pictures. Not sure if the link will work for others but... 8 new items by Divsta

Relax, the Toxic is extremely easy to take apart, change paste and remount it all. I've done it a dozen times over the use of two Toxics using Liquid Metal with good results. The hardest part is reconnecting the pump block/retention bracket to the pcb to ensure an even mount.

The reason for this is you need to apply even pressure in the center on the back of the PCB (where the back of the GPU die is) so the 4 screw holes from the pump block poke through the pcb evenly to ensure the best possible mount when you secure the retention bracket.

You need a third hand really, but it is possible and not that difficult. Once you've done it a few times (seeking the best mount and lowest deltas between edge/junction temps at idle/load) it becomes a simple 20 minute job.

First take off the backplate screws/backplate. Next take off the two larger screws on the PCI bracket. Next take off the 5 screws on the outside of the shroud. Then lastly remove the retention bracket. Watch out for the wires that are connected from the shroud to the PCB, they are not very long so try not to disconnect them, and ensure they don't catch on anything when you reassemble.

I did not bother surrounding the GPU die area in electrical tape or anything either. I just used the correct amount (small drop) of Liquid Metal (on the DIE and Block) and had no issues.


----------



## Spawnyspawn

So I found the AMD RX6900XT LC bios, but I can't seem to flash it under windows. The version of the ATI flash utility that supports XTXH flashing only comes in a portable installer that doesn't support force flashing. So I get the ID mismatch error and it won't flash. Is there a way to flash the bios under windows? Or do I have to use linux?
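For reference, the workaround usually reported for this situation is to fall back to the classic amdvbflash command line rather than the Windows portable build. The flags below (`-s` to save, `-i` to list adapters, `-f -p` to force-program) are the commonly cited usage, not something I have verified against the XTXH-capable release, and the .rom filename is just a placeholder, so treat this as a sketch:

```shell
# Sketch only: commonly cited amdvbflash usage for forcing a vBIOS with a
# mismatched subsystem ID. Flashing the wrong image can brick the card,
# so keep a dump of the original BIOS before touching anything.

# 1. Back up the current vBIOS from adapter 0
./amdvbflash -s 0 original_backup.rom

# 2. List adapters to confirm which index is the 6900 XT
./amdvbflash -i

# 3. Force-program adapter 0, ignoring the ID mismatch
./amdvbflash -f -p 0 RX6900XT_LC.rom
```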


----------



## toxick

Hi,
I'm a little late.
I finally took down the cooling to install the EK block. I hope to have time next week to test it.
I have to reassemble the air-cooled one and I can't find the Hitachi HM01 thermal pad.


----------



## J7SC

I hadn't done a late-night benching session in a while and with a foot injury found it hard to sleep anyway, sooo...my XTX card got some VminDep 1.218v. Alas, still hoping for VRAM control beyond 2150. Ambient for below was 23 C, MPT PL was 400W 'nominal' (plus 15%) and the heavily w-cooled card had GPU and VRAM temp in the mid-40s C and Hotspot at mid-to-upper 50s/low 60s C









@Blameless ...further to our earlier posts on this, min fps for Superposition 4K / default seems to stay low. The 3090 in a different system which scores 1400 points higher also still has that issue, as do the 2080 Tis in another system.

As discussed, these are also work systems so several (sometimes different) apps are always running in the background, but I did notice that the min fps drop always happens at the same spot - segment 10 in Superposition as the camera pans up. Since it is happening on all my different systems with different CPUs and GPUs (and somewhat different background apps), I wonder if I should re-download and re-install Unigine's Superposition.

*Does anyone else get a 'hiccup' that drops min fps by about 20 frames at Superposition 4K/8k at segment 10 ??*


----------



## SoloCamo

J7SC said:


> *Does anyone else get a 'hiccup' that drops min fps by about 20 frames at Superposition 4K/8k at segment 10 ??*


If you are referring to scene 10 out of the 17 then no, not here - at least on the 4k optimized run I just did. Stock core clocks, power limit bumped to +15 and mem set at 2100 w/ fast timings.

Edit:

Correction, I must have blinked initially, because now I see the exact hiccup on subsequent runs. So yes, able to replicate here.


----------



## Tradition

THUMPer1 said:


> Wow. I'm trying to find a 6900 LC myself. What a beautiful card.


I have one; this is what I was able to do with it by strapping 2 Delta fans to the rad









I scored 24 493 in Time Spy


Intel Core i7-12700K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## ZealotKi11er

Tradition said:


> I have one; this is what I was able to do with it by strapping 2 Delta fans to the rad
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 24 493 in Time Spy
> 
> 
> Intel Core i7-12700K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


stock voltage?


----------



## Tradition

ZealotKi11er said:


> stock voltage?


1225mV with 450W TDP


----------



## Enzarch

Anyone notice different driver min/max clock behavior with 22.1.1?
I stopped touching the minimum clock a while ago and left it at 500MHz, after testing and realizing it had zero effect on my setup.

However with the '22 drivers, especially in 3DMark, this is no longer the case: if I leave min at 500 the card will not go near max, behaving as if it is power limited.
I have not done significant testing to determine the exact behavior depending on the min/max delta, though I can say how this works varies quite a lot per application.


----------



## Blameless

J7SC said:


> I hadn't done a late-night benching session in a while and with a foot injury found it hard to sleep anyway, sooo...my XTX card got some VminDep 1.218v. Alas, still hoping for VRAM control beyond 2150. Ambient for below was 23 C, MPT PL was 400W 'nominal' (plus 15%) and the heavily w-cooled card had GPU and VRAM temp in the mid-40s C and Hotspot at mid-to-upper 50s C
> View attachment 2542763
> 
> 
> @Blameless ...further to our earlier posts on this, min fps for Superposition 4K / default seems to stay low. The 3090 in a different system which scores 1400 points higher also still has that issue, as do the 2080 Tis in another system.
> 
> As discussed, these are also work systems so several (sometimes different) apps are always running in the background, but I did notice that the min fps drop always happens at the same spot - segment 10 in Superposition as the camera pans up. Since it is happening on all my different systems with different CPUs and GPUs (and somewhat different background apps), I wonder if I should re-download and re-install Unigine's Superposition.
> 
> *Does anyone else get a 'hiccup' that drops min fps by about 20 frames at Superposition 4K/8k at segment 10 ??*


I haven't seen such a hiccup at that location on any of my three RDNA2 parts. Even my 6800 XT at 2500-2600 core and 2100 fast memory doesn't dip below 105 fps until test 13.

That 6800 XT and both samples of 6900 XT(HX) I've tried see their minimum fps at the transition between test 13 and 14, and reach almost that low during the very last test.

This is with stock driver graphics quality settings other than "surface format optimization", which I disable, and has been the case on every driver I've used since the RDNA2 launch. The pattern is also consistent between my main system, which has a 5800X, and my test bench, which has a 3900X. Both are Windows 10 (one 20H2, one 21H2), configured to be lean and light, but fully functional...they have no excess bloat, but they aren't stripped to the bone either.

As an aside, Superposition was one of the tests where 2124 mem consistently (but only slightly) outperformed 2150 mem (both fast) for me.
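One plausible (unconfirmed, purely illustrative) explanation for a slightly lower memory clock winning: GDDR6 error detection can trigger transfer replays as clocks climb, so effective throughput may peak below the highest clock that still "runs". A toy sketch with made-up replay rates, just to show the shape of the effect:

```python
def effective_bandwidth(clock_mhz: float, replay_rate: float) -> float:
    """Toy model: every detected transfer error is replayed, so a
    fraction of bus time is wasted; replay_rate is that fraction.
    The rates below are invented for illustration, not measured."""
    return clock_mhz * (1.0 - replay_rate)

# Hypothetical replay rates that ramp up past a sweet spot:
for clk, rate in [(2000, 0.0), (2124, 0.005), (2150, 0.02)]:
    print(clk, round(effective_bandwidth(clk, rate), 1))
# 2124 with a tiny replay rate ends up ahead of 2150 with a larger one
```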


----------



## CfYz

toxick said:


> I'm a little late.


Thanks for the comparison photos, but one more favor - can you take LC/AIR photos from the back, not only the front? And please post raw (high-resolution) photos if you can. Thank you! Really helpful!


----------



## ptt1982

Has anyone else experienced Time Spy instability with OCing and lower scores using the latest 22.1.1 drivers?

I had to lower clocks by 80mhz to get stability, and the new drivers react negatively to high power as well. First time experiencing this much instability when OCing.


----------



## ptt1982

Confirmed: the new 22.1.1 drivers are way worse for OCing and UVing. Avoid until a better release comes along!!

Some people have had problems with the new drivers when using 3D workload applications as well.


----------



## J7SC

*I can also confirm that 22.1.1* is not only not as good for oc'ing as the previous one, but it in fact caused all kinds of other weirdness in my Win 10 Pro system. Out the door it goes...



Blameless said:


> I haven't seen such a hiccup at that location on any of my three RDNA2 parts. Even my 6800 XT at 2500-2600 core and 2100 fast memory doesn't dip below 105 fps until test 13.
> 
> That 6800 XT and both samples of 6900 XT(HX) I've tried see their minimum fps at the transition between test 13 and 14, and reach almost that low during the very last test.
> 
> This is with stock driver graphics quality settings other than "surface format optimization", which I disable, and has been the case on every driver I've used since the RDNA2 launch. The pattern is also consistent between my main system, which has a 5800X, and my test bench, which has a 3900X. Both are Windows 10 (one 20H2, one 21H2), configured to be lean and light, but fully functional...they have no excess bloat, but they aren't stripped to the bone either.
> 
> As an aside, Superposition was one of the tests where 2124 mem consistently (but only slightly) outperformed 2150 mem (both fast) for me.


Well, it seems now like a '*tempest in a teapot*'...I went to Unigine's site to download a fresh copy of Superposition. Fortunately, I had the presence of mind (in spite of 3 am and the pain meds for the foot) to check their leader-board in my categories - score-wise and GPUs - and look at run details. Quite a lot of folks had their worst min at segment / scene 10 just like I do, at times with min fps lower than mine. That said, a bigger chunk had their (less drastic) minimum around scene 13 and/or 14.

As to VRAM, I had tested out fast-timing at 2000 to 2150 with the latter working best in terms of performance. Remember though that not only do I seem to have decent VRAM chips, but they also never go above the mid 40s C due to the HD cooling. As to optimizations, I don't touch the driver graphics setting at all, but will check out 'surface format optimization' when the time comes. I always run 4K Optimized or 8K Optimized from the drop-down menu.

As to Windows version, I'm on Win 10 Pro (latest updates) w/o any special prep or 'slimming'; this is the main work-PC after all in my home office with various apps running. Finally, I decided to do another run...same conditions and MPT PL / v-core on my XTX as in my prior post, but I pushed the GPU speed from 2807 to 2860...


----------



## Blameless

J7SC said:


> View attachment 2542832


Yeah, this score is almost 900 points higher than mine, but with a minimum fps eight frames lower than what I can consistently get on my 3900X system.


----------



## cfranko

I haven’t followed this thread for a long time, did anyone figure out a way to unlock memory clock on XTX cards?


----------



## ZealotKi11er

I feel like putting the PC outside and trying to run it at 1.3v. It's -22C right now.


----------



## SoloCamo

ZealotKi11er said:


> I feel like putting the pc outside and try to run it at 1.3v. Its -22C right now.


And here I was trying to push my system a bit more because it hit a whopping low of 39F last night here.


----------



## 99belle99

It's 6 degrees Celsius where I am, so not too bad - not cold enough to push the PC. I would do what you said above if it were that cold here, but make sure it's a dry cold and not a damp cold like we get where I am.


----------



## J7SC

Blameless said:


> Yeah, this score is almost 900 points higher than mine, but with a minimum fps eight lower than what I can consistently get on my 3900X system.


My proc used for this is a _3950X_ and ambient temp was _23 C_ ...I doubt the rest of my family would appreciate it if I turned the heat off at 3 am and opened all the windows, just because I can't sleep due to a (healing) injury - I most likely would find myself outside at about +3 C 

Below are three examples from the Unigine site / leader-board for 3090 and 6900XTs with similar scores / setups. The first two have that scene 10 hiccup, the last one the scene 13 low - which seems to be less costly overall. On my system, min frame rate instantly drops by about 20 fps.

I have to do some more testing, but I know that Unigine apps are more sensitive to IPC and RAM tuning than many other benches. I usually run binned Samsung B-die with very tight tFaw and tRC (+tRAS)...that might have something to do with it, not sure yet until I've done more testing. It did make a difference with the 3950X on FS2020.


----------



## Blameless

J7SC said:


> My proc used for this is a_ 3950X _and ambient_ temp was 23 C_ ...I doubt the rest of my family would appreciate if I would turn the heat off at 3 am and open all the windows, just because I can't sleep due to a (healing) injury - I most likely would find myself outside at about +3 C
> 
> Below are three examples from the Unigine site / leader-board for 3090 and 6900XTs with similar scores / setups. The first two have that scene 10 hiccup, the last one the scene 13 low - which seems to be less costly overall. On my system, min frame rate instantly drops by about 20 fps.
> 
> I have to do some more testing, but I know that Unigine apps are more sensitive to IPC and RAM tuning than many other benches. I usually run binned Samsung B-die with very tight tFaw and tRC (+tRAS)...that might have something to do with it, not sure yet until I've done more testing. It did make a difference with the 3950X on FS2020.
> View attachment 2542938


I don't know what's going on with the systems on the leaderboard, but I'm not able to reproduce that scene 10 dip after any number of runs. Both of my systems with RDNA2 parts look essentially like that third graph.

My test bench is not unusually fast either. It has an air cooled 3900X in an 80 dollar motherboard and is running a pair of cheap single rank Micron E-die DIMMs (rated for 3200 CL16) @ 3733 18-22-22-30, GDM enabled.

I don't think 4k optimized is particularly CPU/memory performance limited. My faster 5800X setup produces the same results with the same GPU.

Some sort of background task, or some kind of power option that affects the GPU would be my guess.


----------



## J7SC

Blameless said:


> I don't know what's going on with the systems on the leaderboard, but I'm not able to reproduce that scene 10 dip after any number of runs. Both of my systems with RDNA2 parts look essentially like that third graph.
> 
> My test bench is not unusually fast either. It has an air cooled 3900X in an 80 dollar motherboard and is running a pair of cheap single rank Micron E-die DIMMs (rated for 3200 CL16) @ 3733 18-22-22-30, GDM enabled.
> 
> I don't think 4k optimized is particularly CPU/memory performance limited. My faster 5800X setup produces the same results with the same GPU.
> 
> Some sort of background task, or some kind of power option that affects the GPU would be my guess.


It doesn't really matter anyhow. At the Unigine site, the top systems seem to mostly fall either into the 'Scene 13/14' or 'Scene 10' category, so it's _not a problem_ as such. On lower-to-mid level cards like my 980 Classifieds, it is scene 9...benchmarks are obviously not linear (which would be a shame anyhow)...

I base GPU purchases mostly on the PCB, and I'm very pleased with the potential my 'not top end' card fulfilled once I gave it strong cooling...the stock core clock is listed as 2050 MHz and boost clock as 2285 MHz at the XTX-typical 1.175v, and running (up to nominal) 2860 MHz per above at 1.218v just makes me happy. It's an unexpected bonus for this dual-mobo work-play build.









I reiterate that the custom cooling really helps with sustaining these clocks at 150%+ more power than the originally spec'ed limit. Low to mid 40s C for GPU and VRAM, low 60s C for Hotspot. As you know, I have added thermal putty and extra heatsinks etc. as part of the custom water-cooling. The stock air-cooler did a more or less decent job at stock PL, but its 3x 3800+ rpm fans were _annoying_.

Unless it is an AIO for other machines in the office, I use only dual or triple core copper/brass rads (360s or 480s) with medium to high density fpi and low-rpm / high static pressure push-pull fans that are whisper quiet but get the job done on cooling. I still have a dozen or more of the original GentleTyphoon 3K rpm (and some 5K rpm server fans) laying around and do like them / adore them for their cooling power - I just don't want to listen to them anymore when working 😠


----------



## SoloCamo

Anyone have any luck or pointers with Resizable BAR and the 6900XT? Any major known issues? Playing at 4K here and it seems the general consensus is it's better on than off for _most_ titles.



Blameless said:


> I don't know what's going on with the systems on the leaderboard, but I'm not able to reproduce that scene 10 dip after any number of runs. Both of my systems with RDNA2 parts look essentially like that third graph.


I'm getting the same on the 10th segment oddly enough and I kind of remember this same hiccup with my Radeon VII now that I think about it. Wish I still had the card to confirm. If I get a chance I'll try it out on my now backup rig (4790k / 32gb cl10 ddr3 2400 / Vega 64).


----------



## Blameless

J7SC said:


> It doesn't really matter anyhow. At the Unigine site, the top systems seem to mostly fall either into the 'Scene 13/14' or 'Scene 10' category, so it's _not a problem_ as such. On lower-to-mid level like my 980 Classifieds, it is scene 9...benchmarks are obviously not linear (which would be a shame anyhow)...


It's an anomaly, that undoubtedly has _some_ cause. Identifying that cause may help avoid such dips where they would be more relevant. Even if something about my setups is already the 'right' thing, I'd like to understand what precisely that is, for preventative purposes.

Totally different architectures/drivers would rationally be bottlenecked by different scenes, so they don't really provide a meaningful point of comparison.



SoloCamo said:


> Anyone have any luck or pointers with Resizable bar and the 6900XT? Any major known issues? Playing at 4k here and it seems the general consensus is it's better on then off for _most_ titles.


I leave it enabled, partially because I haven't found anything it's meaningfully slower in and partially because having the whole VRAM capacity mapped at once generally reveals VRAM instabilities faster.



SoloCamo said:


> I'm getting the same on the 10th segment oddly enough and I kind of remember this same hiccup with my Radeon VII now that I think about it. Wish I still had the card to confirm. If I get a chance I'll try it out on my now backup rig (4790k / 32gb cl10 ddr3 2400 / Vega 64).


I'd be curious to know if the same pattern exists on the Vega. It's a very different card, but the drivers are still the same, and anything that helps narrow down potential causal relationships would be useful.

I'm trying a few things to see if I can induce the drop on test 10.


----------



## J7SC

SoloCamo said:


> Anyone have any luck or pointers with* Resizable bar and the 6900XT? Any major known issues? Playing at 4k here *and it seems the general consensus is it's better on then off for _most_ titles.
> 
> I'm getting the same on the 10th segment oddly enough and I kind of remember this same hiccup with my Radeon VII now that I think about it. Wish I still had the card to confirm. If I get a chance I'll try it out on my now backup rig (4790k / 32gb cl10 ddr3 2400 / Vega 64).


I leave r_Bar on all the time as all my monitors are 4K now and that's where the 6900XT can use a little bit extra. There are some apps where it really helps (PortRoyal for example) and other apps where it doesn't seem to make a difference. Presumably, one can find the odd example where it may hurt performance, but so far I haven't personally come across it in what I usually run.


----------



## ZealotKi11er

SoloCamo said:


> Anyone have any luck or pointers with Resizable bar and the 6900XT? Any major known issues? Playing at 4k here and it seems the general consensus is it's better on then off for _most_ titles.
> 
> 
> 
> I'm getting the same on the 10th segment oddly enough and I kind of remember this same hiccup with my Radeon VII now that I think about it. Wish I still had the card to confirm. If I get a chance I'll try it out on my now backup rig (4790k / 32gb cl10 ddr3 2400 / Vega 64).


From what people say, you need Ryzen 5000 to see the true benefits. I hope HUB will do an investigation on Re-Bar soon. From what he said so far, Intel 9th-11th gen are not very good with it.


----------



## THUMPer1

Tradition said:


> i have one this is what i was able to do with it by strapping 2 delta fans to the rad
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 24 493 in Time Spy
> 
> 
> Intel Core i7-12700K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


Yeah where can I get one of those? haha


----------



## Tradition

THUMPer1 said:


> Yeah where can I get one of those? haha


in brazil there are plenty haha


----------



## CS9K

ZealotKi11er said:


> From what people say is that you need Ryzen 5000 to see the true benefits. I high HUB will do a investigation on Re-Bar soon. From what he said so far is that Intel 9-11 gen are not very good with it.


This has not been my observation with my previous 9700K+Z390 system and my current 5800X+X570 system. ReSize BAR made _roughly_ the same % difference, disabled vs enabled, on both platforms.

My test only measured the % difference disabled vs enabled, and only on two platforms, though. I too look forward to HWUB or others doing a complete multi-generation test to see things at a much broader scale.


----------



## ZealotKi11er

CS9K said:


> This has not been my observation with my previous 9700K+Z390 system and my current 5800x+X570 system. ReSize BAR made _roughly_ the same % difference with ReSize BAR disabled vs enabled.
> 
> My test was only the % difference disabled vs enabled, and only on two platforms, though. I too look forward to HWUB or others doing a complete multi-generation test to see things at a much broader scale.


I think it has more to do with games. I also have a Z390 + 9900K and saw no difference vs X570 + 5950X.


----------



## SoloCamo

Blameless said:


> I'd be curious to know if the same pattern exists in the Vega. It's a very different card, but the drivers are still the same, and anything that help narrow down potential causal relationships would be useful.
> 
> I'm trying a few things to see if that allows me to induce the drop on test 10.


Well, I did a bunch of runs with a Vega 64. On the first run I didn't notice it, but all subsequent runs did show a slight hitch on segment 10. Going to do more tests, but one thing that seemed somewhat consistent was that with a mem OC applied the hitch seemed ever so slightly worse - but it's probably in my head.


----------



## J7SC

SoloCamo said:


> Well did a bunch of runs with a Vega 64. On the first run I didn't notice it, but all subsequent runs did show a slight hitch on segment 10. Going to do more tests, but one thing that somewhat seemed consistent was when there was a mem oc applied the hitch seemed ever so slightly worse but it's probably in my head.


I'll check on the system RAM settings such as tFAW etc as well later in the coming week. As mentioned though, I don't see it as such a big issue, since _anything but a horizontal straight line_ will have local and/or global minima and maxima  if you know what I mean.

When I checked the various runs at the Unigine leader-board, it was either scene 10 or 13/14 that most often showed up as a pronounced dip. Finally, sometimes it can be just an oddly written app, but we'll get to 'the bottom' of it eventually.


----------



## SoloCamo

J7SC said:


> I'll check on the system RAM settings such as tFAW etc as well later in the coming week. As mentioned though, I don't see it as such a big issue, since _anything but a horizontal straight line_ will have local and/or global minima and maxima  if you know what I mean.
> 
> When I checked the various runs at the Unigine leader-board, it was either scene 10 or 13/14 that most often showed up as a pronounced dip. Finally, sometimes it can be just an oddly written app, but we'll get to 'the bottom' of it eventually.


Just to clarify I meant gpu memory. Did a bunch of stock vs oc runs, and some oc runs with no mem oc'ing. Something is weird about that spot. I typically always run radeon anti lag as well and forgot to check if it was on with the vega 64 system. Also always use a preset of high textures at the driver level. Will try to mess with those on both rigs and see if anything changes.


----------



## J7SC

SoloCamo said:


> Just to clarify I meant gpu memory. Did a bunch of stock vs oc runs, and some oc runs with no mem oc'ing. Something is weird about that spot. I typically always run radeon anti lag as well and forgot to check if it was on with the vega 64 system. Also always use a preset of high textures at the driver level. Will try to mess with those on both rigs and see if anything changes.


Yeah, I actually meant system memory settings such as tFAW and a couple of others that, from past experience, can affect GPU hiccups. I will check it out in the coming week and report back. FYI, my driver settings are all on default, and r_BAR on.

EDIT: just did a few runs on both the 6900XT and 3090 Strix, both at totally stock settings (no oc, no PL). For the 6900XT in particular, the scene 10 drop was not only still there but got more pronounced! I then re-ran the same stock test after relaxing tFAW - no effect. Like many other cards, this one sets its min fps at scene 10. I will do some other tests this week w/ Microsoft virus programs disabled.


----------



## Swenzzon86

Hello! Andreas from Sweden here. After many years of absence from my interest in computers, I resumed it this autumn. I bought a 5900X and an XFX 6900XT. I had actually thought to buy a 3080 Ti, but when a 6900XT showed up at a really good price it turned out to be one of those things  which I really do not regret today! What a monster it is! My friend has a 3090, and I always have better performance than him in games, as long as ray tracing is not involved... My 6900XT also turned out to be an XTXH, so the overclocking interest remained  

An Alphacool water block was mounted last Friday - really nice temperatures with it. I ran some benchmarks (Time Spy) and have managed to get 25660 points so far. The card was run at 2950MHz (benchmarking outside at about 0 degrees), average temp of about 30-35 degrees.

Now to my question! I have probably read every single page here xD and have tried all the tweaks with MPT. 25660 is pretty good, though with those clock frequencies it feels like at least 26k should be possible! I have compared with others' runs - very many have significantly better points with significantly lower clocks... Am I missing something you should set? Or how do you manage to get such high scores with lower clocks?

The card pulled at most 610W in the GT2 test. I noticed that the frequency went well below the minimum for the first 10 seconds of GT2 (minimum was set to 2850), to then go up to max 2950MHz; even the memories dipped. GT1 seems to work properly with around 2900MHz during the whole test (less load in that test). Could it be my PSU that can't handle it? I only have an 850W EVGA, and have read that a slightly overclocked 6900 can pull up to 800 watt spikes.

That was a long post  And thank you for all the good tips shared here! Really grateful for that


----------



## ZealotKi11er

Swenzzon86 said:


> Hello! Andreas from Sweden here. After many years of absence from my interest in computers, I resumed it this autumn. I bought a 5900X and XFX 6900XT. Had actually thought to buy a 3080ti but when a 6900xt showed up at a really good price it turned out to be one of those things  which I really do not regret today! what a monster it is! my friend has a 3090, I always have better performance than him in games, as long as ray tracing is not involved .. My 6900XT also turned out to be an XTXH, the overclocking interest remained
> A water block was mounted last Friday from alphacool, really nice temperatures with it. I ran some benchmarks (Time Spy) Has managed to get 25660 points so far The card was run at 2950mhz (benchmarks outside with about 0 degrees) average temp of about 30-35 degrees Now to my question! Have probably read every single page here xD have tried all tweaks with MPT 25660 is pretty good, though! with those clock frequencies, it feels like at least 26k should be possible! have compared with others' runs very many who have significantly better points with significantly lower clocks ... do I miss something you should set? or how do you manage to get such high scores with lower clocks? The card pulled at most 610w in GT2 test I noticed that the frequency went well below the minimum of the first 10 seconds in GT2 (minimum was set to 2850) to then go up to max! 2950mhz, even the memories dipped GT1 seems to work properly with around 2900mhz during the whole test (less load in that test) Could it be my PSU that can't handle it? Has only an 850W Evga Read that a slightly overclocked 6900 can pull up to 800 watt spikes It was a long post  And thank you for all the good tips you have received here! really grateful for that


If you just want it for benchmarks, there is not much else you can do, but for gaming there are no secrets. Does your card actually hit 2950MHz or close to that during TS?


----------



## Swenzzon86

I think the average was around 2895MHz according to 3DMark. GT2: at most 2950MHz, at the least below 2800. GT1: 2900MHz stable throughout the test.


----------



## J7SC

Swenzzon86 said:


> Hello! Andreas from Sweden here. After many years of absence from my interest in computers, I resumed it this autumn. I bought a 5900X and XFX 6900XT. Had actually thought to buy a 3080ti but when a 6900xt showed up at a really good price it turned out to be one of those things  which I really do not regret today! what a monster it is! my friend has a 3090, I always have better performance than him in games, as long as ray tracing is not involved .. My 6900XT also turned out to be an XTXH, the overclocking interest remained
> A water block was mounted last Friday from alphacool, really nice temperatures with it. I ran some benchmarks (Time Spy) Has managed to get 25660 points so far The card was run at 2950mhz (benchmarks outside with about 0 degrees) average temp of about 30-35 degrees Now to my question! Have probably read every single page here xD have tried all tweaks with MPT 25660 is pretty good, though! with those clock frequencies, it feels like at least 26k should be possible! have compared with others' runs very many who have significantly better points with significantly lower clocks ... do I miss something you should set? or how do you manage to get such high scores with lower clocks? The card pulled at most 610w in GT2 test I noticed that the frequency went well below the minimum of the first 10 seconds in GT2 (minimum was set to 2850) to then go up to max! 2950mhz, even the memories dipped GT1 seems to work properly with around 2900mhz during the whole test (less load in that test) Could it be my PSU that can't handle it? Has only an 850W Evga Read that a slightly overclocked 6900 can pull up to 800 watt spikes It was a long post  And thank you for all the good tips you have received here! really grateful for that


Sounds like a superb card! I would do a few max-settings bench runs, not for the scores but with HWiNFO open during the run. It will give you some pretty good numbers on max clocks (including effective), peak power consumption etc., including for your CPU and some peripherals...that should tell you a bit more about whether your PSU is a limiting factor or not.


----------



## Swenzzon86

J7SC said:


> Sounds like a superb card ! I would do a few max-settings bench runs not for the scores but with HWInfo open during the run. It will give you some pretty good numbers on max clocks (including effective), power peak consumption etc including for your CPU and some peripherals...that should tell you a bit more about your PSU being a limiting factor or not.


Yes, it seems to be a good 6900  The only thing I noticed is that I get worse points than some with lower clocks. According to HWiNFO, the GPU took a maximum of 611 watts; CPU maybe around 100-120 watts? I don't think the average load would be such a big problem, but I think power spikes far exceed 850W.


----------



## J7SC

Swenzzon86 said:


> Yes it seems to be a good 6900  The only thing I noticed is that I get worse points than some with lower clocks. According to hwinfo, the gpu took a maximum of 611 watts CPU maybe around 100-120 watts? Do not think the average value of the load would be such a big problem, but power spikes I think far exceeds 850w


...if you add the CPU max and other peripherals to it and take PSU efficiency into account, you're probably at the 'pushing your luck' stage....may not matter much in the short run, but I certainly would try to get a min 1000W platinum rated PSU if I were you.
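To put some rough numbers on that advice, here is a back-of-the-envelope sketch. The spike factor and headroom figures are assumptions for illustration, not specs; only the 611W GPU / ~120W CPU readings come from the thread:

```python
def recommended_psu_watts(gpu_w: float, cpu_w: float, rest_w: float = 75.0,
                          gpu_spike_factor: float = 1.3,
                          headroom: float = 0.8) -> float:
    """Estimate a sensible PSU rating (illustrative assumptions).

    gpu_spike_factor: transients on big GPUs can briefly exceed the
    sustained draw by ~30% or more (assumed, varies per card).
    headroom: keep worst-case load at or below ~80% of the rating.
    rest_w: rough allowance for drives, fans, pump, and the board.
    """
    worst_case = gpu_w * gpu_spike_factor + cpu_w + rest_w
    return worst_case / headroom

# Swenzzon86's HWiNFO numbers: ~611 W GPU, ~120 W CPU
print(round(recommended_psu_watts(611, 120)))  # lands well above 850 W
```

With these assumptions the estimate comes out around 1200W-plus, which is why an 850W unit looks marginal for a 600W+ card even though the average draw fits.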


----------



## colourcode

Swenzzon86 said:


> Hello! Andreas from Sweden here


Vilket köpte du? (Which one did you buy?)

I recently got myself a Red Devil, which has a whole lot of coil whine. Like unbearably much. I would return it, but I got absolutely run over by covid.
I see they have the MERC 319 Black available for less than what I paid for the Red Devil.

My issue with the AMD cards, judging from my own and others' comments, is that nearly everyone says they have coil whine... 
I've had a single card in the last 10 years with coil whine (it was returned). I also had a 3090 for a few days and it was dead silent, save for the disgusting fan noise. lol

Would the MERC be a gamble, or is it better built than the Devil?
AFAIK the VRMs don't have contact with anything on the Devil. I assume cooling pads would mitigate the whine if present..?


----------



## CfYz

Guys, who has an XTLC - what sustained GPU clocks do you get? 3DMark 2.4 maximum, Control 2.4-2.5 mostly. I can't hit anything near 2.6, let alone 2.7 GHz - sample limitation or some wrong settings? All settings on 21.10.2: power limit +15%, voltage 1200, GPU clock 2750 (can't set 2800+ - it crashes the driver after hitting apply...), VRAM tuning fast timings only.

P.S. TS says 20080 when the best are 21k+ for a 6900XT combined with my 3900XT...


----------



## J7SC

colourcode said:


> Vilket köpte du? (Which one did you buy?)
> 
> I recently got myself a red devil, which has a whole lot of coil whine. Like unbearably much. I would return it but I got absolutely run over by covid.
> I see they have the MERC 319 Black available for less than what I gave for the red devil.
> 
> My issue with the amd card, judging from my own and comments, is that nearly everyone say they have coil whine...
> I've had a single card the last 10 years with coil whine (was returned). I also had a 3090 for a few days and it was dead silent, spare the disgusting fan noise. lol
> 
> Would the MERC be a gamble, or is it better built than the devil?
> AFAIK the vrms don't have contact with anything on the devil. I assume cooling pads would mitigate the whine if present..?


I haven't heard a single peep of coil whine on my card (3x8 pin Gigabyte OC, XTX) whether prior on air or now on water, whether on stock PL or 450W custom MPT.


----------



## colourcode

J7SC said:


> I haven't heard a single peep of coil whine on my card (3x8 pin Gigabyte OC, XTX) whether prior on air or now on water, whether on stock PL or 450W custom MPT.


I should've recorded the coil whine variety this card exhibits. It went from loud chirping, almost scraping, to settling down after ~30 hours of Cyberpunk, and now it's like the worst tinnitus you could imagine.


----------



## Enzarch

CfYz said:


> Guys, who has XTLC - which GPU clocks sustained do you have? 3D Mark 2.4 maximum, Control 2.4-2.5 mostly. Can't hit anything near 2.6 or worse 2.7 Ghz - sample limitation or some wrong settings? All settings on 21.10.2 - power limit +15%, voltage 1200, gpu clock 2750 (can't set 2800+ - crushes driver after hitting apply...), VRAM tuning Fast timings only.
> 
> P.S. TS says 20080 when best only 21+ for 6900XT combination with my 3900XT...


My XTLC seems fairly similar to yours, although mine sustains higher actual clocks; that is probably only due to higher power limits and cooling.
I don't have any issues with setting higher than 2800 in the driver. You may want to try a fresh re-install.

I can set about 2770MHz in driver which gets me around 2600-2740 actual in heavy apps @435W limit.
Mine hits a brick wall at just under 2800 actual, even with an additional 50mV, though I suspect I may be able to get around this somewhat with secondary voltage tweaking.


----------



## 99belle99

Anyone else see the reports of XTXH 6900s getting the faster memory chips? I forget the speeds, but supposedly the same as the LC reference.


----------



## J7SC

colourcode said:


> I should've recorded the coil whine variety this card exhibits. It went from loud chirping, almost scraping, to settle down after ~30 hours of cyberpunk and now it's like the worst tinnitus you could imagine.


Oh I believe you that yours is doing coil whine, I'm just pointing out that not every 6900XT does it since it sounded like that was implied. It usually comes down to custom-PCB components used.


----------



## ZealotKi11er

99belle99 said:


> Anyone else see the reports of XTXH 6900's to get the faster memory chips. I forget the speeds but the same as the LC reference supposedly.


Most likely a refresh.


----------



## CS9K

colourcode said:


> I should've recorded the coil whine variety this card exhibits. It went from loud chirping, almost scraping, to settle down after ~30 hours of cyberpunk and now it's like the worst tinnitus you could imagine.


My RX 6800 XT, both RX 6800s, and my RX 6900 XT, all reference cards from AMD, had noticeable but very quiet/muffled, still tolerable coil whine when they were wearing their stock air coolers.

My RX 6800 coil whines slightly louder now that I've got it in an ATX case and have pushed it to 2550Mhz, but it's still barely noticeable over the fan noise of the GPU and case fans (IF I'm sitting here listening for it).

My (former RX 6800 XT and) RX 6900 XT wearing (the same) EK water block, meanwhile, screech like a banshee trying to yell over her tinnitus... but ONLY if the EK backplate is firmly screwed in. On both GPUs, if I tighten the backplate screws _only_ enough to touch the backplate, plus 1/4 turn, then the coil whine is present, but the volume of it is fine. The EK backplate apparently acts like a _speaker_ for the coil whine if you tighten the screws all the way down. I have tested this with and without extra thermal padding on the front of the block to ensure the inductor casings touch the block.

All of the Nvidia GPUs I have tested (3070 Ti FE, 3070 XC3, 3080 FE, 3080 Ti FE) have had coil whine to some degree, with the 3070 Ti FE being the worst, but all were quiet enough to be tolerable in an ATX case sitting on my desk.


----------



## jonRock1992

ptt1982 said:


> Has anyone else experienced Timespy instability with OCing and lower scores using the latest drivers 22.1.1. ?
> 
> I had to lower clocks by 80mhz to get stability, and the new drivers react negatively to high power as well. First time experiencing this much instability when OCing.


22.1.1 is a terrible driver for me. It's very unstable, requiring higher voltage and lower clocks.


----------



## J7SC

jonRock1992 said:


> 22.1.1 is a terrible driver for me. It's very unstable, requiring higher voltage and lower clocks.


...just guessing, but if AMD puts out a new driver that requires higher voltage and limits cards to lower clocks, they probably had a lot of customer-service submissions about crashes before, with marginal cards and/or unrealistic expectations. Anyway, I went back to 21.1 and everything is fine


----------



## CS9K

J7SC said:


> ...just guessing, but if AMD puts out a new driver that requires higher voltage and limits to lower clocks, they probably had a lot of 'customer service' subs on crashes before w/ marginal cards and/or unrealistic expectations. Anyway, I went back to 21.1 and everything is fine


That's a pretty big assumption.


----------



## J7SC

CS9K said:


> That's a pretty big assumption.


...wasn't an 'assumption' - I wrote '_just guessing_' . And I'm _guessing_ even more that it had something to do with crash-complaints about Battlefield 2042 - but I could be entirely wrong, and insist on the opposite.


----------



## CS9K

J7SC said:


> ...wasn't an 'assumption' - I wrote '_just guessing_' . And I'm _guessing_ even more that it had something to do with crash-complaints about Battlefield 2042 - but I could be entirely wrong, and insist on the opposite.


Fair point.


----------



## Enzarch

Enzarch said:


> Anyone notice different driver min/max clock behavior with 22.1.1?
> I stopped even touching minimum clock and leaving at 500MHz a while ago after testing and realizing it had zero effect on my setup.
> 
> However with the '22 drivers, especially in 3DMark, this is no longer the case: If I leave min. at 500 the card will not go near max, behaving as if it is power limited.
> I have not done significant testing to determine the exact behavior dependent on min/max delta. Though I can say how this works varies quite a lot per application.





ptt1982 said:


> Has anyone else experienced Timespy instability with OCing and lower scores using the latest drivers 22.1.1. ?
> I had to lower clocks by 80mhz to get stability, and the new drivers react negatively to high power as well. First time experiencing this much instability when OCing.





jonRock1992 said:


> 22.1.1 is a terrible driver for me. It's very unstable, requiring higher voltage and lower clocks.


I noticed very different min/max behavior with the '22 drivers (see quoted post above),
and after tweaking minimum values had no performance regression.
Might be something to play around with.


----------



## Conenubi701

I ripped my XTXH LC model apart to replace the thermal pads a couple of months ago and swapped in some Fujipoly 17 W/mK thermal pads for the memory. I ended up going with thermal putty for the backplate, since the Liquid Devil Ultimate doesn't come with pads between the backplate and the back FETs. Also replaced the stock paste with some Kryonaut Extreme. Not the biggest fan of the "Extreme" Kryonaut, so next time I rip it apart I'll probably go back to regular Kryonaut.

Anyway, since then I did a format and forgot where to get the latest MPT version lmao. Anyone have a link? I remember there was a version that allowed us to unlock voltage but I never got to test it out. (Edit: I just went to Igor's site and got MPT from there. Time to go into the lab and start benching again)


----------



## ZealotKi11er

Looks so nice.


----------



## J7SC

Enzarch said:


> I noticed very new min/max behavior with the '22 drivers (see post above)
> And after tweaking minimum values had no performance regression.
> Might be something to play around with.


On my system, it wasn't just the 22.x performance... there were odd boot issues for the first time ever on this system, so I went back to 21.


----------



## Spawnyspawn

So, I have decided to sell my Toxic. I managed to quiet down my PC a bit, but turns out it wasn't the fans annoying me, it's the pump noise. It's such an annoying sound, it drives me crazy. Especially since I usually work without headphones on and there's no way to reduce the noise.
On top of that, I find the performance of my specific chip rather lackluster. When using the 400W power limit it hits throttle temps easily. Lowering the power limit to 350W just makes it perform like a reference card with the same power limit.

So, air-cooled XTXH is next. I can get a good price on a Red Devil Ultimate and at least that card is going to be dead silent while I work.
I can even slap a waterblock on if I decide to go for a custom loop down the line.


----------



## LtMatt

Spawnyspawn said:


> So, I have decided to sell my Toxic. I managed to quiet down my PC a bit, but turns out it wasn't the fans annoying me, it's the pump noise. It's such an annoying sound, it drives me crazy. Especially since I usually work without headphones on and there's no way to reduce the noise.
> On top of that, I find the performance of my specific chip rather lackluster. When using the 400W power limit it hits throttle temps easily. Lowering the power limit to 350W just makes it perform like a reference card with the same power limit.
> 
> So, air-cooled XTXH is next. I can get a good price on a Red Devil Ultimate and at least that card is going to be dead silent while I work.
> I can even slap a waterblock on if I decide to go for a custom loop down the line.


Fair enough. Though I think something is up with either the cooler or your mount if your temps are that bad. For sure it is nowhere near as good as a water block, but I can keep temps at 75-84C hotspot peak in a 24-25C room, with 50% fan speed. Generally, it's high 70s hotspot when playing Vanguard/Halo, 4K max settings. These games draw a lot more power than some others, like Red Dead 2, Far Cry 6, etc.

Not sure if anyone here plays Vanguard, but I find temps can spike up to the low 80s on the final screens after the rounds end, where it shows the play of the game and the Team MVP. Something about these canned action sequences drops FPS to the 70-80 range and spikes power draw up to 380-400W briefly. In general gameplay it's mostly 350-370W. I get lower temps, by a few C or so, if I move the rad out of the case. That's running at a locked 2800MHz+ core clock in game, 1.212V, 399W power limit.

YMMV.


----------



## Spawnyspawn

LtMatt said:


> Fair enough. Though I think something is up with either the cooler or your mount if your temps are that bad. For sure it is nowhere near as good as a water block, but I can keep temps at 75-84C hotspot peak in a 24-25C room, with 50% fan speed. Generally, it's high 70s hotspot when playing Vanguard/Halo, 4K max settings. These games draw a lot more power than some others, like Red Dead 2, Far Cry 6, etc.


Thing is, my card is stock standard. It hasn't been opened yet, so a bad mount would have been done during assembly at the factory. The card itself stays rather cool, but the delta to the hotspot is usually around 35 degrees. It can hit 95-100 degrees hotspot temps easily. At 400W... with 100 percent fan speed...


----------



## J7SC

LtMatt said:


> Fair enough. Though I think something is up with either the cooler or your mount if your temps are that bad. For sure it is nowhere near as good as a water block, but I can keep temps at 75-84C hotspot peak in a 24-25C room, with 50% fan speed. Generally, it's high 70s hotspot when playing Vanguard/Halo, 4K max settings. These games draw a lot more power than some others, like Red Dead 2, Far Cry 6, etc.
> 
> Not sure if anyone here plays Vanguard, but I find temps can spike up to the low 80s on the final screens after the rounds end, where it shows the play of the game and the Team MVP. Something about these canned action sequences drops FPS to the 70-80 range and spikes power draw up to 380-400W briefly. In general gameplay it's mostly 350-370W. I get lower temps, by a few C or so, if I move the rad out of the case. That's running at a locked 2800MHz+ core clock in game, 1.212V, 399W power limit.
> 
> YMMV.


Why 399 W PL, and not 400.18 W ?


----------



## ptt1982

Spawnyspawn said:


> So, I have decided to sell my Toxic. I managed to quiet down my PC a bit, but turns out it wasn't the fans annoying me, it's the pump noise. It's such an annoying sound, it drives me crazy. Especially since I usually work without headphones on and there's no way to reduce the noise.
> On top of that, I find the performance of my specific chip rather lackluster. When using the 400W power limit it hits throttle temps easily. Lowering the power limit to 350W just makes it perform like a reference card with the same power limit.
> 
> So, air-cooled XTXH is next. I can get a good price on a Red Devil Ultimate and at least that card is going to be dead silent while I work.
> I can even slap a waterblock on if I decide to go for a custom loop down the line.


I cannot bear any noise from my PC, because I have some strange autistic/ADHD elements in my brain and am hypersensitive to noise. My monitor, speakers, gaming systems etc. are in my living room, so the ultimate solution was to move my PC to our bedroom and crank up the fans and pump to levels where they perform well but do not bother my wife too much (she likes to chill in our bedroom). I bought a USB hub with a long cable and a 15m optical HDMI cable, and cleanly wired them with floor/wall cable mounts, color matching and all. My wife didn't care once she saw the end result (though getting the project started took a debate that lasted 2-3 months), which is a perfect score in my book.

Highly recommended for people who cannot bear noise: invest in long, high-quality cables, put your PC on wheels, and drag it 10+ meters away from where you sit, ideally with walls and furniture between you and it. Perfect silence when listening to music, playing games, or watching movies. And most of all, when WFH.

Never again will I keep a PC or a console around my speakers, they eat up too much of the soundscape of my Dynaudio Special 40s! (okay flexing the audio muscle here a bit, I admit)


----------



## ptt1982

J7SC said:


> On my system, it wasn't just the 22. performance...there were odd boot issues for the first time ever on this system, so I went back to 21.


It seems AMD pulled the driver and replaced it directly with the one that has the hotfix, released on Jan 18th. They still recommend the 21 driver on their website, and now have the 22.1.2 version as the optional one. Something was clearly up with it. Maybe they rushed it to get the +7% God of War performance out there for the reviewers.


----------



## jonRock1992

Spawnyspawn said:


> So, I have decided to sell my Toxic. I managed to quiet down my PC a bit, but turns out it wasn't the fans annoying me, it's the pump noise. It's such an annoying sound, it drives me crazy. Especially since I usually work without headphones on and there's no way to reduce the noise.
> On top of that, I find the performance of my specific chip rather lackluster. When using the 400W power limit it hits throttle temps easily. Lowering the power limit to 350W just makes it perform like a reference card with the same power limit.
> 
> So, air-cooled XTXH is next. I can get a good price on a Red Devil Ultimate and at least that card is going to be dead silent while I work.
> I can even slap a waterblock on if I decide to go for a custom loop down the line.


Think again...
The Red Devil Ultimate is a VERY noisy GPU. It has very loud fans and coil whine.


----------



## Spawnyspawn

jonRock1992 said:


> Think again...
> The Red Devil Ultimate is a VERY noisy GPU. It has very loud fans and coil whine.


Thanks for the heads up. Did some research, and the coil whine probably comes from being able to pull 500W with only slightly better input filtering than the reference design has.
I should probably find a deal on an MSI Gaming Z Trio or an ASRock OC Formula, although the latter is probably going to be next to impossible to find.


----------



## J7SC

FYI, I just noticed that MPT 1.3.8 beta 2 is out (may have been for a while?). Btw, any recommendation re. SoC voltage and FCLK values in MPT - worth changing from stock values, and if so, what's recommended as 'fast but still mostly safe' - or is that a rabbit hole?


----------



## LtMatt

Spawnyspawn said:


> Thing is, my card is stock standard. It hasn't been opened yet, so a bad mount would have been done during assembly at the factory. The card itself stays rather cool, but the delta to the hotspot is usually around 35 degrees. It can hit 95-100 degrees hotspot temps easily. At 400W... with 100 percent fan speed...


Yeah, that does sound about right. Sometimes the stock mount is bad and the paste is dried out, so 30C deltas may well happen in that scenario. It's easy to fix though, and this card is easy to dismantle. You can buy the warranty stickers online too for easy replacement, but I can understand if you can't be bothered with that hassle.


J7SC said:


> Why 399 W PL, and not 400.18 W ?


347W in MPT, +15% = 399W. It always spikes around 4-5W over the set power limit though. After a gaming session I look in HWiNFO64 and the PPT is always something like 404-405W, even with a limit of 399W.
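For anyone following along, the Wattman slider just scales the base power limit set in MPT by a percentage. A quick sketch of that math (the function name is mine, purely illustrative):

```python
def effective_power_limit(mpt_base_w: float, slider_pct: float) -> float:
    """The Wattman power slider scales the base power limit set in MPT."""
    return mpt_base_w * (1 + slider_pct / 100)

# 347W base in MPT with the slider maxed at +15%:
print(round(effective_power_limit(347, 15)))  # 399
```

The 404-405W PPT readings above are the card briefly spiking past the limit, not an error in the math.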


----------



## colourcode

Which non watercooled card would you recommend?

Eyeing the 6900 MERC, but there seem to be different versions of it. It's $250 cheaper than the MSI Gaming Z Trio.

The review comments on Guru3D (I think) claim the MERC review sample had much better VRM components than the retail ones, so I'm not sure what to do.


----------



## Spawnyspawn

colourcode said:


> Which non watercooled card would you recommend?
> 
> Eyeing the 6900 MERC, but there seem to be different versions of it. It's $250 cheaper than the MSI Gaming Z Trio.
> 
> The review comments on Guru3D (I think) claim the MERC review sample had much better VRM components than the retail ones, so I'm not sure what to do.


I've been looking into it. The MERC 319 Limited Black is basically the same PCB as a Red Devil Ultimate, but with slightly better input filtering. The MSI Gaming Z Trio has the second-best PCB, but the cooler allows rather high junction temps and is rather noisy. The ASRock OC Formula has the best PCB, but again is very loud.
I planned to go for the Red Devil, but that card basically uses reference-design input filtering, which is why it has issues with coil whine (I think).
So I'm planning to pick up a MERC 319 if I can find one.


----------



## colourcode

Spawnyspawn said:


> I've been looking into it. The MERC 319 Limited Black is basically the same PCB as a Red Devil Ultimate, but with slightly better input filtering. The MSI Gaming Z Trio has the second-best PCB, but the cooler allows rather high junction temps and is rather noisy. The ASRock OC Formula has the best PCB, but again is very loud.
> I planned to go for the Red Devil, but that card basically uses reference-design input filtering, which is why it has issues with coil whine (I think).
> So I'm planning to pick up a MERC 319 if I can find one.


I just returned my Red Devil due to the coil whine. But it appears corona has triggered / worsened my tinnitus, so part of my complaints may have been brain-coil whine. 😂

The red devil cooler was quite good. At around 1700rpm the noise was on par with my noctua D15 keeping the 5950x from burning up when gaming.

This is the MERC 319 I've been eyeing. Not sure if it's the same as the Limited Black you mentioned... XFX RADEON RX 6900XT MERC 319 Black 3XDP 16GB | NetOnNet
They have a cheaper one with a lower clock, so maybe this is the decent one.


----------



## Spawnyspawn

colourcode said:


> I just returned my Red Devil due to the coil whine. But it appears corona has triggered / worsened my tinnitus, so part of my complaints may have been brain-coil whine. 😂
> 
> The red devil cooler was quite good. At around 1700rpm the noise was on par with my noctua D15 keeping the 5950x from burning up when gaming.
> 
> This is the Merc319 i've been eyeing. not sure if it's the same as your mentioned Limited Black... XFX RADEON RX 6900XT MERC 319 Black 3XDP 16GB | NetOnNet
> They have a cheaper one with lower clock, so maybe this is the decent one.


This is the part number of that card: RX-69XTATBD9

This is the XTXH chip part number: RX-69XTACSD9. So it looks like that card isn't an XTXH chip.
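For anyone comparing listings, the difference really is just that SKU string. A throwaway check (the only part numbers here are the two from this thread, not an official list):

```python
# MERC 319 part numbers mentioned in this thread (not an official list):
#   RX-69XTACSD9 -> XTXH silicon (Limited Black)
#   RX-69XTATBD9 -> regular XT silicon
XTXH_PART_NUMBERS = {"RX-69XTACSD9"}

def is_xtxh(part_number: str) -> bool:
    """True if the listing's part number matches a known XTXH SKU."""
    return part_number.strip().upper() in XTXH_PART_NUMBERS

print(is_xtxh("RX-69XTATBD9"))  # False
print(is_xtxh("rx-69xtacsd9"))  # True
```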


----------



## cfranko

I got an RX 6600 and I tried the TEMP_DEPENDENT_VMIN method to increase core voltage, but it didn't work. Does that method only work with the 6900 XT, or am I doing something wrong?


----------



## colourcode

Spawnyspawn said:


> This is the part number of that card: RX-69XTATBD9
> 
> This is the XTXH chip part number: RX-69XTACSD9. So it looks like that card isn't an XTXH chip.


Damn, the RX-69XTACSD9 is $250 more than the other one... does it have more premium components, or is it only the chip itself that is binned better on the XTXH ones?


----------



## Spawnyspawn

colourcode said:


> Damn the RX-69XTACSD9 is $250more than the other one... does it have more premium components or is it only the chip itself that is binned better for the XTXH ones?


As far as I can tell only the chip is different.


----------



## lawson67

LtMatt said:


> Fair enough. Though I think something is up with either the cooler or your mount if your temps are that bad. For sure it is no where near as good as a water block, but I can keep temps at 75-84c hotspot peak in a 24c-25c room, with 50% fan speed. Generally, it's high 70s hotspot when playing Vanguard/Halo, 4K max settings. These games draw a lot more power than some other games, like Red Dead 2, Far Cry 6 etc.
> 
> Not sure if anyone here plays Vanguard, but I find temps can spike up to the low 80s on the final screens after the rounds end. Where it shows the play of the game and the Team MVP. Something about these canned action seuquences drops FPS to the 70-80 range, and spikes power draw up to 380-400W briefly. In general gameplay its mostly 350-370W. I get lower temps if i move the rad out of the case, by a few C or so. That's running at a locked 2800Mhz+ core clock in game, 1.212v, 399W power limit
> 
> YMMV.


At 1.212V with a PL of 399W you will have worse performance in demanding games unless you put your power limit right up to compensate for the higher core voltage. If you don't believe me, set core voltage to 1.212V with a PL of 399W, then run GT2 and watch the clock plummet with anything over 1.2V. With a higher core voltage there must be a higher PL to sustain the clocks you're pushing, or they will plummet in demanding games just as they will in GT2. At 1.212V core voltage you will need about 450-460W to sustain a core clock of 2800MHz+ in demanding games.
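Those numbers line up with the usual dynamic-power rule of thumb, P ≈ k·V²·f. A rough sketch, with the constant fitted to the 1.212V / ~455W / 2800MHz point above (the constant and function are mine, purely illustrative, and ignore static leakage):

```python
# Rough dynamic-power model: P ≈ K * V^2 * f.
# K fitted so that 1.212V at ~455W sustains ~2800MHz, per the post above.
K = 455 / (1.212**2 * 2800)  # W per (V^2 * MHz), ~0.111

def sustainable_clock_mhz(power_limit_w: float, vcore_v: float) -> float:
    """Estimate the core clock a given power limit can sustain at a voltage."""
    return power_limit_w / (K * vcore_v**2)

# Same 1.212V but only a 399W limit: the sustainable clock drops well
# below 2800MHz, which is why GT2 clocks plummet.
print(round(sustainable_clock_mhz(399, 1.212)))  # 2455
```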


----------



## jonRock1992

colourcode said:


> I just returned my Red Devil due to the coil whine. But it appears corona has triggered / worsened my tinnitus, so part of my complaints may have been brain-coil whine. 😂
> 
> The red devil cooler was quite good. At around 1700rpm the noise was on par with my noctua D15 keeping the 5950x from burning up when gaming.
> 
> This is the Merc319 i've been eyeing. not sure if it's the same as your mentioned Limited Black... XFX RADEON RX 6900XT MERC 319 Black 3XDP 16GB | NetOnNet
> They have a cheaper one with lower clock, so maybe this is the decent one.


I just got over some variant of coronavirus and it made my tinnitus really bad. I even started losing my hearing a bit in one ear. I noticed that with a higher voltage, the coil whine on my Red Devil Ultimate is greatly reduced. This is likely due to the extra heat.


----------



## LtMatt

lawson67 said:


> At 1.212V with a PL of 399W you will have worse performance in demanding games unless you put your power limit right up to compensate for the higher core voltage. If you don't believe me, set core voltage to 1.212V with a PL of 399W, then run GT2 and watch the clock plummet with anything over 1.2V. With a higher core voltage there must be a higher PL to sustain the clocks you're pushing, or they will plummet in demanding games just as they will in GT2. At 1.212V core voltage you will need about 450-460W to sustain a core clock of 2800MHz+ in demanding games.


Yes, the power limit would need to be higher for synthetics like FS Ultra and Time Spy GT2, but I don't run those. In games my power limit is more than enough for what I play, so I don't see any point in having it higher.

Some videos of games running using those very settings.
Forza Horizon 5 Benchmark | 6900 XT Toxic Extreme | 2160P Maximum Settings - YouTube
Halo Infinite - Campaign | 6900 XT Toxic Extreme | 2160P Max Settings - YouTube
Call of Duty: Vanguard Online | 6900 XT Toxic Extreme | 4K Ultra Settings - YouTube


----------



## Nawaf

Been following you guys for over a year now. Great TS numbers, and they motivated me to keep pushing. Finally opened up my XFX 6900 Black for a repaste. Not all thermal pads were changed as I didn't have the 2 and 3mm ones at hand, but the important ones were swapped out.

With some MPT tweaking I can hit 400-405 watts in Adrenalin, and I've hit some decent numbers. Not so much on the MHz though, as it would just crash; topping out at 2615MHz. The thermal paste is a few hours old and needs curing time, from experience. Saw a 10C improvement, can't complain.


----------



## CfYz

Nawaf said:


> With some MPT tweaking I can hit 400-405 watts in Adrenalin, and I've hit some decent numbers. Not so much on the MHz though, as it would just crash; topping out at 2615MHz. The thermal paste is a few hours old and needs curing time, from experience. Saw a 10C improvement, can't complain.


Strange, I hit 20525 in the same test with a 3900XT (stock, PBO disabled) - and my GPU clocks 2.5GHz most of the time, but sometimes it's 2.4GHz. MPT was shifted a little bit - stock +15% PL is 326W, and I set it to 326W to start with, so +15% PL in my case is just 378W... Just wondering why - memory bandwidth maybe?


----------



## IIISLIDEIII

In your opinion, what could cause me to no longer reach the scores in Time Spy that I did before, with the same overclock and the same driver?

these are the scores I obtained until a few days ago:


these are the current scores:


----------



## Swenzzon86

Ran a little bench yesterday. With liquid metal and cold weather, the GPU maxed out at 2980MHz, averaging about 2900MHz. Am very happy with the overclocking, but....
I do not understand why my score is so low with the relatively high overclock. Have tried lower clocks, but get lower scores. Tried different FCLK settings etc. Posting a comparison with a result that has the same CPU - more than 1000 points difference. Thoughts?


----------



## J7SC

IIISLIDEIII said:


> In your opinion, what could cause me to no longer reach the scores in Time Spy that I did before, with the same overclock and the same driver?
> 
> these are the scores I obtained until a few days ago:
> 
> 
> these are the current scores:


The latest Windows update pack could be the cause


----------



## Nawaf

CfYz said:


> Strange, I hit 20525 in the same test with 3900XT (stock, PBO disabled) - and my GPU clocks 2.5 most of the time, but sometimes its 2.4Ghz. MPT was a little bit shifted - stock+15% PL is 326W, and I set it to 326W to start with. So 15%PL in my case just 378W... Just wondering why - memory bandwidth maybe?


That's because you are comparing an 8-core CPU to a 12-core one. More cores definitely get you a higher score in TS. PBO off gets me a better score with the 5800X; not liking the finicky AMD CPU and RAM setup, honestly. I suspect motherboard choice is a priority for overclocking with AMD, but I will still stick with AMD in the future.

What’s your graphics score out of curiosity?


----------



## Swenzzon86

Is there any good guide on how to flash the LC BIOS? Tried with amdvbflash, but got an ID mismatch.


----------



## IIISLIDEIII

J7SC said:


> The latest Windows update pack could be the cause


What other tests can I do to rule out a hardware problem - GPU, PSU, motherboard?

If you look at the GPU values in HWiNFO in the screenshots, you will notice current draw values lower than those I had when I was getting higher scores. What do you think?


----------



## CfYz

Nawaf said:


> What’s your graphics score out of curiosity?


Graphics score *22 900*
Graphics test 1 *150.51* FPS
Graphics test 2 *130.33* FPS
CPU score *12 929*
CPU test *43.44* FPS


----------



## Nawaf

CfYz said:


> Graphics score *22 900*
> Graphics test 1 *150.51* FPS
> Graphics test 2 *130.33* FPS
> CPU score *12 929*
> CPU test *43.44* FPS


This clears up your query on my scores. I've set the overclock to 2615MHz but it fluctuates between 2.4-2.5GHz, hence our similar scores. I need to try again and see if it has improved after the thermal paste curing phase. Will keep you updated.


----------



## SoloCamo

edit - wrong thread


----------



## alceryes

AMD reference RX 6900 XT.
Just did the quick-n-easy pcb back thermal pad mod. I went with the 'less is more' approach with the pads. All temps dropped nicely with the VRM temp going down 9ºC! I've left all OC and power settings at stock and the GPU core still peaks at 2500MHz. Extremely happy with the results. I'll probably start tinkering with small overclocks soon. Think I got lucky with this reference card. 

*Before adding thermal pads.* Five Horizon Zero Dawn benchmark runs back to back (ultimate quality).










*After adding thermal pads.* Five Horizon Zero Dawn benchmark runs back to back (ultimate quality).










I used Gelid Solutions GP-Extreme 12W thermal pads. 3mm for the eight memory module areas and 1mm for the VRM components.









Edit: After some undervolting and a light overclock I've got a 6-7% Time Spy graphics score improvement.


----------



## Helmbo

Hey guys, I own an ASUS ROG-STRIX-LC-RX6900XT, which I believe is the XTXH version. Just a quick tip before the workday starts, for all of you who can't stand the coil whine. I was in the same boat a couple of days ago, and it was so bad I looked for a way to move the PC to another room (couldn't return the card, since I slapped an EKWB waterblock and backplate on it). So I loosened the screws all around the backplate, just enough so the plate was still in place, and gone was 70% of the whine. Of course this was in my case, but try it out. My card runs 1.2V with MPT set to 440W/470W - clocks 2750-2850ish while gaming.

One thing I noticed while tinkering with the card: the more watts, the less stutter, so feed the card well, guys. I put the TDC to 470 because the card, on random and rare occasions, does spike that high, and if you don't allow it you will hit a noticeable stutter, especially in Ark, Red Dead Redemption 2, and Assassin's Creed Valhalla.

Good day.


----------



## FishTankTower

lawson67 said:


> At 1.212V with a PL of 399W you will have worse performance in demanding games unless you put your power limit right up to compensate for the higher core voltage. If you don't believe me, set core voltage to 1.212V with a PL of 399W, then run GT2 and watch the clock plummet with anything over 1.2V. With a higher core voltage there must be a higher PL to sustain the clocks you're pushing, or they will plummet in demanding games just as they will in GT2. At 1.212V core voltage you will need about 450-460W to sustain a core clock of 2800MHz+ in demanding games.


Hi there!

I was reading one of your comments and it sounds like you are able to increase the voltage being fed to the core. Do you have a link or advice as to how to do that? I was under the impression that making that edit using MorePowerTool did not work?

Thanks!


----------



## kairi_zeroblade

FishTankTower said:


> Hi there!
> 
> 
> 
> I was reading one of your comments and it sounds like you are able to increase the voltage being fed to the core, do you have a link or advise as to how to do that? I was under the impression that making that edit using More power tool did not work?
> 
> 
> 
> Thanks!


It's still using MPT, though you will have to disable some safety straps in order to do so. And just a big WARNING: make sure you adequately cool your GPU.


----------



## J7SC

^^This

I've run 1.218V (on a regular 'XTX', normally restricted to 1.175V) only a couple of times via MPT, but any time you're _removing a safety, it pays to be very cautious_, and also to have prepared the GPU. My 6900 XT is in a loop with 1200x62 rads and dual D5s, along with other cooling-related preps per below, so the temps are well controlled even at 450W. When it was air-cooled, this would likely have pushed hotspot well past 100C.


----------



## FishTankTower

kairi_zeroblade said:


> It's still using MPT, though you will have to disable some safety straps in order to do so. And just a big WARNING: make sure you adequately cool your GPU.


I see, thanks for the reply! I do - I have the Liquid Devil RX 6900 XT that came with the pre-installed waterblock, and it's supposed to be a pretty well-binned card, but with the 1.175V that the Unleashed BIOS allows, it only sustains a 2550MHz clock. I did modify the power setting using MPT to 350W/375W TDP, but no difference; it won't go past 2550MHz. I believe the limitation is the voltage, so I want to find out how to unlock that so I can actually push the card. I have 2 960 radiators with 8 240 fans pulling air through them to cool down both my delidded 7980XE CPU and my new RX 6900 XT. Upgraded from a 2070 Super SLI setup.


----------



## J7SC

FishTankTower said:


> I see, thanks for the reply! I do - I have the Liquid Devil RX 6900 XT that came with the pre-installed waterblock, and it's supposed to be a pretty well-binned card, but with the 1.175V that the Unleashed BIOS allows, it only sustains a 2550MHz clock. I did modify the power setting using MPT to 350W/375W TDP, but no difference; it won't go past 2550MHz. I believe the limitation is the voltage, so I want to find out how to unlock that so I can actually push the card. I have 2 960 radiators with 8 240 fans pulling air through them to cool down both my delidded 7980XE CPU and my new RX 6900 XT. Upgraded from a 2070 Super SLI setup.


Well, cooling is not a problem with your setup  ...re. extra voltage beyond the 1.175V lock, search this thread / Google for 'TEMP_DEPENDENT_VMIN' and what steps to take with it in MPT v1.3.7+. I don't have the exact post link here, but it looked very much related to an earlier 'how-to' guide (which is in German...). Of course, do it at your own risk and use extra caution...


----------



## colourcode

Would you get:
RADEON RX 6900XT MERC 319 Black 3XDP 16GB
or
Sapphire RADEON™ RX 6900 XT SE Nitro+ Gaming OC... | NetOnNet


----------



## kairi_zeroblade

colourcode said:


> Would you get:
> RADEON RX 6900XT MERC 319 Black 3XDP 16GB
> or
> Sapphire RADEON™ RX 6900 XT SE Nitro+ Gaming OC... | NetOnNet


Sapphire


----------



## colourcode

kairi_zeroblade said:


> Sapphire


Would you rather get the watercooled Toxic for 100 euro more? I could fit it, but I've not had good experiences with water-cooled computers; they're rather noisy compared to decent fan cooling.

Asus TUF 6900 XT costs 100 euro less than the Sapphire card right now. Is the Asus cooler better?

**** it. Someone bought the TUF just as I was processing the purchase. Ordered the Nitro. Praying for no coil whine.


----------



## kairi_zeroblade

colourcode said:


> Would you rather get the watercooled toxic for 100euro more?


yes.. given this is the 6900XT, any better cooling solution out of the box is most welcome..

regarding coil whine, it's pretty much random..


----------



## colourcode

kairi_zeroblade said:


> yes..given this is the 6900XT any better cooling solution out of the box is most welcome..
> 
> regarding coil whine its pretty much random..


Opted out, don't feel like putting 3 top fans in the case. Maybe I'll try the watercooled card if this one has coil whine 😒


----------



## kairi_zeroblade

colourcode said:


> Opted out, dont feel like putting 3 top fans in the case. Maybe i'll try the watercooled card if this has coil whine 😒


which one did you get?? for the air cooled ones, I would suggest the Aorus Master.. chunky, beefy cooling solution I must say..


----------



## colourcode

kairi_zeroblade said:


> which one did you get?? for the air cooled ones, I would suggest the Aorus Master.. chunky, beefy cooling solution I must say..


Sapphire RADEON™ RX 6900 XT SE Nitro+ Gaming OC... | NetOnNet


11308-03-20G

Cooler looks shoddy NGL. Could still cancel and get the Merc (non LTD)


----------



## Conenubi701

FishTankTower said:


> I have the Liquid Devil RX 6900 XT that came with the pre-installed waterblock, and it's supposed to be a pretty well-binned card, but with the 1.175 V that the unleashed BIOS allows it only sustains a 2550 MHz clock. I modified the power settings using MPT to 350 W/375 W TDP, but no difference; it won't go past 2550 MHz. I believe the limitation is the voltage, so I want to find out how to unlock that so I can actually push the card. I have two 960 radiators with eight 240 mm fans pulling air through them to cool both my delidded 7980XE and my new RX 6900 XT. Upgraded from a 2070 Super SLI setup.


The Unleashed BIOS on a Liquid Devil Ultimate allows 1.200 V stock. Are you sure you're on the Unleashed BIOS? If you have a non-Ultimate, those cards top out at 1.175 V even on the Unleashed BIOS. If that's the case, feed it more power before messing with voltage limits. My Ultimate will eat up 400 W at 2785 MHz and I haven't tried giving it more power since I haven't needed to. Keep in mind: 348-349 W in MPT + 15% Power Limit in the AMD driver = 400-401 W.
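That MPT-plus-slider arithmetic is just a multiplication; a minimal sketch (the helper name is mine, values are from the post):

```python
# The AMD driver's Power Limit slider scales the base power limit set in
# MorePowerTool (MPT), so ~348-349 W in MPT with +15% lands around 400-401 W.
def effective_power_limit(mpt_watts: float, slider_pct: float) -> float:
    """Board power actually allowed: MPT base x (1 + driver slider %)."""
    return mpt_watts * (1 + slider_pct / 100)

print(round(effective_power_limit(349, 15)))  # 401
```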


----------



## Conenubi701

Took advantage of the cold weather (cold for my region, 47°F) to bench the card after the repaste and thermal pads. Took #1 in Timespy, Timespy Extreme, and Firestrike Ultra, and #2 for Firestrike Extreme (though #1 if purely looking at graphics score) with my temporary combo until I get my 5950X back in again. Honestly, I was surprised my old stock 2700X pushed out a 23,000+ graphics score in TimeSpy.


----------



## ballofgold

I'm running a custom loop with the Aorus Waterforce 6900 XT, with clocks set at 2870 MHz, MPT VMIN, and the power settings below.

I could lower my power limit to around 420 W and still pass TimeSpy with a 24k score. However, if I increase the clock to 2880 MHz it crashes, even though max power draw is only 450 W at peak and junction temperature peaks at 74°C. What is the bottleneck in my case? It seems like it's not the temperature and it's not the power. Any suggestions?


----------



## kairi_zeroblade

colourcode said:


> Sapphire RADEON™ RX 6900 XT SE Nitro+ Gaming OC... | NetOnNet
> 
> 
> 11308-03-20G
> 
> Cooler looks shoddy NGL. Could still cancel and get the Merc (non LTD)


seems a wise buy, it's a very capable overclocker..


----------



## CS9K

ballofgold said:


> I'm running custom loop with Aorus Waterforce 6900 XT. with clocks set at 2870Mhz MPT VMIN and power settings below
> View attachment 2544856
> 
> 
> I could lower my power limit to around 420 and still pass the TimeSpy with 24k score. However if I increase clock to 2880Mhz it crashes even though Max power draw is only 450W at peak and junction temperature peaks at 74C. What is the bottleneck in my case? It seems like its not the temperature and its not the power? Any suggestions?


As happens with most of the RX 6800 and higher models that I've seen: if you give it enough power limit and can keep it cool, you will eventually run out of voltage. My reference RX 6900 XT can do 2700 MHz, but 2750 MHz is a no-go due to voltage limits.


----------



## marjanoos

Received the 6900XT Red Devil Ultimate today. Beautiful card. Coil whine af.  But I came here because I have an issue with Crysis Remastered: I get a warning message that HW raytracing is disabled because my drivers are too old. I have installed the latest stable drivers, twice. Has anyone experienced this?


----------



## lawson67

ballofgold said:


> I'm running custom loop with Aorus Waterforce 6900 XT. with clocks set at 2870Mhz MPT VMIN and power settings below
> View attachment 2544856
> 
> 
> I could lower my power limit to around 420 and still pass the TimeSpy with 24k score. However if I increase clock to 2880Mhz it crashes even though Max power draw is only 450W at peak and junction temperature peaks at 74C. What is the bottleneck in my case? It seems like its not the temperature and its not the power? Any suggestions?


You've met the limitation of your silicon. You could add slightly more voltage, but at a certain point even higher voltage won't help you get higher clocks.


----------



## SoloCamo

So with PL at +15% (293 W) my reference cooler seems to struggle to keep up in a very well-cooled case (Thor V2) when running 2400 MHz+ for a while. There is a delta of about 30°C and the hotspot gets over 100°C after a bit. Is this pretty typical? I'm not maxing the fans on the cooler, but even when I put them pretty high (2k rpm) it doesn't keep up as well as I would hope.

Meanwhile the stock PL of 255 W gets nowhere near those temps while maintaining only slightly lower clock speeds overall, with much lower fan rpm. I get that the near-40 W increase in power is going to increase heat, but it seems kind of dramatic compared to cards I've raised the PL on in the past.

I usually game at -10% on the PL anyway (230 W), and the hotspot never goes above 75°C with 1k rpm on the fans. The performance is good at 4K for the games I play, so I'd rather keep the card cool. I took the clock speeds down to 1 GHz and was still getting similar or better performance than my prior OC'd and UV'd Radeon VII.


----------



## J7SC

SoloCamo said:


> So with PL at +15 (293w) my reference cooler seems to struggle to keep up in a very well cooled case (Thor V2) when running 2400mhz+ for a while. There is a delta of about 30c and the hotspot gets over to 100 after a bit. Is this pretty typical? I'm not maxing the fans on the cooler but even when I put them pretty high (2k rpm) it doesn't keep up as well as I would hope.
> 
> Meanwhile the stock PL of 255w gets no where near close to those temps while maintaining only slightly lower clock speeds over all with much lower fan rpm. I mean I get the near 40w increase in power is going to increase heat but it seems kind of dramatic compared to cards I've increased PL on in the past,
> 
> I usually game at -10 on the PL anyways (230w) and the hot spot never goes above 75c with 1k rpm on the fans. The performance is good at 4k for the games I play so I'd rather keep the card cool. I took down the clock speeds to 1ghz and was still getting similar or better performance than my prior oc'ed & uv'ed Radeon VII.


I dug around for some older stock air-cooler results. On the left, 'no MPT'; on the right, 'MPT w/ extra ~50 W'. Even with the stock fans on full blast (~3800 rpm), the extra 50 W went past the cooler's capabilities - never mind the jet-engine sound at those fan speeds 😲.

These 6900XTs can get very toasty very quickly, with a highly concentrated 7nm heat focus. With extensive custom water-cooling, on the other hand, I can run an extra 150 W PL above stock, yet Hotspot et al. stay 30°C lower, even with extended bench or gaming sessions.


----------



## SoloCamo

J7SC said:


> I dug around for some older stock air-cooler results. On the left 'no MPT', on the right 'MPT w/ extra ~ 50W'. Even with the stock fans on full blast (~3800 rpm), the extra 50W went past the cooler's capabilities - never mind the jet-engine sound at those fan speeds 😲.
> 
> These 6900XTs can get very toasty very quickly, with a highly concentrated 7nm heat focus. With extensive custom water-cooling on the other hand, I can run an extra 150W PL above stock, yet Hotspot et al stays 30 C lower, even w/ extensive bench or gaming sessions.
> View attachment 2544901


Thanks. The 6900XT for me, even at full fan speed, is still a lot quieter than the reference-cooled 290X and Vega 64 I had. Let's just say headphones were a requirement. Even my Radeon VII on the stock cooler was loud AF in comparison. I've been running 1.15 V, as it's been stable in everything at this point with a PL of 230 W, and it's still an amazing card. Can't complain when my Radeon VII more than paid for it (and scalped miners on top of it).


----------



## alceryes

SoloCamo said:


> The 6900XT for me even at full fan speed is still a lot more silent than the reference cooled 290X and Vega 64 I had. Let's just say headphones were a requirement.


Agreed. The reference fan is very quiet. My H80i v2 CPU cooler is louder after 30+ mins of gaming and it's pretty quiet in its own right.
I had a Sapphire Vega 64 with that *super loud* blower cooler. I could only stand it for a couple of months before I ripped it off and put on the MORPHEUS II cooler with a couple of silent Noctua fans.


----------



## ZealotKi11er

SoloCamo said:


> Thanks. The 6900XT for me even at full fan speed is still a lot more silent than the reference cooled 290X and Vega 64 I had. Let's just say headphones were a requirement. Even my Radeon VII on the stock cooler was loud AF in comparison. I've been running 1.15V as it's been stable in everything at this point with a PL of 230w and it's still an amazing card. Can't complain when my Radeon VII more then paid for it (and scalped miners on top of it).


I was playing around with my 290X and boy, that thing was loud. AMD got it very wrong back then. A $650 290X makes more sense than a $650 Fury X. They could have had everything with the 290X.


----------



## SoloCamo

ZealotKi11er said:


> I was playing around with my 290X and boy, that thing was loud. AMD got it very wrong back then. A $650 290X makes more sense than a $650 Fury X. They could have had everything with the 290X.


Yea, I picked up the 290X at launch back in Oct 2013 for $550; it was the BF4 edition, which came with the game. The card did 1500 MHz on the memory and OC'd well for me, despite the loud reference cooler. Actually going to be selling my reference Vega 64 and putting the 290X back in service in my living-room PC, as it barely gets used as is.

I'll never sell that card, and it's still got plenty of horsepower for today's games, at least at 1080p. 4 GB of VRAM and playing at 4K were the only reasons I had to retire it a few years back.


----------



## J7SC

...you folks are making me all misty-eyed, nostalgically speaking 😢

they were 'pigs on gas' (watts) though, w/ custom bios and sub-ambient


----------



## FishTankTower

Conenubi701 said:


> The Unleashed BIOS on a Liquid Devil Ultimate allows 1.200 V stock. Are you sure you're on the Unleashed BIOS? If you have a non-Ultimate, those cards top out at 1.175 V even on the Unleashed BIOS. If that's the case, feed it more power before messing with voltage limits. My Ultimate will eat up 400 W at 2785 MHz and I haven't tried giving it more power since I haven't needed to. Keep in mind: 348-349 W in MPT + 15% Power Limit in the AMD driver = 400-401 W.


hey, thanks for the reply and the advice as well!

It is not the Ultimate; however, the card has a physical switch that shows OC/UNLEASHED.

But when I looked in the Radeon software it would max out at 1.175 V. I tried changing it in MPT to 1.2 V, but it didn't do anything helpful - it actually lowered the core clock. So I went back in, changed it back to 1.175 V, and everything went back to normal. Right now I have only really messed with the power settings. I will go ahead and increase the power and see where that truly maxes out.


----------



## FishTankTower

J7SC said:


> Well, cooling is not a problem w/ your setup  ...re. extra voltage beyond the 1.175 v lock, search this thread / Google for 'TEMP_DEPENDENT_VMIN' and what steps to take in MPT v1.3.7+ with that. I don't have the exact post link here, but it looked very much related to > this earlier 'how-to' guide (which is in German...). Of course, do at your own risk and use extra caution...


I found the thread that explains Temp Dependent VMIN. I enabled that, loaded the BIOS, edited it, saved it, then wrote the SPPT, all while running it as administrator.

Once I rebooted and opened the Radeon software, I could see all the new values as maxes on the slider bars. However, once I made all the appropriate selections and ran the stress test, the program would instantly crash.

I then lowered the clock speed to 2605 MHz; the stress test didn't crash, but the max wattage was 237 W and the clock speed was in the 2400s.

It doesn't look like that BIOS modification is helping with unlocking the voltage... any more ideas?

Thanks!


----------



## FishTankTower

Alright, I guess I had made some typos when I was editing the BIOS.

I got it to work, but the max I got was 2758 MHz. It would go over 400 W - I think it hit up to ~430 W, but very briefly - and then it would just drop down to 350-370 W.

I also noticed that the voltage slider doesn't have any control over the voltage anymore. It is basically static at 1.250 V: no matter what core speed or preset you set (quiet, balanced, etc.), the voltage is always 1.250 V. That's not very ideal, as I don't want to kill the core by having it overvolted 100% of the time.


----------



## SoloCamo

J7SC said:


> ...you folks making my all misty-eyed, nostalgically speaking 😢


Yea, there are five cards I'll likely never sell: my GF4 Ti4200 128MB (AGP 8x), ATI X800 Pro AGP, ATI X1950 Pro 256MB AGP, AMD 7970 GHz Edition, and my 290X.

Realistically I wanted to have the entire GCN flagship lineup, but ah well - better to let someone else game with them than have them sit on a shelf.


----------



## ZealotKi11er

SoloCamo said:


> Yea, I picked up the 290X on launch back in Oct 2013 for $550 and it was the BF4 edition which came with the game. Card did 1500mhz on the memory and oc'ed well for me, despite the loud reference cooler. Actually going to be selling my reference Vega 64 and putting the 290X back in service in my living room pc as it barely gets used as is.
> 
> I'll never sell that card and it's still got plenty of horsepower for todays games, at least at 1080p. 4gb of vram and playing at 4k was the only reason I had to retire it a few years back.


I had 2x 290X water-cooled


SoloCamo said:


> Yea, there are five cards I'll likely never sell. My GF4 ti4200 128mb (AGP 8x), ATI x800pro AGP , ATI x1950pro 256mb AGP, AMD 7970GHZ Edition and my 290X.
> 
> Realistically I wanted to have the entire GCN flagship lineup but ah well, better to let someone else game with them then sit on the shelf.


I got HD 7970 > 290X > Fury X > Vega 64 > Radeon 7 > 5700 XT > 6900 XT.


----------



## SoloCamo

ZealotKi11er said:


> I got HD 7970 > 290X > Fury X > Vega 64 > Radeon 7 > 5700 XT > 6900 XT.


For GCN I had the 7970, 7970 GE, 290X, RX 580 8GB (sold), Vega 56 (given to my brother), Vega 64 (going up for sale), and Radeon VII (sold), and now RDNA2 with the 6900XT. Going to the 6900XT was by far the biggest jump I've made in the last decade. Hopefully I'll get the same lifespan out of it as I did my 290X (2013-2019).

All I want at this point is an extremely efficient AMD card with at least 8 GB of VRAM that matches at minimum Vega 64 performance, to pair with my 4790K setup. The 290X will be a fine placeholder in the interim, but for a living-room PC both GPUs are just too loud.


----------



## CS9K

FishTankTower said:


> I also noticed that the Volt slider doesn't have any control over the amount of voltage. Basically it is set to static at 1.250v so no matter what you set the core speed or the setting, quit, balanced, etc, the voltage is always 1.250v, so that's not very ideal as I dont want to kill the core by it being over volted 100% of the time.


Aye, this is why I don't really recommend that folks do temp. dependent Vmin except for bench runs; it's definitely not an ideal daily driver modification.


----------



## lawson67

FishTankTower said:


> Alright, I guess I had made some typos when I was editing the bios.
> 
> I got it to work, but max I got was to 2758mhz. It would go over 400W, I think it hit up to like 430, but very briefly and then it would just drop down to the 350-370W.
> 
> I also noticed that the Volt slider doesn't have any control over the amount of voltage. Basically it is set to static at 1.250v so no matter what you set the core speed or the setting, quit, balanced, etc, the voltage is always 1.250v, so that's not very ideal as I dont want to kill the core by it being over volted 100% of the time.


The core voltage will only stay at 1.250 V if you disable the deep-sleep settings in Features, and only while under load (i.e. playing games, etc.); otherwise it will settle back down while browsing the net. However, if you set a higher core voltage you will notice your clocks drop, as you mentioned earlier, because a higher core voltage requires you to run a much higher power limit.

For example, if I boost the core voltage to as little as 1.218 V and set a PL of 400 W while running a max core clock of 2870 MHz, in a demanding game like Metro Exodus Enhanced Edition I will see my core clock plummet in parts of the map to as little as 2500 MHz, whereas at 1.2 V or under, using the same core clock, it never drops below 2750 MHz and is mostly around 2810 MHz. This is because I need a much higher PL at a core voltage of 1.218 V to sustain those clocks.
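That trade-off tracks the usual CMOS dynamic-power rule of thumb, P ∝ V²·f: at a fixed clock, power grows with the square of voltage, so even a small voltage bump eats into a fixed power budget. A rough sketch with the post's numbers (constant effective capacitance assumed, purely illustrative):

```python
# Rough dynamic-power scaling: at the same clock, power scales with V^2,
# so a small voltage increase raises the wattage needed to hold that clock.
def scaled_power(p_watts: float, v_old: float, v_new: float) -> float:
    """Power needed at v_new to sustain the same clock held at v_old."""
    return p_watts * (v_new / v_old) ** 2

# 400 W at 1.200 V -> budget needed for the same clocks at 1.218 V:
print(round(scaled_power(400, 1.200, 1.218)))  # 412, i.e. over a 400 W limit
```

With the budget fixed at 400 W, the card instead sheds frequency, which is exactly the clock plummet described above.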


----------



## colourcode

Where in gpu-z do you see what chip it is, xtx, etc?
Found it, Advanced > Bios > Bootup message

Just got the Nitro+ SE OC... It feels quite cheap, and the screws on the top armor sit above the plate so it's got some wiggle room... the armor feels disconnected, lol.

The cooler is like half the weight of the Red Devil's, which felt like an absolute tank in comparison. But this one seems to cool more efficiently?

Coil whine: yep, although not as bad as the Devil's 👌


----------



## FishTankTower

colourcode said:


> Where in gpu-z do you see what chip it is, xtx, etc?
> Found it, Advanced > Bios > Bootup message
> 
> Just got the Nitro+ SE OC... It feels quite cheap, and the screws on the top armor sit above the plate so it's got some wiggle room... the armor feels disconnected, lol.
> 
> The cooler is like half the weight of the Red Devil's, which felt like an absolute tank in comparison. But this one seems to cool more efficiently?
> 
> Coil whine: yep, although not as bad as the Devil's 👌


Bootup Message NAVI21EXT Gaming XTX D412 301/320 L4105OAT.SMG 2021

This is what's listed under the Bootup message for my card.


----------



## LtMatt

lawson67 said:


> The core voltage will only stay at 1.250v if you disenable the deep sleep settings in features and while underload ie play games etc otherwise then it will settle back down while browsing the net etc, however if you set a higher core voltage you will notice your clocks drop as you have mentioned earlier as a higher core voltage requires you run a much higher powerlimit.
> 
> For example if i boost the core voltage to as little as 1.218v and set a PL of 400w running a max core clock of 2870mhz in a demanding game like metro exodus enhanced edition i will see my core clock plummet in parts of the map to as little as 2500mhz, where as @1.2v or under the using the same core clock it never drops below 2750mhz and is mostly around 2810mhz, this is because i need a much higher PL for a core voltage of 1.218v to sustain those clock's


This.

2758/2858 @1.212v, 399W power limit.

I was testing a stuttering issue in Resident Evil 3 on DX12 the other day (TL;DR: DX11 runs better but with lower FPS) and I was frequently bouncing off the 399 W power limit, dropping my clocks to 2700-2725 MHz. It really depends on the game tbh; they don't all suck down so much power. Games with lots of RT often crank the power draw up too.


----------



## colourcode

I don't understand core speed for AMD cards. This card is specified @ 2385 MHz IIRC, but it boosts as high as 2554 MHz in games?! My Red Devil stayed around 2300 (despite showing ~2500) and was nigh impossible to undervolt without crashing.

Is each card individually "tuned" beyond what the boxes say?


----------



## ZealotKi11er

colourcode said:


> I dont understand core speed for amd cards. This card is specified @2385 iirc. But it boosts as high as 2554 in games?! My red devil stayed around 2300 (despite showing ~2500) and was neigh impossible to undervolt without crashing.
> 
> Are each card individually "tuned" other than what the boxes say?


The clock AMD quotes is just the bare minimum. You have to look in Radeon Settings for your card's max boost clock.


----------



## 99belle99

ZealotKi11er said:


> I got HD 7970 > 290X > Fury X > Vega 64 > Radeon 7 > 5700 XT > 6900 XT.


I had a R9 290 > Fury X > Vega 56 > 5700 XT and now a 6900 XT.


----------



## EastCoast

LtMatt said:


> This.
> 
> 2758/2858 @1.212v, 399W power limit.


Hey, I have a question for you. Can you provide any insight into these reference 6900 XTs that have 2310 MHz memory?








A List Of Different Types Of 6900 XTXH GPUs (www.overclock.net):
AIR COOLED - ASRock 6900 XT OC Formula, boost 2475 MHz, 21-phase power design; MSI 6900 XT Gaming Z Trio, boost 2425 MHz, 14+3 phase power design; PowerColor 6900 XT Red Devil Ultimate, boost 2425 MHz, 14+2 phase power design; TOXIC AMD Radeon RX 6900 XT Air Cooled, boost 2425 MHz...

People looking for the 18 Gbps 6900 XTXH variants are reporting VRAM frequencies of 2310 MHz, which is an odd base frequency, and I can't find anything about it.


----------



## Enzarch

EastCoast said:


> Hey, have a question for you. Can you provide any insight to these reference 6900xt that have 2310Mhz memory?
> People are looking for the 18Gbps 6900xtxh variants but are reporting vram frequencies of 2310MHz? Which is an odd base frequency. And, I can't find anything about this.


What info are you looking for? The reference 6900 XT LC is the only card ATM with higher-clocked/binned memory chips, rated at 18 Gbps. It does not seem unusual at all for these to actually be clocked at ~18.4 Gbps.
Sites like TPU probably just inferred the 2250 MHz clock speed they have listed, as that is exactly 18 Gbps.
AMD probably just got what they could from Samsung with stable speeds/timings, and it may just be a trial run for a Navi2 refresh.


Enzarch said:


> *Click this quote to see my previous post here for more detail on the LC card*
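As a sanity check on those numbers: GDDR6 transfers 8 bits per pin per memory-clock cycle, so the reported clocks map straight onto the quoted data rates (a minimal sketch; the helper name is mine):

```python
# GDDR6 effective data rate per pin, in Gbps: memory clock (MHz) x 8 / 1000.
def gddr6_gbps(mem_clock_mhz: float) -> float:
    return mem_clock_mhz * 8 / 1000

print(gddr6_gbps(2250))  # 18.0  -> the LC card's rated 18 Gbps
print(gddr6_gbps(2310))  # 18.48 -> the reported 2310 MHz clock, ~18.4 Gbps
```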


----------



## J7SC

Speaking of GDDR6 and 6900XT, @CS9K :

Have you seen any relief yet for those of us with really well-running GDDR6 locked out of >2150 MHz? Per earlier comments, using a few MPT adjustments I can actually go higher (2260 MHz, effectively), but only with the GPU speed locking into safe mode. This remains the final frontier I'd like to breach with my otherwise well-tuned 6900XT XTX.


----------



## ZealotKi11er

J7SC said:


> Speaking of GDDR6 and 6900XT, @CS9K :
> 
> Have you seen any relief yet for us folk with really well-running GDDR6 locked out of > 2150 MHz ? Per earlier comments, using a few MPT adjustments, I can actually go higher (2260 efficicient), but only with GPU speed locking into safe mode. This remains the final frontier I like to breach with my otherwise well-tuned 6900XT XTX.


Only XTXH can go over 2150. Other cards are firmware locked.


----------



## J7SC

ZealotKi11er said:


> Only XTXH can go over 2150. Other cards are firmware locked.


...I know that, though it can be subject to some debate, noting that 1.175 V was also supposed to be an absolute firmware lock, and it isn't with the right kind of MPT settings.

Per below, I have in fact run the VRAM much faster on my XTX already, just not without the GPU going into lock-down speed. I just wanted to know whether there had been any progress on cracking this 'final frontier' since the last time I complained about it (typically at least once a month), not least as the GPU speeds (set and effective) of my little XTX continue to delight me since I gave it good custom cooling.


----------



## Conenubi701

I missed tinkering with this card. Made me fall in love with OCing again lmao


----------



## CS9K

J7SC said:


> ...I know that, though it can subject to some debate, noting that 1.175V was also supposed to be a an absolute firmware lock, and it isn't with the right kind of MPT settings.
> 
> Per below, I have in fact run the VRAM much faster on my XTX already, just not w/o the GPU going into lock-down speed. I just wanted to know whether there had been any progress made on cracking this 'final frontier' since the last time I complained about it (typically at least once a month ), not least as GPU speeds (set and effective) of my little XTX continue to delight me since I gave it good custom cooling.
> View attachment 2545408


Nope, no progress. Until someone familiar with bios editing at the hex-level can figure out how to bypass or satisfy the safety-checks that are built into RDNA2 and its drivers, then we're stuck with what we've got.


----------



## J7SC

CS9K said:


> Nope, no progress. Until someone familiar with bios editing at the hex-level can figure out how to bypass or satisfy the safety-checks that are built into RDNA2 and its drivers, then we're stuck with what we've got.


Thanks just the same - VRAM 'freedom' is the final piece of the puzzle, but still MIA... I don't really know why my card has a dual BIOS in the first place if I can't experiment (please don't say fan profile)...


----------



## CS9K

J7SC said:


> Thanks just the same - VRAM 'freedom' is the final piece of the puzzle, but still MIA ...don't really know why my card has a dual bios in the first place if I can't experiment (please don't say fan profile)...


I feel you. It was a glorious day when the VBIOS mod for RDNA GPUs came to be, but alas, AMD made RDNA2's VBIOS checks _much_ more complex than RDNA's, and we're at the mercy of people figuring this stuff out on their own, in their free time, unpaid. I was grateful to have the VBIOS mod on my RX 5600 XT, and kind of took for granted that we'd have the same by now with Radeon 6000. Alas~


----------



## J7SC

CS9K said:


> I feel you. It was a glorious day when the VBIOS mod for RDNA GPU's came to be, but alas, AMD made RDNA2's VBIOS checks _much_ more complex than RDNA's, and we're at the mercy of people figuring out this stuff on their own, in their free time, unpaid. I was grateful to have the VBIOS mod on my RX 5600 XT, and kind of took for granted that we'd have the same by now with Radeon 6000. Alas~


I have looked at the vbios of the XTX and XTXH in a hex editor before, but as you indicated, this isn't a simple one-line fix with RDNA2; there are multiple interdependent parameters. What gives me a glimmer of hope is that the AMD GPU's BIOS details live in the Windows registry, which MPT also uses without reflashing the vbios. Haven't hacked a Windows registry in a while


----------



## geriatricpollywog

ZealotKi11er said:


> I was playing around with my 290X and boy, that thing was loud. AMD got it very wrong back then. A $650 290X makes more sense than a $650 Fury X. They could have had everything with the 290X.


The FuryX was quiet though.


----------



## cromine11

Anybody know which waterblock is better for temps on a 6900 XT Red Devil? It's hard to find a comparison. The EK block is expensive, but I don't mind if it does better.


----------



## cfranko

Warzone is almost unplayable on my 6900 XT. The FPS is all over the place, and I can't sustain 165 FPS, which is my monitor's refresh rate; it always dips into the 130s. Does anyone have a similar issue?


----------



## Conenubi701

cfranko said:


> Warzone is almost unplayable on my 6900 xt, the FPS is all over the place and I can't sustain 165 FPS which is my monitors refresh rate it always dips to the 130's does anyone have a similiar issue?


What CPU/specs/resolution? How is 130+ unplayable? Do you mean you're getting hitching/freezing? Anywhere from 130-200+ is normal in Warzone.


----------



## cfranko

Conenubi701 said:


> What cpu/specs/resolution? How is 130+ unplayable? Do you mean you're getting hitching/freezing? anywhere from 130-200+ is normal on Warzone.


5900X and 1440p. It dips to 130, but it feels like 30; I can visibly see that the game is stuttering.
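For what it's worth, this is why an FPS counter can look fine while the game feels awful: the counter averages frame times, while a single long frame reads as a visible hitch. A tiny illustration with hypothetical frame times:

```python
# An average-FPS readout can hide stutter: one 33 ms frame among 7.7 ms
# frames barely moves the average, but shows up as an obvious hitch.
frames_ms = [7.7] * 129 + [33.3]               # hypothetical ~1-second capture
avg_fps = 1000 / (sum(frames_ms) / len(frames_ms))
worst_fps = 1000 / max(frames_ms)
print(round(avg_fps), round(worst_fps))        # 127 30
```

So "dips to 130 but feels like 30" usually points at frame-time spikes (frame pacing) rather than raw GPU throughput.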


----------



## ptt1982

cromine11 said:


> Anybody know which waterblock is better for temps on a 6900 XT Red Devil? It's hard to find a comparison. The EK block is expensive, but I don't mind if it does better.


Hi, I’m also in Japan, and bought the Alphacool block for my Red Devil. It was around 22K JPY. No complaints; temps are only as good as your mount, thermal paste, and loop are! I have a non-XTXH, so the temps went up to 110°C in TimeSpy with the air cooler, at 330 W. Now at 400 W, the hotspot peaks around 78°C during winter and 88°C during Tokyo's hottest summer, in TimeSpy. GPU temp goes to maybe an absolute max of 58°C. Typically edge is 50-54°C and hotspot 68-76°C in normal gaming (it doesn't need 400 W in games when vsynced to 4K60).

I'm running it now at max 172 W, undervolted heavily, and it stays under a 40°C hotspot with a GPU temp of 28-33°C or so, I forget.

My mount and paste application are not the best, and the apartment during summer is smoking hot. The non-Ultimate runs 10°C hotter than the Ultimate, but it also has a 15°C higher throttling limit at 110°C. If you have the Ultimate, your temps should be 5-15°C lower, depending on paste/mount/loop.


----------



## 99belle99

geriatricpollywog said:


> The FuryX was quiet though.


That is correct; I owned both, and the 290 was pretty loud while gaming. It didn't have zero-fan mode either. My Red Devil 56 was the first card I had with zero-fan mode; my 5700 XT had it, as does this current 6900 XT.


----------



## ZealotKi11er

99belle99 said:


> That is correct as I owned both and the 290 was pretty loud while gaming. And didn't have zer0 fan mode either. My red devil 56 was the first card I had with zer0 fan mode as my 5700 XT had it as well as this current 6900 XT.


The Fury X was a fantastic GPU in terms of cooling. What sucked about it was the OC potential and the 4 GB of HBM. The Fury X really needed more clock speed, 1200-1400 MHz, to beat Nvidia's Maxwell, which was overclocking very well at the time.


----------



## cromine11

ptt1982 said:


> Hi, I’m also in Japan, and bought the Alphacool block for my red devil. It was around 22K JPY. No complaints, temps are as good as your mount and thermal paste, and loop is! I have a non xtxh so the temps went up to 110C in timespy with the air cooler, at 330W. Now at 400W, hotspot peaks around 78C during winter, and 88C during Tokyo’s hottest summer time, in timespy. GPU temp goes to maybe absolute max 58C. Typically edge is 50-54C and hotspot 68-76C in normal gaming (it doesn’t need 400W in games when vsynced to 4K60.)
> 
> I’m running it now max 172W undervolted heavily, and it stays under 40C hotspot and gpu temp 28-33C or so, I forgot.
> 
> My mount and paste application are not the best, and apartment during summer is smoking hot. The non-ultimate runs 10C hotter than the Ultimate, but it also has 15C higher throttling limit at 110C. If you have the ultimate your temps should be lower 5-15C, depending on paste/mount/loop.


That sounds pretty awesome. Just wish I could find someone with the ek block to compare it to.


----------



## Godhand007

Can someone clarify with certainty whether low FPS, hitching, stuttering, etc. are related to a power-limited GPU (assume no other bottlenecks)? I have heard this many times but haven't encountered it myself. My GPU is limited to ~400W; I hit this limit frequently in many games and the card just downclocks without any frame time issues. Or is it game dependent, i.e. are some games able to handle dynamic clocks better?


----------



## ZealotKi11er

Has nothing to do with power limit.


----------



## Scorpion667

I just had the worst Sunday ever. Discovered last night that Time Spy was shutting down my PC with my 6900XT Toxic and an EVGA 1300W PSU.

I replaced the PSU with a Seasonic 1000W Titanium (TX-1000) today, but same issue!

Tried:
-clearing CMOS
-Reinstalling OS
-reseating ALL connections
-connecting my Apex Xi supplemental PCIE power connector
-reseating RAM
-reseating CPU
-resize BAR disabled
-less torque on CPU cooler
-Unplugging all non-necessary devices
-air compressor on mobo and in PCIE slot

Took the card apart and noticed one corner of the chip wasn't making contact with the waterblock.

Repasted the card and successfully ran 25 loops of Time Spy at 100MHz higher than the clock it had been shutting down at (2800 vs 2700MHz).

Spent $385 CAD on a new PSU and countless hours troubleshooting, and the solution was a simple repaste... Learn from my mistakes!


----------



## Godhand007

Scorpion667 said:


> I just had the worst Sunday ever. Discovered last night that Timespy was shutting down my PC with 6900XT Toxic and EVGA 1300W PSU
> 
> I replaced the PSU with Seasonic 1000w titanium (TX-1000) today but same issue!
> 
> Tried:
> -clearing CMOS
> -Reinstalling OS
> -reseating ALL connections
> -connecting my Apex Xi supplemental PCIE power connector
> -reseating RAM
> -reseating CPU
> -resize BAR disabled
> -less torque on CPU cooler
> -Unplugging all non-necessary devices
> -air compressor on mobo and in PCIE slot
> 
> Took the card apart and noticed one corner of the chip wasn't making contact with the waterblock:
> 
> View attachment 2545931
> 
> View attachment 2545932
> 
> 
> Repasted card and successfully ran 25 loops of Timespy at 100Mhz higher than what it was shutting down at (2700 vs 2800Mhz).
> 
> Spent $385 CAD on new PSU and countless hours troubleshooting and the solution was a simple repaste... Learn from my mistakes!
> 
> View attachment 2545934
> 
> View attachment 2545935
> 
> View attachment 2545936
> 
> View attachment 2545937
> 
> View attachment 2545938


What were the temps when it was shutting down?


----------



## Scorpion667

Godhand007 said:


> What were the temps when it was shutting down?


50c GPU, 96c GPU hotspot


----------



## Godhand007

Scorpion667 said:


> 50c GPU, 96c GPU hotspot


Interesting. If your diagnosis is correct then AMD temp sensors are not covering the whole GPU die. Bad design in that case.


----------



## Godhand007

Godhand007 said:


> Can someone clarify with certainty whether low FPS, hitching, stuttering etc. is related to power limited GPU (assume no other bottlenecks) ? I have heard this many times but haven't encountered this myself. My GPU is limited to ~400W; I hit this limit in many games frequently and the card just downclocks without any frame time issues? or is it game depended i.e. some games area able to handle dynamic clocks better?


Any opinions on this topic?


----------



## CS9K

Scorpion667 said:


> I just had the worst Sunday ever. Discovered last night that Timespy was shutting down my PC with 6900XT Toxic and EVGA 1300W PSU
> 
> I replaced the PSU with Seasonic 1000w titanium (TX-1000) today but same issue!
> 
> Tried:
> -clearing CMOS
> -Reinstalling OS
> -reseating ALL connections
> -connecting my Apex Xi supplemental PCIE power connector
> -reseating RAM
> -reseating CPU
> -resize BAR disabled
> -less torque on CPU cooler
> -Unplugging all non-necessary devices
> -air compressor on mobo and in PCIE slot
> 
> Took the card apart and noticed one corner of the chip wasn't making contact with the waterblock:
> 
> View attachment 2545931
> 
> View attachment 2545932
> 
> 
> Repasted card and successfully ran 25 loops of Timespy at 100Mhz higher than what it was shutting down at (2700 vs 2800Mhz).
> 
> Spent $385 CAD on new PSU and countless hours troubleshooting and the solution was a simple repaste... Learn from my mistakes!
> 
> View attachment 2545934
> 
> View attachment 2545935
> 
> View attachment 2545936
> 
> View attachment 2545937
> 
> View attachment 2545938


Whoo buddy, yeah, your GPU cooked that stock thermal paste right out. That looks like some thick stuff too, so it's wild to see it pumped out. Though it _also_ looks pre-applied to the AIO, so who knows _what_ the overall quality was.

I'm glad that a re-paste got you back in action!


----------



## alceryes

Are people getting 2600+MHz on air cooling?
I'm currently undervolting mine with a slight core/mem overclock and mem fast timings (see below). Seems to be rock solid. This is the reference 6900 XT. I did do the easy PCB back thermal pad mod.


----------



## alceryes

Scorpion667 said:


> I just had the worst Sunday ever. Discovered last night that Timespy was shutting down my PC with 6900XT Toxic and EVGA 1300W PSU
> 
> I replaced the PSU with Seasonic 1000w titanium (TX-1000) today but same issue!
> 
> Tried:
> -clearing CMOS
> -Reinstalling OS
> -reseating ALL connections
> -connecting my Apex Xi supplemental PCIE power connector
> -reseating RAM
> -reseating CPU
> -resize BAR disabled
> -less torque on CPU cooler
> -Unplugging all non-necessary devices
> -air compressor on mobo and in PCIE slot
> 
> Took the card apart and noticed one corner of the chip wasn't making contact with the waterblock:
> 
> View attachment 2545931
> 
> View attachment 2545932
> 
> 
> Repasted card and successfully ran 25 loops of Timespy at 100Mhz higher than what it was shutting down at (2700 vs 2800Mhz).
> 
> Spent $385 CAD on new PSU and countless hours troubleshooting and the solution was a simple repaste... Learn from my mistakes!
> 
> View attachment 2545934
> 
> View attachment 2545935
> 
> View attachment 2545936
> 
> View attachment 2545937
> 
> View attachment 2545938


What kind of paste are you using? There's definitely excess out the sides on both the copper block and GPU core substrate. That's not a good sign if it 'boiled' out.


----------



## J7SC

alceryes said:


> Are people getting 2600+MHz on air cooling?
> I'm currently undervolting mine with a slight core/mem overclock and mem fast timings (see below). Seems to be rock solid. This is the reference 6900 XT. I did do the easy PCB back thermal pad mod.
> 
> View attachment 2546101


Short answer, yes - but...

My air-cooled 6900XT ran totally stock on day one, without MPT or such, for a few bench runs. The 'but' was fan speeds over 3800 rpm  ...the card is now water-cooled (450W+) with thermal putty on the VRAM and Gelid OC paste on the die.


----------



## Swenzzon86

After a lot of testing and tweaks, I finally was able to break 27K


----------



## alceryes

J7SC said:


> Short answer, yes - but...
> 
> My air-cooled 6900XT totally stock on day one w/o MPT or such below after a few bench runs. The 'but' was the fan speeds > 3800 rpm  ...card is now water-cooled (450W+) with thermal putty on the VRAM and Gelid OC paste on the die.
> View attachment 2546105


How loud was that compared to say, the vacuum-cleaner-blower fan Sapphire had on their Vega 64?


----------



## 99belle99

Swenzzon86 said:


> After alot of testing and tweaks finaly was able to break 27K
> View attachment 2546220


A picture of a computer screen? Screenshot, baby, and a link to the result.


----------



## Swenzzon86

99belle99 said:


> Picture of a computer screen. Screen shot baby and a link to the result.











www.3dmark.com





There you have it


----------



## gtz

Swenzzon86 said:


> www.3dmark.com
> 
> 
> 
> 
> 
> There you have it


Damn son


----------



## 99belle99

Swenzzon86 said:


> www.3dmark.com
> 
> 
> 
> 
> 
> There you have it


Wow, that's some average clock, and the temps were so low for those clocks. What have you got, a chiller or something?


----------



## SoloCamo

alceryes said:


> How loud was that compared to say, the vacuum-cleaner-blower fan Sapphire had on their Vega 64?


The AMD branded 6900XT cooler maxed out near 2700rpm for me and compared to my Sapphire blower cooled Vega 64 I still have, I'd say 100% on the 6900XT matches around 50% fan speed noise on the Vega 64. They've made dramatic improvements.
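As a rough rule of thumb from the fan affinity laws (radiated sound power scales with roughly the fifth power of rpm), halving fan speed drops noise by about 15 dB, which lines up with how dramatic that difference sounds. A sketch, treating the exponent as an approximation and the rpm figures as hypothetical:

```python
import math

# Rough rule of thumb from the fan affinity laws: radiated sound power scales
# with ~rpm^5, so the dB change between two speeds is 50*log10(rpm2/rpm1).
# The exponent is an approximation; real fans vary.
def db_delta(rpm_from: float, rpm_to: float) -> float:
    return 50 * math.log10(rpm_to / rpm_from)

# e.g. a blower at full tilt vs the same blower at half speed (hypothetical rpms)
print(round(db_delta(4000, 2000), 1))  # about -15.1 dB
```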


----------



## alceryes

SoloCamo said:


> The AMD branded 6900XT cooler maxed out near 2700rpm for me and compared to my Sapphire blower cooled Vega 64 I still have, I'd say 100% on the 6900XT matches around 50% fan speed noise on the Vega 64. They've made dramatic improvements.


Good to hear. When I want to eke out a few more FPS from my 6900 XT in 4 years I can just bump up the fans since I wear headphones.

The above is a total lie, as I know I won't be able to stop myself from upgrading in 2 years. The only thing currently holding me back is that I want an AM5 system.


----------



## J7SC

alceryes said:


> How loud was that compared to say, the vacuum-cleaner-blower fan Sapphire had on their Vega 64?





alceryes said:


> Good to hear. When I want to eke out a few more FPS from my 6900 XT in 4 years I can just bump up the fans since I wear headphones.
> 
> The above is a total lie as I know I won't be able to stop myself from upgrading in 2 years . The only thing that's currently holding me back is that I want an AM5 system.


...still less loud than the blower-style on max; still, 3 fans at 3800 rpm on my 6900XT is not something I wanted to listen to a lot... but I usually water-cool my GPUs anyway.


----------



## EastCoast

*alceryes, J7SC*

What driver are you using?


----------



## skline00

You really have that 5900X pushed!


----------



## alceryes

I'm using the old 21.8.2.
From what I've read, some of the newer drivers hinder performance/overclocking. Until driver release notes include a fix or performance increase for a game I play, I'll stay with this golden oldie.


----------



## ZealotKi11er

alceryes said:


> I'm using the old 21.8.2.
> From what I've read, some of the newer drivers hinder performance/overclocking. Until driver release notes include a fix or performance increase, for a game I play, I'll stay with this golden oldie.


You never lose OC potential. New drivers extract more performance, so they expose an unstable OC.


----------



## J7SC

EastCoast said:


> *alceryes
> J7SC *
> 
> What driver are you using?


...latest WHQL now - and whatever the current driver was for the air-cooled pic when I got the card back in May last year


----------



## Scorpion667

alceryes said:


> What kind of paste are you using? There's definitely excess out the sides on both the copper block and GPU core substrate. That's not a good sign if it 'boiled' out.


That's the factory paste; it's not great😂. I just used some Kryonaut that I had lying around and the spread was good (took off the first mount to check). What pastes would you recommend between the GPU die and cold plate?


----------



## 6u4rdi4n

Scorpion667 said:


> That's the factory paste; it's not great😂. I just used some Kryonaut that I had lying around and the spread was good (took off the first mount to check). What pastes would you recommend between gpu die and coldplate?


Gelid GC-Extreme has been recommended a lot since it's not that prone to the pump-out effect. I've used GC-Extreme in the past myself, and can highly recommend it. I've been using Thermal Grizzly Kryonaut on my 6900 XT with a Bykski water block for about 10 months now and temperatures are the same as when I assembled it, so that's also recommended based on personal experience.


----------



## alceryes

ZealotKi11er said:


> You never lose OC potential. New drivers extract more performance so it exposes unstable OC.


While this may be true, I've heard complaints about a 'rock solid' 6900 XT that starts crashing and getting BSODs from just a driver update. These appear to be somewhat technical users, who report that after installing the new driver and re-applying their 'rock solid' OC settings, the card is no longer stable. You only have to go back a dozen pages in this thread to see these complaints.


----------



## SoloCamo

6u4rdi4n said:


> Gelid GC-Extreme has been recommended a lot since it's not that prone to the pump-out effect. I've used GC-Extreme in the past myself, and can highly recommend it. I've been using Thermal Grizzly Kryonaut on my 6900 XT with a Bykski water block for about 10 months now and temperatures are the same as when I assembled it, so that's also recommended based on personal experience.


Yup, I've been using GC-Extreme for years and recently moved to Kryonaut. Can't go wrong with either, gpu or cpu.


----------



## alceryes

SoloCamo said:


> Yup, I've been using GC-Extreme for years and recently moved to Kryonaut. Can't go wrong with either, gpu or cpu.


Ditto.
It's what I used for my PCB back thermal pad mod. Brought several temps down 4+C. My hotspot temp is still lower than before I started overclocking the core and mem.


----------



## Lavacon

How's everyone's 6900xt handling Fortnite?

I have a Red Devil and had to lower the quality from ultra to high because I was barely getting 30fps @ 1440

Edit: Turns out it's just not boosting at all in DX11. Clocks look as they should in DX12.
Looks like forcing the min clock via AMD software works for dx11. Still odd.

Any suggestions?


----------



## SoloCamo

Lavacon said:


> How's everyones 6900xt handling Fortnite?
> 
> I have a Red Devil and had to lower the quality from ultra to high because I was barely getting 30fps @ 1440
> 
> Didn't seem to be running super hot. Probably a random setting messing things up.
> 
> Any suggestions?


I'll test. I got better frames with my Radeon VII at 4K than you reported at 1440p.


----------



## Scorpion667

6u4rdi4n said:


> Gelid GC-Extreme has been recommended a lot since it's not that prone to the pump-out effect. I've used GC-Extreme in the past myself, and can highly recommend it. I've been using Thermal Grizzly Kryonaut on my 6900 XT with a Bykski water block for about 10 months now and temperatures are the same as when I assembled it, so that's also recommended based on personal experience.


Thanks! Just ordered 5 tubes. I had used GC Extreme on my CPU until I ran out a couple of years ago and was extremely happy with it. It seemed to do better than others on surfaces with imperfections (super concave 3930K IHS).

Still no further shutdowns after replacing the paste, but a 98C hotspot (XTXH chip) in Time Spy using a 360mm AIO is far from ideal. I spoke to my friend who has the same 6900XT Toxic EE and he sees the same massive edge/hotspot delta, which is sus.

Gonna try the washer mod with fresh GC Extreme once it arrives.
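One way to put numbers on that edge/hotspot delta is degrees per watt; a smaller figure suggests a better die-to-cooler contact. A back-of-envelope sketch using figures from posts in this thread (only indicative, since sensors, loads, and ambient all differ):

```python
# Back-of-envelope mount check: edge-to-hotspot delta divided by board power.
# The figures below come from posts in this thread and are only indicative.
def c_per_watt(delta_t_c: float, power_w: float) -> float:
    return delta_t_c / power_w

print(c_per_watt(96 - 50, 400))  # 50C edge / 96C hotspot before the repaste: 0.115 C/W
print(c_per_watt(15, 400))       # a healthy ~15C delta at 400W on water: 0.0375 C/W
```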


----------



## CfYz

cfranko said:


> Warzone is almost unplayable on my 6900 xt, the FPS is all over the place and I can't sustain 165 FPS which is my monitors refresh rate it always dips to the 130's does anyone have a similiar issue?





cfranko said:


> 5900x and 1440p. it dips to 130 but feels like 30 I can visibly see that the game is stuttering.


Ultra settings? I have everything on low + textures/filtering on ultra, SMAA T2X - 130-150-180 FPS (plane 160), and this is only [email protected]. MW on small maps is 200-250+ FPS. Maybe try setting the minimum frequency to 1500-2000MHz? And WZ itself sometimes has a laggy lobby; did you try Resurgence, where the map is smaller?


----------



## EastCoast

Can we get some 6900XT owners to test out Warzone? I've read a lot of reports that there's stuttering in that game if you use anything other than low texture settings.


----------



## CfYz

EastCoast said:


> Can we get some 6900xt owners to test out Warzone? I read a lot of reports that their's stuttering in that game if you use anything other then low texture settings.


If you play a lot of WZ you probably know who TGD is, so we can use this as baseline results:


----------



## EastCoast

CfYz said:


> If you playin a lot of WZ you should probably know who TGD is, so we can use this as a baseline results:


Yes, I am familiar with him. This is the 1st PC review I've seen from him that was released today. Good find.


----------



## alceryes

EastCoast said:


> Can we get some 6900xt owners to test out Warzone? I read a lot of reports that their's stuttering in that game if you use anything other then low texture settings.


Isn't Warzone very CPU intensive?
Could be that some CPU OCs aren't as stable as people think they are but they're blaming it on the GPU.


----------



## EastCoast

alceryes said:


> Isn't Warzone very CPU intensive?
> Could be that some CPU OCs aren't as stable as people think they are but they're blaming it on the GPU.


Looks like a memory leak. I suggest using ISLC. 
---------------

BTW, a new driver dropped: 22.2.1


https://www.amd.com/en/support


----------



## Scorpion667

I think I have a way of getting my hands on the AMD LC (XTXH) variant. Does anyone have temp numbers for the stock cooler? I have searched and cannot find any reviews!


----------



## Godhand007

Scorpion667 said:


> I think I have a way of getting my hands on the AMD LC (XTXH) variant. Does anyone have temp numbers for the stock cooler? I have searched and cannot find any reviews!


You might want to wait on that. It seems the 6950XT is on the horizon.


----------



## Godhand007

alceryes said:


> Are people getting 2600+MHz on air cooling?
> I'm currently undervolting mine with a slight core/mem overclock and mem fast timings (see below). Seems to be rock solid. This is the reference 6900 XT. I did do the easy PCB back thermal pad mod.
> 
> View attachment 2546101


I can do ~2800 MHz on Sapphire reference with a power limit of ~390W (with vmin trick).


----------



## Godhand007

Godhand007 said:


> Can someone clarify with certainty whether low FPS, hitching, stuttering etc. is related to power limited GPU (assume no other bottlenecks) ? I have heard this many times but haven't encountered this myself. My GPU is limited to ~400W; I hit this limit in many games frequently and the card just downclocks without any frame time issues? or is it game depended i.e. some games area able to handle dynamic clocks better?


So no one knows/has experience with this topic?


----------



## CfYz

Scorpion667 said:


> Does anyone have temp numbers for the stock cooler?


The 90C hotspot limit is hardcoded, and fan speed maxes out at 53% (~1900RPM) by default. So regardless of manual PL or frequency settings, the firmware tries to stay at a 90C hotspot temperature. It sometimes exceeds that, up to 92-97C HS, but no more. Memory is 70-80C and VRM ~70-80C according to HWiNFO. Anything more specific you want to know?
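A rough illustration of that behavior (strictly a toy sketch, not AMD's actual firmware logic): ramp the fan toward its 53% cap first, then shed clocks to hold the 90C hotspot target.

```python
# Toy sketch only -- NOT AMD's actual firmware. It models the behaviour
# described above: ramp the fan (capped at ~53% by default) and, failing
# that, drop clocks to chase the hardcoded 90C hotspot target.
HOTSPOT_TARGET_C = 90
FAN_CAP_PCT = 53          # default maximum fan duty cycle (~1900 RPM)

def control_step(hotspot_c: float, fan_pct: float, clock_mhz: float):
    """One iteration of a simplified proportional controller."""
    error = hotspot_c - HOTSPOT_TARGET_C
    if error > 0:
        if fan_pct < FAN_CAP_PCT:
            fan_pct = min(FAN_CAP_PCT, fan_pct + 2 * error)   # spin up first
        else:
            clock_mhz -= 15 * error                           # then throttle
    elif error < 0 and fan_pct > 0:
        fan_pct = max(0, fan_pct + error)                     # relax the fan
    return fan_pct, clock_mhz

fan, clk = 30.0, 2500.0
for temp in (85, 92, 95, 96, 95, 91):   # hypothetical hotspot readings
    fan, clk = control_step(temp, fan, clk)
print(f"fan={fan:.0f}% clock={clk:.0f}MHz")
```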


----------



## ZealotKi11er

Godhand007 said:


> So no one knows/has experience with this topic?


What is your cooling situation? Is it water cooling, a custom card, or the reference cooler that you have set to 400W?


----------



## Godhand007

ZealotKi11er said:


> What is ur cooling situation? Is it water cooling, custom card or reference cooler that you have it set to 400w?


Reference air cooler. 108C hotspot temp, ~1.23V (per HWiNFO) during 3D load, and ~2850MHz clocks unless limited by power consumption.


----------



## ZealotKi11er

Godhand007 said:


> Reference air cooler. 108c Hotspot Temp. ~1.23v (as per HW info) during 3d load and ~2850Mhz clocks unless limited by power consumption .


There is your problem. The reference cooler is not designed to handle 1.23V, 400W, and a 108C hotspot. You are basically running at the limit. You are causing the GPU to trigger throttling to protect itself, which is what is causing the stutter for you.


----------



## Godhand007

ZealotKi11er said:


> There is ur problem. Reference cooler is not design to handle 1.23v, 400w and run 108C Hotspot. You are basically running at limit. You are causing the gpu to trigger throttling to protect itself which is causing the stutter for you.





> Godhand007 said:
> Can someone clarify with certainty whether *low FPS, hitching, stuttering etc. is related to power limited GPU (assume no other bottlenecks) ? I have heard this many times but haven't encountered this myself.* My GPU is limited to ~400W; I hit this limit in many games frequently and the card just downclocks without any frame time issues? or is it game depended i.e. some games area able to handle dynamic clocks better?


No, throttling is not my problem. My question was whether throttling causes FPS drops, hitching, etc., and whether someone can confirm that. It seems that for me it is not an issue, or at least I can't confirm whether it is.


----------



## alceryes

Godhand007 said:


> No, Throttling is not my problem. My question was whether throttling causes FPS drops, hitching etc. or not and can someone confirm the same? It seems that for me it is not an issue or at least I can't confirm it whether it is.


Yes, throttling can cause FPS drops/hitching.


----------



## alceryes

Godhand007 said:


> Reference air cooler. 108c Hotspot Temp. ~1.23v (as per HW info) during 3d load and ~2850Mhz clocks unless limited by power consumption .


Wow! You're running pretty close to tilt. I'd be careful with that voltage and hotspot temp. Maybe it works fine now, but you're burning the life out of that GPU.
A candle lit at both ends burns twice as bright but lasts only half as long.

I'm not that adventurous. For now, I'm fine with my mild OC, fans running silent, and HS temp below 95C.


----------



## Godhand007

alceryes said:


> Yes, throttling *can* cause FPS drops/hitching.


So it's not a given. Is it game dependent? I haven't encountered them in many of the games I play. A recent example would be Halo Infinite MP: clocks are always in flux to a large extent without any stutters, etc. (there are ping-related hitches sometimes, I think).


----------



## Godhand007

alceryes said:


> Wow! You're running pretty close to tilt. I'd be careful with that voltage and HS temp. Maybe it works fine now but you're burning the life out of that GPU.
> I'm not that adventurous. For now, I'm fine with my mild OC, fans running silent, and HS temp below 95C.


Yeah. I know it's close to the limit and there will definitely be some chip degradation above normal. Though I don't share the concerns about temps that many have here; I have run many GPUs close to their thermal limit without any problems. Admittedly, I buy/sell GPUs every ~2 years, so I don't know what happens to them afterwards.


> Candle lit at both ends - twice as bright but will only last half as long.


I don't think it's lit at both ends, maybe just a little breeze over it.


----------



## alceryes

Godhand007 said:


> So its not a given. Is it game dependent ? I haven't encountered them in many of the games that I play. Recent example would be Halo Infinite MP. Clocks are always in flux to a large extent without any stutters etc. (There are ping related hitches some times, I think).


Yes.
Each game is different in how it utilizes the CPU/GPU/RAM. Even within an individual game, settings can be changed to create bottlenecks where there were none before.
Online games add an additional variable to the diagnosis. Always test with offline games to rule out latency/internet/game server connectivity as a possibility.
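For offline testing, one way to make "stutter" measurable rather than subjective is to analyze a frametime log (tools like CapFrameX or OCAT can export one). A minimal sketch; the 2x-median spike rule is my own heuristic, not an official definition:

```python
# Hedged sketch: quantify "stutter" from a list of frametimes in milliseconds
# (as exported by tools like CapFrameX/OCAT). The 2x-median spike rule is an
# arbitrary but common heuristic, not an official definition.
def stutter_report(frametimes_ms):
    n = len(frametimes_ms)
    srt = sorted(frametimes_ms)
    median = srt[n // 2]
    avg_fps = 1000 * n / sum(frametimes_ms)
    # "1% low": average FPS over the worst 1% of frames
    worst = srt[-max(1, n // 100):]
    one_pct_low_fps = 1000 * len(worst) / sum(worst)
    spikes = sum(1 for ft in frametimes_ms if ft > 2 * median)
    return {"avg_fps": round(avg_fps, 1),
            "1%_low_fps": round(one_pct_low_fps, 1),
            "spikes": spikes}

# 99 smooth 7 ms frames plus one 40 ms hitch: decent average FPS, but the
# 1% low and spike count expose the hitch.
times = [7.0] * 99 + [40.0]
print(stutter_report(times))
```

A high average with a poor 1% low is exactly the "FPS looks fine but it feels like 30" situation described earlier in the thread.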


----------



## Godhand007

alceryes said:


> Yes.
> Each game is different on how it utilizes the CPU/GPU/RAM. Even within an individual game, settings can be changed to create bottlenecks where there were none before.
> Online games add an additional variable to the diagnosis. Always test with offline games to remove latency/internet/game server connectivity as a possibility.


I understand your point, but we can still try to find patterns, right? What offline game (with a built-in benchmark) would you suggest for that?


----------



## alceryes

Time Spy
Horizon Zero Dawn

You can really use any game where you experience this throttling. Just have HWiNFO64 running in the background (sensors only, logging on), then go through the log to see whether any kind of throttling shows up.


----------



## ZealotKi11er

The Time Spy stress test. It will tell you how stable the card is based on temp and how much the clock/FPS drops from run to run.
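For reference, the stress test's "Frame Rate Stability" number is roughly the worst loop's average FPS relative to the best loop's, with 97% as 3DMark's pass threshold. A back-of-envelope version, with hypothetical loop averages:

```python
# Back-of-envelope version of what the Time Spy stress test reports:
# frame rate stability as the worst loop relative to the best loop
# (3DMark's pass threshold is 97%). Loop averages below are hypothetical.
def frame_rate_stability(loop_fps):
    return 100.0 * min(loop_fps) / max(loop_fps)

healthy = [112.3, 111.9, 112.1, 111.5]
throttling = [112.3, 108.0, 101.4, 96.7]

print(f"healthy:    {frame_rate_stability(healthy):.1f}%")
print(f"throttling: {frame_rate_stability(throttling):.1f}%")
```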


----------



## Godhand007

alceryes said:


> Time Spy
> Horizon Zero Dawn
> 
> You can really use any game where you experience this throttling. Just have HWiNFO64 running in the background (sensors only, logging on). Then go through the log to see if you see any kind of throttling.


I have Time Spy. No FPS hitching/stutters as far as I can see. Here are the results: I scored 17,147 in Time Spy.


----------



## Godhand007

ZealotKi11er said:


> Time Spy stability test. It will tell you how stable the card is based on temp and how much the clk/fps drops from run to run.


As per these results, the card throttles a lot and FPS drops, but no stutter/hitching is present.


----------



## ZealotKi11er

Godhand007 said:


> I have time spy. No FPS hitching/stutters as far as I can see. Here are the results : I scored 17 147 in Time Spy


Oh GOD, that score... people here can tell you what your problem is.


----------



## Godhand007

ZealotKi11er said:


> Oh GOD that score..... people here can tell u what ur problem is.


You have misunderstood my question from the beginning. My issue is not with throttling but with the effect of throttling as it pertains to stutter/hitches specifically. I know the score is lower due to unstable clocks, but it doesn't matter for the games I play, as I don't encounter such a huge amount of throttling in most of them.


----------



## ZealotKi11er

Godhand007 said:


> You have misunderstand my question form the beginning. My issue is not with throttling but the effect of throttling as it pertain to stutter/hitches specifically. I know the score is lower due to unstable clocks but it doesn't matter for the games that I play as I don't encounter such a huge amount of throttling with most games I play.


Games throttle differently, but looking at the score, you are throttling way too hard; in games you will feel stutter. It's normal. Have you tried running stock, like it was intended?


----------



## Godhand007

ZealotKi11er said:


> Games throttle differently but looking at the score u are throttling way too hard *that in games u will feel stutter.* Its normal. Have you tried to run stock like it was intendent?


That's the point. I don't feel any stutter/hitches in games. There are normal FPS drops when an area in a game is more demanding, but no stutter or hitches (the frame time graph is a ~straight line). Running at stock is the same, with a corresponding performance decrease.


----------



## alceryes

Godhand007 said:


> That's the point. I don't feel any stutter/hitches in games. There are normal FPS drops if an area in a game is more demanding but no stutter or hitches( frame time graph is a ~straight line). Running at stock is same with corresponding performance decrease.


What is your goal? Your 2800MHz clock setting isn't doing you any good. That's basically a stock settings score.
Give us a link to your Time Spy at stock speeds.


----------



## ZealotKi11er

alceryes said:


> What is your goal? Your 2800MHz clock setting isn't doing you any good. That's basically a stock settings score.
> Give us a link to your Time Spy at stock speeds.


A stock 6900 XT should be getting more than 17.xK.


----------



## 99belle99

ZealotKi11er said:


> Stock 6900 XT should be getting more than 17.xK


I forget what stock is, but I remember last January, after I first picked up my 6900 XT and ran Time Spy, I kept getting low scores and couldn't figure out why. I had forgotten I capped the frame rate at 120fps in Radeon settings, as I game on a 120Hz 4K TV.


----------



## Godhand007

alceryes said:


> What is your goal? Your 2800MHz clock setting isn't doing you any good. That's basically a stock settings score.
> Give us a link to your Time Spy at stock speeds.





> Godhand007 said:
> Can someone clarify with certainty whether *low FPS, hitching, stuttering etc. is related to power limited GPU (assume no other bottlenecks) ? I have heard this many times but haven't encountered this myself.* My GPU is limited to ~400W; I hit this limit in many games frequently and the card just downclocks without any frame time issues? or is it game depended i.e. some games area able to handle dynamic clocks better?


I have been clear since the beginning about what I wanted to know. My ask is a bit nuanced, and I get now why there are no clear answers to this question on the internet.

To be clear, I do not have any stutters/hitches in my games with my current settings due to the power limit. In Time Spy as well, the frame time graph is straight without any stutters. As for the low score on TS, it is expected, as the power limit is being reached.


----------



## ZealotKi11er

Godhand007 said:


> I have been clear since the beginning as to what I wanted to know. My ask is a bit nuanced and I get now why there are no clear answers to this question on the internet.
> 
> To be clear I do not have any stutters/hitches in my games with my current settings due to PW limit. With Time spy as well, the frame time graph is straight without any stutters. As for low score on TS, it is expected as PW limit is being reached.


No, the power limit is not being reached. You might get those clocks and power for 1-2s, and then you run in a throttled state the entire time. You are basically killing the card, and if you keep playing like this it will be time to RMA.


----------



## Scorpion667

CS9K said:


> Howdy! Let's check a few things first, because SOMEthing is preventing your water block from contacting the GPU's die properly. It could be one of many things, or a combination of all of them
> 
> - First off, did you use the thermal pads that came with the water block? If not, why? If not, do you still have them? If you have them, dismount and use them.
> Rationale: With the reference card and EK's Quantum Vector blocks, for example, one can NOT use aftermarket thermal pads like Thermal Grizzly Minus Pad 8, because the thermal pads that come with the EK block are VERY compressible, where TG's MP8 pads are not_._ Using TG's pads on a reference card with any cooler will prevent the die from contacting the block properly
> 
> - Thermal Paste Application. I personally use Gelid GC-Extreme for bare-die applications. Perhaps read up on how to apply it, then give it a try before going with thermal putty
> Rationale: I learned this one the hard way with prior hardware, and thankfully haven't had any trouble with my RX 6900 XT reference w/ek block, nor has the b/f had trouble with his 3080FE on water. A thin paste and/or oil-based emulsion pastes like NT-H1, Kryonaut, etc.etc.etc., don't work well on bare-die applications that get really hot, as the heat cycles will "pump out" the grease over time. The thicker, non-oil-based emulsion of pastes like Hydronaut, GC-Extreme, Ectotherm, et. al, are much more resistant to pump-out than their oil based counterparts. However, you can't have clearance issues as described above when using thicker pastes, for you have to use a spreader tool and apply a layer of GC-Extreme over the die, with enough paste that you can't see the die anymore, but _no_ more. An uneven spreading is unforgiving when using a water block, for the paste doesn't flow very much at all, so it won't fill in dips in the paste, especially if one is left in the center of the die.
> 
> But! If you get the application of GC-Extreme right, you will be blessed with 15C delta-T at 400W card power draw, and (my) max hotspot temp of 80C in TS #2, which it never gets close to in 4k120 gaming loads.
> 
> TL;DR: Take a step back. Take stock of how you've mounted your water block, and which thermal pads and thermal paste you've used in that process. Think logically about temperatures: There's either a flow problem, or a die-contact problem (more-likely given your delta-T). Figure out why you have that contact issue, take some of the advice above and in other posts to heart, and go from there.
> 
> I wish the best of luck to you. Let us know what you find 💗


This post fixed my TimeSpy thermal shutdown problem. THANK YOU!!!

TL;DR - dropped hotspot temps *20-25c* on a 6900XT that was seeing thermal shutdowns (108c on XTXH) in TS.

Card: Sapphire 6900XT Toxic EE XTXH (2730Mhz out of box, 360mm AIO cooler)

*The issue:*
-Observed last Saturday that TS was shutting down the PC. Card running stock thermal paste and 97-98c hotspot in TS per HWinfo
-Replaced PSU (EVGA 1300G2 to Seasonic TX-1000), plugged in mobo PCIe supplemental power but no change
-Replaced thermal paste with *3 year old Kryonaut - no more shutdowns but 97c hotspot
-Repasted again to try lowering hotspot and things only got worse from here across many repastes with said *3yo Kryonaut
-Spent 20+ hours troubleshooting, remounting, trying different things, researching the issue online especially in this thread (I went like 11 months back)

*The fix:*
-Used metal polish and jewelry cleaning cloths on the copper coldplate to fix some scratches, pitting and imperfections
-Applied GC-Extreme as CS9k described in above quote
-Applied "washer mod" using 1mm M3 Nylon washers
-Max hotspot temps in TS: 83c, before clearly 108c (thermal shutdown threshold for XTXH)

PC has been in shambles and I've been working on it all week. I can finally game at 2730/2830 again and not care!

*The Kryonaut I used was perhaps too old (3-4yrs). I used it on a CPU recently and it worked great, so maybe it's like CS9K said: oil-based pastes aren't great for bare die.


----------



## SoloCamo

Godhand007 said:


> I have time spy. No FPS hitching/stutters as far as I can see. Here are the results : I scored 17 147 in Time Spy


Meanwhile... stock clocks (mem at 2100 w/ fast timings though) and a p/l of -10 aka 229 watts vs 255 for stock.

AMD Radeon RX 6900 XT video card benchmark result - Intel Core i9-10900 Processor,ASUSTeK COMPUTER INC. TUF GAMING Z590-PLUS WIFI (3dmark.com)

My GPU score is 20k+ with an average core clock of just over 2118MHz, yet you are pulling 17k with an average clock of 2489MHz and memory set faster? And considering your CPU is considerably faster for pushing the 6900XT at this low a resolution... you see the point.

You may think your performance in the games you play is OK, but your current setup makes absolutely zero sense and you are putting completely unnecessary strain on your GPU. Even if you don't encounter a ton of throttling, your performance is all over the place. And if the games you play aren't throttling the card much, why are you trying to push such high clocks at that p/l in the first place?

My 229w setting for that 3dmark run is my 4k/60 setting and even that's overkill for plenty of modern games.

Edit: just maxed p/l to 293w (+15)

AMD Radeon RX 6900 XT video card benchmark result - Intel Core i9-10900 Processor,ASUSTeK COMPUTER INC. TUF GAMING Z590-PLUS WIFI (3dmark.com)

21,625 on gpu while still maintaining a 154mhz avg core clock deficit over your run.

Now considering my great case airflow, and also being on a reference cooler, there is no way your card is not getting roasted during games.


** Apparently I'm also the 3rd fastest 10900 (non-K) / 6900XT combo in the world, and the only one from the USA... go team stock clocks with a p/l increase and a slight mem OC? **
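For what it's worth, the comparison above boils down to a points-per-clock ratio. A quick sketch using the figures quoted in this exchange ("points per MHz" is only illustrative arithmetic, not an official 3DMark metric):

```python
# Time Spy GPU score divided by average core clock, using the numbers
# quoted in the posts above. Purely illustrative arithmetic.
runs = {
    "SoloCamo (stock clocks, -10% PL)": {"gpu_score": 20000, "avg_clock_mhz": 2118},
    "Godhand007 (2700MHz+ OC)":         {"gpu_score": 17147, "avg_clock_mhz": 2489},
}

for name, run in runs.items():
    points_per_mhz = run["gpu_score"] / run["avg_clock_mhz"]
    print(f"{name}: {points_per_mhz:.2f} points/MHz")
```

A higher average clock paired with a lower score is the signature of heavy throttling mid-run: the card spikes to its clock target, then spends most of the benchmark well below it.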


----------



## Godhand007

ZealotKi11er said:


> No, the power limit is not being reached. You might get those clocks and that power for 1-2s, and then you run in a throttled state the entire time. You are basically killing the card, and if you keep playing like this it will be time to RMA.


Yes it is. HWiNFO, RivaTuner and GPU-Z all confirm it. TimeSpy is an outlier; my clocks are always above 2700MHz in the games that I play, and around ~2800MHz more often than not. As for RMA due to overclocking, that is a risk that all overclockers take. Mine might be a little bit higher, but every overclocker is "killing" their card if they deviate from specifications.


----------



## Godhand007

SoloCamo said:


> Meanwhile... stock clocks (mem at 2100 w/ fast timings though) and a p/l of -10 aka 229 watts vs 255 for stock.
> 
> AMD Radeon RX 6900 XT video card benchmark result - Intel Core i9-10900 Processor,ASUSTeK COMPUTER INC. TUF GAMING Z590-PLUS WIFI (3dmark.com)
> 
> My gpu score is 20k+ with an average core clock of just over 2118 yet you are pulling 17k with an average clock of 2489 and memory set faster? And considering your cpu is considerably faster to push the 6900XT at this low of a res well... you see the point.
> 
> You may think your performance in the games you play is ok, but your current setup makes absolutely zero sense and you are putting completely unnecessary strain on your gpu. Even if you don't encounter a ton of throttling your performance is all over the place.. and if the games you play aren't throttling the card much why are you trying to push such high clocks in the first place at that p/l?
> 
> My 229w setting for that 3dmark run is my 4k/60 setting and even that's overkill for plenty of modern games.
> 
> Edit: just maxed p/l to 293w (+15)
> 
> AMD Radeon RX 6900 XT video card benchmark result - Intel Core i9-10900 Processor,ASUSTeK COMPUTER INC. TUF GAMING Z590-PLUS WIFI (3dmark.com)
> 
> 21,625 on gpu while still maintaining a 154mhz avg core clock deficit over your run.
> 
> Now considering my great case airflow, and also being on a reference cooler, there is no way your card is not getting roasted during games.
> 
> 
> ** Apparently I'm also the 3rd faster 10900 (non k) / 6900XT combo in the world too and the only one from the USA.... go team stock clocks with a p/l increase and slight mem oc? **


My setup makes sense if I want to achieve the maximum FPS possible in games where I am not power limited. That condition necessitates putting such a strain on the GPU. As for your points about scores, I can get those (or even better) in TS with lower clocks and voltages, but I will lose FPS in games where it matters more.


----------



## alceryes

Godhand007 said:


> I can get those (or even better) in TS with lower clocks and voltages but I will lose FPS in games where it matters more.


Interesting.
Can you give us some examples of games where you get higher performance with these high clocks but a lower Time Spy score? Do any of these games have built-in benchmarks?


----------



## SoloCamo

Godhand007 said:


> My setup makes sense if I want to achieve the maximum FPS possible in a games where I am not power limited. The aforementioned condition necessitates putting such a strain on the GPU. As for your points about scores, I can get those (or even better) in TS with lower clocks and voltages but I will lose FPS in games where it matters more.


What game pushes those kinds of temps, holds 2700MHz, and doesn't hit the power limit on a stock cooler? I'm on a reference air cooler too, and even at 293W, games where I can hold higher clocks at lower voltages still get very toasty after a short while.

Which goes to my point about your setup throttling so hard in TS. TS is just very short runs with gaps between them, which lets the card cool off. If it is throttling that hard there, I'm struggling to see how you can game for a remotely extended period without major throttling.


----------



## ZealotKi11er

SoloCamo said:


> What game pushes those kind of temps, holds 2700mhz and doesn't hit power limit on on a stock cooler? I'm on a reference air cooler too and even at 293w games that I can hold higher clocks at lower voltags in still get very toasty after a short while.
> 
> Which goes on to my point about your setup throttling so hard in TS. TS is just very short runs with gaps between them which allows cooling off.. If it is throttling that hard here I'm struggling to see where you can game for a remotely extended period of time without major throttling.


If reference card could do 2800MHz, AMD would have clocked it that high and let it throttle 100% of the time.


----------



## Godhand007

SoloCamo said:


> What game pushes those kind of temps, holds 2700mhz and doesn't hit power limit on on a stock cooler? I'm on a reference air cooler too and even at 293w games that I can hold higher clocks at lower voltags in still get very toasty after a short while.
> 
> Which goes on to my point about your setup throttling so hard in TS. TS is just very short runs with gaps between them which allows cooling off.. If it is throttling that hard here I'm struggling to see where you can game for a remotely extended period of time without major throttling.


Halo Infinite, FFXIV (I know it's not a demanding game). I am looking into something cheap which has a built-in benchmark. BTW, the discussion is drifting in a different direction from my initial point. Forget about my setup: do you or others encounter sudden, abrupt hitches, spikes or stutters when the PL is reached?


----------



## Godhand007

ZealotKi11er said:


> If reference card could do 2800MHz, AMD would have clocked it that high and let it throttle 100% of the time.


But I can do 2800MHz. My card is the proof (even if it is with ifs and buts). I don't get TDRs or crashes in any of the games that I play. As for why companies don't push things to the absolute limit, they have to cater for worst-case scenarios and stick to their committed specifications.


----------



## Godhand007

alceryes said:


> Interesting.
> Can you give us some examples of games where you get higher performance with these high clocks in those games but a lower Time Spy? Do any of these games have built in benchmarks?


Working on this. Does the Tomb Raider demo contain a built-in benchmark? I am downloading the FFXV benchmark (a bit older, but still demanding in certain areas).


----------



## J7SC

...No matter which way you slice it, RDNA2 does boost quite high which also helps it to compete against the 3090 (I run both in a work-play setup). To get the clock-speeds with the 6900XT at lower left, I did have to lock voltage on my XTX to 1.218v. 'Normally', it runs between 2650 and 2750 in games - it is after all a boost-algorithm chip so it won't 'stand still'. MPT PL was a generous 450W for the HWInfo below.

@SoloCamo - re. the earlier posts on Superposition 4K and 'scene 10 vs scene 13/14' minimum fps, I am starting to understand it a bit better - in fact, I accidentally shifted the low fps from scene 10 to scene 13/14 per below on the Strix 3090...the change had to do with GSync and monitor refresh rates. At the end of the day, there will always be an absolute min fps unless it is all a straight line, but still - I wanted to make some progress re. improving min fps (I usually run all that on a C1 OLED 120Hz 4K).


----------



## ZealotKi11er

Godhand007 said:


> Halo Infinite, FFXIV (I know it's not a demanding game). I am looking into something cheap which has built in benchmarks. BTW, the discussion is diverting into a different direction from my initial point. Forget about my setup, do you or others encounter sudden abrupt hitches, spikes or stutters when PL is reached ?


Show me a gameplay video of you running Halo Infinite with the card at 400W and 2800MHz. My water-cooled card can almost keep up.


----------



## SoloCamo

Godhand007 said:


> Halo Infinite, FFXIV (I know it's not a demanding game). I am looking into something cheap which has built in benchmarks. BTW, the discussion is diverting into a different direction from my initial point. Forget about my setup, do you or others encounter sudden abrupt hitches, spikes or stutters when PL is reached ?


No, I don't have any hitching or stutters at all; in fact this is one of the smoothest cards I've had in years. However, when testing overclocks where the card downclocks because the hotspot got too high, there is a noticeable hitch due to the sudden drop in performance level.

But from a power limit perspective I've never had the issue: I set the limit and for the most part the card settles at it. Even if there is a minor change, it is not noticeable, as my clocks remain pretty consistent.

How are you monitoring clock speeds while in game?


----------



## alceryes

Godhand007 said:


> Working on this. Does the Tomb Raider Demo contain a built in benchmark ? I am downloading ffxv benchmark (a bit older but still demanding in certain areas).


It appears that the SotTR demo does have a built-in benchmark. If you run it, please post both your OC'd scores and your stock scores.


----------



## J7SC

...interesting stuff about CDNA2. Hopefully, much of what AMD learns with tile-interconnect mGPU will make it into RDNA4 (3?)


----------



## Godhand007

ZealotKi11er said:


> Show me gameplay video of you running Halo Infinite with the card at 400W and 2800MHz. My water cooled card can almost keep up.


Here it is. Though this undue focus on my particular setup is not going to help us reach a conclusion about _hitching/stutter while PL is being hit_.


----------



## kratosatlante

J7SC said:


> ...No matter which way you slice it, RDNA2 does boost quite high which also helps it to compete against the 3090 (I run both in a work-play setup). To get the clock-speeds with the 6900XT at lower left, I did have to lock voltage on my XTX to 1.218v. 'Normally', it runs between 2650 and 2750 in games - it is after all a boost-algorithm chip so it won't 'stand still'. MPT PL was a generous 450W for the HWInfo below.
> 
> @SoloCamo - re. the earlier posts on Superposition 4K and 'scene 10 vs scene 13/14' minimum fps, I am starting to understand it a bit better - in fact, I accidentally shifted the low fps from scene 10 to scene 13/14 per below on the Strix 3090...the change had to do with GSync and monitor refresh rates. At the end of the day, there will always be an absolute min fps unless it is all a straight line, but still - I wanted to make some progress re. improving min fps (I usually run all that on a C1 OLED 120Hz 4K).
> 


insane perf, can you run sotor bench?
the best i get with stock paste, rx 6900 xt formula, 5600x pbo +200, 4x8 3800mhz cl14 trfc 228, graphics ultra 4k no taa


rtx 3090 rog strix white only get this( ram 4x8 4000mh cl 16, 5600x curve optimizer)
pl 123% and vram +1600


4k ultra no taa, rt ultra, dlss ultra ( 4x8 4000mhz cl 16, 5600x curve optimizer)


----------



## ZealotKi11er

Godhand007 said:


> Here it is. Though this undue focus on my particular setup is not going to help us reach a conclusion about _hitching/stutter while PL is being hit_.


Your card does not make sense at all. Your temps are actually low for a stock cooler running at 330-340W, considering stock is 255W.


----------



## SoloCamo

ZealotKi11er said:


> Your card does not make sense at all. Your temps are actually low for a stock cooler running at 330-340W, considering stock is 255W.


This is my confusion as well. I can hit those temps using 1150mV and a 293W PL with no issue at lower clocks. And again, I'm in a case with great cooling and room temps are rarely even above 70F. Going to have to run Halo myself and test, because as it stands he either has Lisa Su's personal stock-cooler 6900XT or something isn't accurate here.

On the note of hitting PL limit and stuttering, again - not an issue.


----------



## alceryes

ZealotKi11er said:


> Your card does not make sense at all. Your temps are actually low for a stock cooler running at 330-340W, considering stock is 255W.


I thought stock was 265W(?). Add the 15% power limit adjustment and you get ~305W, which is the max I've seen listed on various sites (without flashing, using MPT, or whatever).
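The slider math above works out to the same figure. A one-line sketch, assuming the 265W stock TGP and the +15% software cap stated in this post:

```python
# +15% on a 265W stock power limit lands at the ~305W ceiling quoted
# by various sites. The stock TGP value is the one stated above.
stock_tgp_w = 265
slider_pct = 15

max_tgp_w = stock_tgp_w * (1 + slider_pct / 100)
print(f"{max_tgp_w:.0f}W")  # -> 305W (304.75 rounded)
```

Anything above that ceiling (e.g. the 330-340W figures discussed in this thread) requires MPT or a BIOS flash rather than the stock slider.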


----------



## Godhand007

SoloCamo said:


> This is my confusion as well. I can hit those temps using 1150mV and a 293W PL with no issue at lower clocks. And again, I'm in a case with great cooling and room temps are rarely even above 70F. Going to have to run Halo myself and test, because as it stands he either has Lisa Su's personal stock-cooler 6900XT or something isn't accurate here.
> 
> On the note of hitting PL limit and stuttering, again - not an issue.





ZealotKi11er said:


> Your card does not make sense at all. Your temps are actually low for a stock cooler running at 330-340W, considering stock is 255W.


Maybe this (see the last part of the video) is more to your liking. These are the temps when playing on a BTB map in Halo Infinite. FYI, my GPU cooler is running at max speed with no side panel on the case.


As to the point about the PL limit, I seem to remember asking Gamers Nexus the same question, and they said that it can cause hitching. It was also the case in one of their charts. Trying to find that comment.


----------



## SoloCamo

Godhand007 said:


> Maybe this (see the last part of the video) is more to your liking. These are the temps when playing on a BTB map in Halo Infinite. FYI, my GPU cooler is running at max speed with no side panel on the case.


So Halo at 1440p is why; that's all. I just did a few rounds on the same map. At 4K, those clocks are completely unreasonable. However, I just did a few runs at 1125mV, hovering just shy of 2700MHz with my stock cooler only running 1400rpm, and temps were WELL below throttling at 1440p. My PL barely showed 225W used when I left the PL at the stock 255W.

If this is the main game you play and the one you are so concerned about, it begs the question: why are you dumping so much power into it?


----------



## alceryes

Godhand007 said:


> Working on this. Does the Tomb Raider Demo contain a built in benchmark ? I am downloading ffxv benchmark (a bit older but still demanding in certain areas).


Can you post your Tomb Raider benchmark score?


----------



## Godhand007

SoloCamo said:


> So Halo at 1440 res is why, that's all. I just did a few round on the same map. At 4k, these clocks are completely unreasonable. However, just did a few runs at 1125mV hovering just shy of 2700mhz with my stock cooler only running 1400rpm and temps were WELL below throttling at 1440p. My PL barely showed 225w used when I left pl at the stock 255w.
> 
> If this is the main game you play and one you are so concerned this then begs the question as to why you are dumping so much power into it?


Clock speeds are inversely proportional to resolution; that's a known thing. I have mentioned many times that for the games I play, these settings are fine. They are not going to work for everybody. You are undervolting your card a lot; mine would crash at even 1150mV (actual voltage) at moderate clocks (~2600MHz).

I am not concerned about anything related to the card. The performance, overclocks and settings are all great and stable. My only doubt was the PL and its relation to hitching, since there is no concrete info on this topic on the internet. People just got too interested in my personal settings. Also, I am only putting around 70-80W more through the card than what AMD allows with the 15% PL limit. It's actually the voltage that should have caught your eye.


----------



## Godhand007

alceryes said:


> Can you post your Tomb Raider benchmark score?


It's just around ~10% more than stock. Nothing special (1440p/ SMAA4x).


----------



## ZealotKi11er

Godhand007 said:


> Clock speeds are inversely proportional to resolution; that's a known thing. I have mentioned many times that for the games I play, these settings are fine. They are not going to work for everybody. You are undervolting your card a lot; mine would crash at even 1150mV (actual voltage) at moderate clocks (~2600MHz).
> 
> I am not concerned about anything related to the card. The performance, overclocks and settings are all great and stable. My only doubt was the PL and its relation to hitching, since there is no concrete info on this topic on the internet. People just got too interested in my personal settings. Also, I am only putting around 70-80W more through the card than what AMD allows with the 15% PL limit. It's actually the voltage that should have caught your eye.


The PL cannot cause hitching, because all stock cards are PL-limited and run at max PL. Subject closed. No need to drag this out any longer.


----------



## Godhand007

ZealotKi11er said:


> The PL cannot cause hitching, because all stock cards are PL-limited and run at max PL. Subject closed. No need to drag this out any longer.


Dude, you were the one digressing and dragging the whole subject in a different direction by focusing on my personal setup. In any case, I still like to respond to people who join a discussion a bit late, hence the reply and explanation. We only have opinions from three people that the PL is not an issue. Obviously it's true for stock clocks, but we are talking about overclocks (implied). You are free to not respond if you don't like the topic.
Also, I have not asked the question again since I got the opinions of a few people on the topic.


----------



## alceryes

What country


Godhand007 said:


> May be this (refer last part of the video) is more to your liking. These are the temps when playing on a BTB map in Halo Infinite. FYI, My GPU cooler is running at max speeds with no side panel on the case.
> 
> 
> 
> As to the point about PL limit, I seem to remember asking the same question to Gamers Nexus and they had said that it can cause hitching. It was also the case for one of their charts. Trying to find that comment.


...and I start getting worried when my HS temp hits 98C in Pathfinder: Kingmaker! For some reason Pathfinder: Kingmaker really tests the HS on GPUs.
God speed. You're much braver than I am with a $1000 MSRP GPU.


----------



## Godhand007

alceryes said:


> What country
> 
> ...and I start getting worried when my HS temp hits 98C in Pathfinder: Kingmaker! For some reason Pathfinder: Kingmaker really tests the HS on GPUs.
> God speed. You're much braver than I am with a $1000 MSRP GPU.


I don't live on the edge in most areas of life. Figured fiddling with the GPU is something I can risk a little.

BTW, do you have 3466MHz RAM (CMR16GX4M2C3466C16)? I have this too (as a spare). Can't seem to find it these days.


----------



## alceryes

Godhand007 said:


> BTW, Do you have a 3466Mhz ram (CMR16GX4M2C3466C16)? I have this too (in spare). Can't seem to find it these days.


Close. It's CMK16GX4M2B3466C16.
I'm able to run it at pretty decent timings (CL 15, tRCD 17, tRP 17, tRAS 35, tRFC 619, CR 1) and speed, but it's a pain getting there if I ever want to test out different sticks or different memory settings. I have to go through 3 BIOS POSTs, setting a different setting each POST. I know it sounds strange, but if I try to set all 3 settings in one trip to the BIOS it will fail to POST 100% of the time. However, once I set them up this way they're rock solid (Memtest86 pass and all).


----------



## SoloCamo

Godhand007 said:


> I don't live on the edge in most areas of life. Figured fiddling with GPU is something that I can risk a little .
> 
> BTW, Do you have a 3466Mhz ram (CMR16GX4M2C3466C16)? I have this too (in spare). Can't seem to find it these days.


I think the concern is that you aren't really getting much of a performance increase versus how much stress you are putting on your card.

For example... just ran the exact settings in SOTR (1440P / HIGHEST preset / 4x SMAA)

First results are stock clocks, with the only difference being mem at 2100MHz w/ fast timings. Second results are P/L set to +15 (293W) and clock set to 2550 (held 2500 for most of the run). You are dumping a tremendous amount of voltage and heat into your card (especially when you can't even get a smooth TS run).

Please keep in mind, I'm on a 10900 non-K and you are on a 12700KF, and SotTR will benefit from the CPU upgrade, as you are getting far higher average and max CPU results. So I'm confident that if I paired my current GPU settings with your CPU, my results would match yours at much lower clocks, temps, etc.


----------



## Godhand007

SoloCamo said:


> I think the concern is that you aren't really getting much of a performance increase versus how much stress you are putting on your card.
> 
> For example... just ran the exact settings in SOTR (1440P / HIGHEST preset / 4x SMAA)
> 
> First results are stock clocks with the only difference being mem at 2100mhz w/ fast timings. Second ressults are P/L set to +15 (293w) and clock set to 2550 (held 2500 for most of the run). You are dumping a tremendous amount of voltage and heat into your card (especially when you can't even get a smooth TS.
> 
> Please keep in mind, I'm on a 10900 non k and you are on a 12700KF and SOTR will benefit from the cpu upgrade as you are getting far higher average and max cpu results. So I'm confident if I paired my current gpu setting with your cpu my results would match yours at much lower clocks, temps, etc.
> 


There is some truth in your statements. There are two reasons why I am using these settings:

1. The games I am playing right now benefit from it.
2. I want the max FPS possible out of my GPU without burning it. It is not a need or requirement, I just want it.

BTW, what are your RAM timings? It seems this benchmark is impacted by RAM frequencies and timings beyond margin-of-error stuff. Mine are given below. FYI, the IMC on the 12700K sucks, at least as far as I have read (in case you are planning an upgrade). Can't get anything over 3866MHz to boot.


----------



## alceryes

Here's mine from a 9900k. Still rockin' after 3+ years.



@Godhand007 You're leaving some CPU performance on the table by sticking with Windows 10 on a 12th gen Intel CPU. Although I'm not a fan of Win 11, the thread director does give 12th gen Intel CPUs a nice boost.


----------



## Godhand007

alceryes said:


> Here's mine from a 9900k. Still rockin' after 3+ years.
> 
> 
> @Godhand007 You're leaving some CPU performance on the table by sticking with Windows 10 on a 12th gen Intel CPU. Although I'm not a fan of Win 11, the thread director does give 12th gen Intel CPUs a nice boost.


I am on Windows 11. Let me try something. Nevermind.


----------



## alceryes

Ahhh, SotTR doesn't recognize Windows 11 so it just lists it as Windows 10.


----------



## SoloCamo

Godhand007 said:


> There is same truth in your statements. There are two reasons for why I am using these settings:
> 
> 1. The games I am playing right now benefit from it.
> 2. I want the max FPS possible out of my GPU without burning it. It is not a need or requirement, I just want it.
> 
> BTW, What are your RAM timings? Its seems this benchmark is impacted by ram frequencies and timings beyond margin of error stuff. Mine are given below. FYI, The IMCs on 12700Ks sucks, well at least, as far as I have read (in case you are planning an upgrade). Can't get anything over 3866 MHz to boot.
> 


I'm running a cheap pair of T-Force Vulcan Z's (TLZRD432G3600HC18JDC01) I got when DDR4 was way more pricey (about $160 for this kit when it's now $116 or so). Running the same timings as XMP but bumped clocks from 3600 to 4000. This is not fast memory unfortunately and I kick myself for it, but at 4k it's not making or breaking anything for me. It refuses to do any tighter timings, and yes my tRFC is absolutely atrocious. Wouldn't even do 800 so I said screw it.


----------



## 99belle99

I ran Shadow of the Tomb Raider last night and got 99% GPU bound too, with a 3700X at 4K. And same as above, Windows 11 is reported as Windows 10.

Sorry I don't have a screenshot I forgot to take one and cannot be bothered to run it again.


----------



## SoloCamo

alceryes said:


> Here's mine from a 9900k. Still rockin' after 3+ years.
> 
> 
> @Godhand007 You're leaving some CPU performance on the table by sticking with Windows 10 on a 12th gen Intel CPU. Although I'm not a fan of Win 11, the thread director does give 12th gen Intel CPUs a nice boost.


What clocks for this run? I've also noticed all three of us are on different drivers which is throwing a monkey wrench into this.


----------



## alceryes

SoloCamo said:


> What clocks for this run? I've also noticed all three of us are on different drivers which is throwing a monkey wrench into this.


GPU - I'm undervolted to 1125mV. I've got the core set to 2600MHz. The VRAM is set to 2050MHz with 'fast timing' on.
While benching/gaming voltage maxes out at 1141mV and the core bounces around between 2550 and 2575MHz.
CPU and RAM info in sig.


----------



## ZealotKi11er

alceryes said:


> GPU - I'm undervolted to 1125mV. I've got the core set to 2600MHz. The VRAM is set to 2050MHz with 'fast timing' on.
> While benching/gaming voltage maxes out at 1141mV and the core bounces around between 2550 and 2575MHz.
> CPU and RAM info in sig.


It should not go to 1141mV if you are undervolted to 1125mV.


----------



## alceryes

ZealotKi11er said:


> It should not go to 1141mV if you are undervolted to 1125mV.


I thought the PL boost allowed it to go above the base (1125mV) I set?
I know it's working in some way as it would normally go up to 1175mV without any voltage adjustment.


----------



## ZealotKi11er

alceryes said:


> I thought the PL boost allowed it to go above the base (1125mV) I set?
> I know it's working in some way as it would normally go up to 1175mV without any voltage adjustment.


It should not go above. If you set 1125mV with Radeon settings, then it's not really 1125mV; it's whatever point on the curve the card decides to use once you've OC'd the core. That slider only works correctly if you don't touch the core clock. For example, if the stock core clock is 2400MHz and stock voltage is 1175mV, setting the voltage to 1125mV undervolts by 50mV. If you set the core to 2500MHz and the voltage to 1125mV, you are still undervolting by 50mV, but the stock voltage for 2500MHz on that card is not 1175mV; it's probably 1200 or 1225mV, so you subtract 50 from that to get your new voltage. If you set the MAX voltage with MPT, then you know for sure you are not going over. Right now, with the core set to 2600MHz, it's actually using 1141mV.
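That offset behaviour can be sketched as follows. The voltage/frequency curve values here are made up for illustration; the real per-chip curve is fused at the factory and varies card to card:

```python
# Hypothetical stock voltage/frequency curve: MHz -> mV.
# Real RDNA2 cards carry a per-chip fused curve, not these values.
VF_CURVE = {2400: 1175, 2500: 1200, 2600: 1225}

def effective_mv(core_mhz: int, slider_mv: int, stock_mhz: int = 2400) -> int:
    """The Radeon voltage slider acts as an offset from the stock curve
    point, not a hard cap: raising the core clock moves you to a higher
    curve point, and the same offset is applied there."""
    offset_mv = VF_CURVE[stock_mhz] - slider_mv  # e.g. 1175 - 1125 = 50mV undervolt
    return VF_CURVE[core_mhz] - offset_mv

print(effective_mv(2400, 1125))  # 1125mV: slider honored at the stock clock
print(effective_mv(2500, 1125))  # 1150mV: same -50mV offset, higher curve point
```

This is why a card "undervolted to 1125mV" can still report 1141mV under load once the core clock has been raised; only MPT's max-voltage field sets a true ceiling.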


----------



## J7SC

kratosatlante said:


> insane perf, can you run sotor bench?
> the best i get with stock paste, rx 6900 xt formula, 5600x pbo +200, 4x8 3800mhz cl14 trfc 228, graphics ultra 4k no taa
> 
> rtx 3090 rog strix white only get this( ram 4x8 4000mh cl 16, 5600x curve optimizer)
> pl 123% and vram +1600
> 
> 4k ultra no taa, rt ultra, dlss ultra ( 4x8 4000mhz cl 16, 5600x curve optimizer)


...I don't have the SotTR bench, but will try to get it... Also, I managed to improve my Superposition 4K scores for both the 6900XT and Strix 3090 - the latter managed to break 20k, and that's w/o an MSI AB curve or -smi, just sliders and a slightly cooler ambient temp of 15C.

6900XT (XTX)









3090 Strix


----------



## alceryes

ZealotKi11er said:


> It should not go above. If you set 1125mV with Radeon settings, then it's not really 1125mV; it's whatever point on the curve the card decides to use once you've OC'd the core. That slider only works correctly if you don't touch the core clock. For example, if the stock core clock is 2400MHz and stock voltage is 1175mV, setting the voltage to 1125mV undervolts by 50mV. If you set the core to 2500MHz and the voltage to 1125mV, you are still undervolting by 50mV, but the stock voltage for 2500MHz on that card is not 1175mV; it's probably 1200 or 1225mV, so you subtract 50 from that to get your new voltage. If you set the MAX voltage with MPT, then you know for sure you are not going over. Right now, with the core set to 2600MHz, it's actually using 1141mV.


Understood.
So instead of setting a static maximum, it appears that I just lowered the voltage range that the card uses. I'm okay with that. I just wanted to reduce heat as much as possible without sacrificing performance.


----------



## bunnybutt

You guys have put the fear into me.
I guess I should be thankful that I became a member and got some insight.

However, now I'm just stressed out. I think I need to bunny up and learn to repaste my 6900XT Toxic.


----------



## alceryes

bunnybutt said:


> However, now I'm just stressed out. I think I need to bunny up and learn to repaste my 6900XT Toxic.


Repasting is not a requirement unless your card is regularly throttling due to thermal limits. You're reading/posting in an enthusiast thread. Many here repaste because they are pushing high voltages and clocks. Others do it just because they can.
I did the PCB-back thermal pad mod because it's super easy and lowers temps for the little bit of overclocking I'm currently doing. I'm not doing the repaste (yet) because it appears that AMD did a decent job on the front of my card (reference RX 6900 XT).


----------



## SoloCamo

ZealotKi11er said:


> It should not go above that. If you set 1125 mV in Radeon Settings, it's not really 1125; it's whatever point on the voltage/frequency curve the card decides to use once you've overclocked the core. That slider only works as an absolute cap if you don't touch the core clock. For example, if the stock core clock is 2400 MHz at a stock voltage of 1175 mV, setting the voltage to 1125 mV undervolts by 50 mV. If you raise the core to 2500 MHz with the voltage at 1125 mV, you are still undervolting by 50 mV, but the stock voltage for 2500 MHz on that card isn't 1175 mV; it's probably 1200 or 1225 mV, so you subtract 50 from that to get your new effective voltage. If you set the MAX voltage with MPT, then you know for sure you are not going over. Right now, if you have the core set to 2600 MHz, it's actually using 1141 mV.


Correct. Make any change to the core clock and the Radeon software ignores your voltage input. I run stock clocks for this reason. Time Spy is the only bench that needs me to run at 1115 mV for 100% stability. 99% of the games I play can get by at 1075 mV, with one or two wanting 1100 mV.

I keep mine at these lower voltages with the P/L at -10% (229 W), and yet I typically hold higher and more consistent clocks than stock. Even if I leave the P/L at stock with the undervolt, I'm rarely below 2400 MHz unless the game is really demanding. Fan RPM is barely above 1200 and the hotspot rarely hits 80.

On the note of hotspots... my card does concern me a bit. At stock P/L or below, the delta between hotspot and overall GPU temp is small. If I go to the 293 W P/L (+15%), the hotspot seems to run away dramatically (I've seen 30+ °C deltas). I'm tempted to take it apart and repaste, but I don't know if this is a known issue on the reference cooler?
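A quick sanity check for the edge-to-hotspot gap being discussed; note the 25 °C warning delta is only a rule of thumb drawn from this thread, and the 110 °C figure is the commonly cited RDNA2 junction throttle point (custom vBIOSes, such as the XTXH parts discussed later, may throttle lower):

```python
def hotspot_delta_status(edge_c: float, hotspot_c: float,
                         warn_delta_c: float = 25.0,
                         throttle_c: float = 110.0) -> str:
    """Classify the edge-to-hotspot temperature gap. A runaway delta at
    high power limits usually points at paste pump-out or uneven
    cold-plate contact rather than inadequate airflow."""
    if hotspot_c >= throttle_c:
        return "throttling"
    if hotspot_c - edge_c > warn_delta_c:
        return "check mount/paste"
    return "ok"

print(hotspot_delta_status(62, 75))    # small delta
print(hotspot_delta_status(70, 105))   # 35 C delta, like the runaway case above
```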


----------



## jonRock1992

I'm hoping the rumored 6950XT vBIOS can be flashed onto the XTXH GPUs. I still can't unlock my memory because the 6900XT LC vBIOS doesn't work with my specific mobo+GPU combo (it won't POST due to USB over-current protection after flashing the vBIOS).


----------



## ZealotKi11er

jonRock1992 said:


> I'm hoping the rumored 6950XT vBIOS can be flashed onto the XTXH GPUs. I still can't unlock my memory because the 6900XT LC vBIOS doesn't work with my specific mobo+GPU combo (it won't POST due to USB over-current protection after flashing the vBIOS).


If it has a different device ID, it will not work.


----------



## alceryes

SoloCamo said:


> Correct. Make any change to the core clock and the Radeon software ignores your voltage input. I run stock clocks for this reason. Time Spy is the only bench that needs me to run at 1115 mV for 100% stability. 99% of the games I play can get by at 1075 mV, with one or two wanting 1100 mV.
> 
> I keep mine at these lower voltages with the P/L at -10% (229 W), and yet I typically hold higher and more consistent clocks than stock. Even if I leave the P/L at stock with the undervolt, I'm rarely below 2400 MHz unless the game is really demanding. Fan RPM is barely above 1200 and the hotspot rarely hits 80.
> 
> On the note of hotspots... my card does concern me a bit. At stock P/L or below, the delta between hotspot and overall GPU temp is small. If I go to the 293 W P/L (+15%), the hotspot seems to run away dramatically (I've seen 30+ °C deltas). I'm tempted to take it apart and repaste, but I don't know if this is a known issue on the reference cooler?


But it's not fully ignoring my voltage setting; it's just lowering the range that the card's voltage fluctuates in. If I leave the voltage at stock, it'll run up to 1175 mV. With an undervolt to 1125 mV, its high is now 1141 mV. I'd still call it a valid undervolt; it's just not a static high mark.

What do your temps (HS and otherwise) actually get to after 30 mins of a demanding game/benchmark? When I play Pathfinder: Kingmaker my HS temp gets to 98 °C. It's a little worrying but still within spec. Other games don't push my HS temp above 93-96 °C.


----------



## SoloCamo

alceryes said:


> But it's not fully ignoring my voltage setting; it's just lowering the range that the card's voltage fluctuates in. If I leave the voltage at stock, it'll run up to 1175 mV. With an undervolt to 1125 mV, its high is now 1141 mV. I'd still call it a valid undervolt; it's just not a static high mark.
> 
> What do your temps (HS and otherwise) actually get to after 30 mins of a demanding game/benchmark? When I play Pathfinder: Kingmaker my HS temp gets to 98 °C. It's a little worrying but still within spec. Other games don't push my HS temp above 93-96 °C.


I'll have to test some more, but some games that run higher clocks (even BFV at 4K) at 293 W will hit 100-107 °C hotspots in an hour or so. I may need to turn up the fan RPM, though. Again, this is at a 293 W P/L.


----------



## alceryes

SoloCamo said:


> I'll have to test some more, but some games that run higher clocks (even BFV at 4K) at 293 W will hit 100-107 °C hotspots in an hour or so. I may need to turn up the fan RPM, though. Again, this is at a 293 W P/L.


107 °C is definitely too hot for me. What model is yours? Have you done any mods?


----------



## SoloCamo

alceryes said:


> 107 °C is definitely too hot for me. What model is yours? Have you done any mods?


Ebay special... AMD 6900XT. Reference cooler.


----------



## J7SC

jonRock1992 said:


> I'm hoping the rumored 6950X vBIOS can be flashed onto the XTXH GPU's. I still can't unlock my memory because the 6900XT LC vBIOS doesn't work with my specific mobo+GPU combo (won't post due to USB over current protection after flashing the vBIOS).


Per my more-or-less weekly address/begging, I'm still hoping for the MPT team to find that 'VRAM > 2150 INHIBITOR'... the last thing I would like to mod on my XTX.

On Hotspot temps, even with hours of intense gaming / benching on 400W (before PL slider) yesterday, Hotspot stayed in the low 60s. Card is water-cooled with thermal putty on the VRAM and an extra heatsink on the back.

Cooling the VRAM works, but only to a point... it can avoid errors due to heat, but it can't magically raise the effective VRAM clock ceiling.


----------



## jonRock1992

J7SC said:


> Per my more-or-less weekly address/begging, I'm still hoping for the MPT team to find that 'VRAM > 2150 INHIBITOR'... the last thing I would like to mod on my XTX.
> 
> On Hotspot temps, even with hours of intense gaming / benching on 400W (before PL slider) yesterday, Hotspot stayed in the low 60s. Card is water-cooled with thermal putty on the VRAM and an extra heatsink on the back.
> 
> Cooling the VRAM works, but only to a point... it can avoid errors due to heat, but it can't magically raise the effective VRAM clock ceiling.


I just want those 2400MHz VRAM clocks. Definitely don't want to get a 6950XT if I could theoretically get the same performance on my current hardware. I really wish there was a way to disable the USB over current protection on my mobo so I could post with the LC vBIOS.


----------



## J7SC

jonRock1992 said:


> I just want those 2400MHz VRAM clocks. Definitely don't want to get a 6950XT if I could theoretically get the same performance on my current hardware. I really wish there was a way to disable the USB over current protection on my mobo so I could post with the LC vBIOS.


You probably already tried all that, but couldn't you disable USB overcurrent protection (and the relevant USB port itself) in the bios ?


----------



## jonRock1992

J7SC said:


> You probably already tried all that, but couldn't you disable USB overcurrent protection (and the relevant USB port itself) in the bios ?


I believe the issue resides in the fact that my GPU does not have a USB-C output.


----------



## J7SC

jonRock1992 said:


> I believe the issue resides in the fact that my GPU does not have a USB-C output.


ahh, ok.


----------



## alceryes

SoloCamo said:


> Ebay special... AMD 6900XT. Reference cooler.


You should definitely do the easy PCB back thermal pad mod. It's literally 5 screws (maybe 6), take off the back plate, put nicely cut pads all over, screw plate back on.









[Official] AMD Radeon RX 6900 XT Owner's Club


I've been looking into it. The Merc319 limited black is basically the same PCB as a red devil Ultimate, but with slightly better input filtering. The MSI gaming trio z has the second best PCB, but the cooler allows rather high junction temps and is rather noisy. AsRock formula OC is the best...




www.overclock.net


----------



## Godhand007

alceryes said:


> You should definitely do the easy PCB back thermal pad mod. It's literally 5 screws (maybe 6), take off the back plate, put nicely cut pads all over, screw plate back on.
> 
> [Official] AMD Radeon RX 6900 XT Owner's Club
> 
> 
> I've been looking into it. The Merc319 limited black is basically the same PCB as a red devil Ultimate, but with slightly better input filtering. The MSI gaming trio z has the second best PCB, but the cooler allows rather high junction temps and is rather noisy. AsRock formula OC is the best...
> 
> 
> 
> 
> www.overclock.net


I have done this too. I was hoping it would reduce HS temps (by bringing the overall cooling up), but it had zero impact.


----------



## Scorpion667

TimeSpy score: 22593, GPU 25557

GPU: 6900XT Toxic EE (XTXH, 360mm AIO)
GPU Clock: 2802Mhz
VRAM Clock: 2164
Driver: 22.1.2
Wattman voltage: 1080mv
Power Limit: 420W
*Hotspot Temp: 92C*
GPU Temp: 50c
Ambient: 21c
Paste: GC Extreme, spread with card, thick, tightened screws in x-pattern half turn at a time, medium mounting pressure
CPU: 9900KS @ 5.2
Mem: 4400c16 CR1
Mobo: Apex XI with supplemental PCIe power connected
Side panels: none

Having a hard time with the hotspot otherwise I think this card can do better. Out of the box it was hitting 95c+ hotspot as well at 2800Mhz with PL slider maxed and stock 1.2v. Tried to repaste 15+ times trying various methods (spread, line, x-pattern, thick/medium/small amount), tightening screws in x pattern half turn at a time, washer mod, different pastes etc and this is probably the best or second best mount. I think short of custom loop, this is as far as she goes 

Open to any and all suggestions for lowering those hotspot temps further! Would love to try 440w power limit but it's pointless if I'm headbutting the 95c hotspot throttle threshold on this XTXH chip


----------



## alceryes

Godhand007 said:


> I have done this too. I was hoping it would reduce HS temps (by bringing the overall cooling up), but it had zero impact.


It could be that you're just giving it too much voltage and running it at too high a frequency for it to make a difference.
Do you have pics of the pad placement? What pads did you use?


----------



## jonRock1992

Scorpion667 said:


> TimeSpy score: 22593, GPU 25557
> 
> GPU: 6900XT Toxic EE (XTXH, 360mm AIO)
> GPU Clock: 2802Mhz
> VRAM Clock: 2164
> Driver: 22.1.2
> Wattman voltage: 1080mv
> Power Limit: 420W
> *Hotspot Temp: 92C*
> GPU Temp: 50c
> Ambient: 21c
> Paste: GC Extreme, spread with card, thick, tightened screws in x-pattern half turn at a time, medium mounting pressure
> CPU: 9900KS @ 5.2
> Mem: 4400c16 CR1
> Mobo: Apex XI with supplemental PCIe power connected
> Side panels: none
> 
> Having a hard time with the hotspot otherwise I think this card can do better. Out of the box it was hitting 95c+ hotspot as well at 2800Mhz with PL slider maxed and stock 1.2v. Tried to repaste 15+ times trying various methods (spread, line, x-pattern, thick/medium/small amount), tightening screws in x pattern half turn at a time, washer mod, different pastes etc and this is probably the best or second best mount. I think short of custom loop, this is as far as she goes
> 
> Open to any and all suggestions for lowering those hotspot temps further! Would love to try 440w power limit but it's pointless if I'm headbutting the 95c hotspot throttle threshold on this XTXH chip


Have you tried liquid metal? It should drop your hotspot temp by around 15C if it's done right. I use liquid metal with my XTXH GPU and it allows my gpu to clock higher and have a ton of thermal headroom. I use a 450W PL and 1287mV at 2875MHz in games and my hotspot is usually in the low to mid 60's. My loop is simple with only a single 360mm slim rad dedicated to the GPU with gentle typhoon fans. My CPU uses an AIO so I don't have to worry about the CPU heating up my GPU loop.


----------



## Godhand007

alceryes said:


> It could be that you're just giving it too much voltage and running it at too high a frequency for it to make a difference.
> Do you have pics of the pad placement? What pads did you use?


I doubt it, as reducing power consumption has a direct impact on HS even with my settings. I am using _Gelid Solutions GP-Extreme 12 W/mK thermal pads, 80x40x3.0 mm._ I don't have any pics, though.


----------



## J7SC

Godhand007 said:


> I doubt it, as reducing power consumption has a direct impact on HS even with my settings. I am using _Gelid Solutions GP-Extreme 12 W/mK thermal pads, 80x40x3.0 mm._ I don't have any pics, though.


I'm using Gelid OC Extreme thermal paste on the die and PG-10 thermal putty on the VRAM and on the back...even w/ 450W and 2800+ on the GPU, GPU and RAM temps stay in the low 40s, and Hotspot in the mid-60s...mind you, all that is with a big loop and double / triple core rads (1200x64).

All that said, if your GPU temp is relatively ok but your HS is out of line and you remounted already several times, it likely is due to uneven contact somewhere, perhaps even the cooler assembly itself. Alternatively, pad size on VRAM / power stages could also be an issue. If you don't manage to get the HS delta settled, I would consider switching VRAM and perhaps power stages over to thermal putty - it conforms much more easily to the available space, nooks and crannies. 

Another issue might be that either your die or the cooler's cold-plate are not flat - this is one reason why folks recommend Gelid-OC on the die as it is thicker..but I think you already tried that ?


----------



## Godhand007

J7SC said:


> I'm using Gelid OC Extreme thermal paste on the die and PG-10 thermal putty on the VRAM and on the back...even w/ 450W and 2800+ on the GPU, GPU and RAM temps stay in the low 40s, and Hotspot in the mid-60s...mind you, all that is with a big loop and double / triple core rads (1200x64).
> 
> All that said, if your GPU temp is relatively ok but your HS is out of line and you remounted already several times, it likely is due to uneven contact somewhere, perhaps even the cooler assembly itself. Alternatively, pad size on VRAM / power stages could also be an issue. If you don't manage to get the HS delta settled, I would consider switching VRAM and perhaps power stages over to thermal putty - it conforms much more easily to the available space, nooks and crannies.
> 
> Another issue might be that either your die or the cooler's cold-plate are not flat - this is one reason why folks recommend Gelid-OC on the die as it is thicker..but I think you already tried that ?


Yours is a completely different situation. I'm on the stock air cooler and have just done the backplate mod.


----------



## J7SC

Godhand007 said:


> Yours is a completely different situation. I'm on the stock air cooler and have just done the backplate mod.


I understand - but the paste and putty I referred to can be used on a stock cooler as well and might / might not help...


----------



## alceryes

Godhand007 said:


> I doubt it, as reducing power consumption has a direct impact on HS even with my settings. I am using _Gelid Solutions GP-Extreme 12 W/mK thermal pads, 80x40x3.0 mm._ I don't have any pics, though.


Did you do just the memory module (underside) area with those pads or the VRMs as well?


----------



## Godhand007

alceryes said:


> Did you do just the memory module (underside) area with those pads or the VRMs as well?


I think I covered most of the relevant stuff.


----------



## alceryes

Godhand007 said:


> I think I covered most of the relevant stuff.


If you've got 3 mm pads on the VRMs (which sit significantly higher than the PCB), you may have light or even no contact between the pads placed directly on the PCB and the back plate. The Gelid pads are pretty squishy, but it's something to think about. I used 3 mm directly on the PCB but only 1 mm on all the VRMs to be sure I had even contact with the back plate.
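The pad-thickness trade-off described there can be made concrete with a rough height calculation (the component heights below are hypothetical round numbers for illustration, not measurements of the reference card):

```python
def pad_thickness_mm(gap_to_backplate_mm: float, component_height_mm: float,
                     min_squish_mm: float = 0.5) -> float:
    """Pick a pad slightly thicker than the free space above the component
    so it actually compresses against the backplate. Taller parts (VRMs)
    therefore need thinner pads than the bare PCB right next to them."""
    free_space_mm = gap_to_backplate_mm - component_height_mm
    return free_space_mm + min_squish_mm

# Assuming a hypothetical 3 mm PCB-to-backplate gap:
print(pad_thickness_mm(3.0, 0.0))  # bare PCB: 3.5, so a squished 3 mm pad works
print(pad_thickness_mm(3.0, 2.0))  # ~2 mm tall VRM: 1.5, so closer to a 1 mm pad
```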


----------



## Scorpion667

jonRock1992 said:


> Have you tried liquid metal? It should drop your hotspot temp by around 15C if it's done right. I use liquid metal with my XTXH GPU and it allows my gpu to clock higher and have a ton of thermal headroom. I use a 450W PL and 1287mV at 2875MHz in games and my hotspot is usually in the low to mid 60's. My loop is simple with only a single 360mm slim rad dedicated to the GPU with gentle typhoon fans. My CPU uses an AIO so I don't have to worry about the CPU heating up my GPU loop.


I considered LM but I'm trying to avoid custom loop. The 360mm AIO on this Sapphire Toxic has a copper coldplate which apparently doesn't mesh well with LM.

I was watching an EK block installation video on 6900XT and the guy used a star pattern on his thermal paste application. I just tried it with GC extreme and my hotspot temps ingame dropped 7c with edge temps the same. When mating the PCB and cold plate I didn't press down as hard and ensured screws were tightened in x pattern half turn at a time. Used the washer mod as well (nylon, M3, 1mm thickness). Admittedly I didn't run TimeSpy after this mount... I have a feeling that benchmark causes very rapid pump out on my sample. Maybe the die or coldplate is more convex than intended.

At 25.6c ambient highest hotspot after 60 mins of Vanguard was 81c and edge 57. Previous mount (GC extreme spread with card, no washers) saw 89c hotspot ingame.


----------



## Justye95

Hi all, I have a question. I have a reference RX 6900 XT (custom liquid-cooled).
With MPT I have set a 350 W maximum
instead of the stock 300 W.
Is this dangerous for everyday use? I've only noticed slightly higher temperatures: at most 57-58 °C edge, with the hotspot around 72-75 °C.


----------



## SoloCamo

Justye95 said:


> Hi all, I have a question. I have a reference RX 6900 XT (custom liquid-cooled).
> With MPT I have set a 350 W maximum
> instead of the stock 300 W.
> Is this dangerous for everyday use? I've only noticed slightly higher temperatures: at most 57-58 °C edge, with the hotspot around 72-75 °C.


what voltage?


----------



## Justye95

SoloCamo said:


> what voltage?


Stock, 1175 mV.


----------



## SoloCamo

Justye95 said:


> Stock, 1175 mV.



You are fine. Just keep temps in check and it won't be a problem.


----------



## Lavacon

Anyone else running an OC'd 8700k with a 6900xt? 

I'm running at 5 GHz with a 6900XT Red Devil, and my benchmarks are trash compared to what I'm seeing here. I can't seem to crack 18,000 in Time Spy, for example. I'll link the result later.

Via AMD software: power limit is maxed, memory timings are set to fast, and I'm pretty sure my max clock is set to 2600. I had to use the min clock slider (2500) to stabilize some DX11 titles.

Temps don't seem to be an issue either. Perhaps my good ol' CPU has finally started bottlenecking me.


----------






## Justye95

SoloCamo said:


> You are fine. Just keep temps in check and it won't be a problem.


Ok thanks, but is it possible through MPT to increase the maximum voltage?


----------



## alceryes

Lavacon said:


> Anyone else running an OC'd 8700k with a 6900xt?
> 
> I'm running at 5 GHz with a 6900XT Red Devil, and my benchmarks are trash compared to what I'm seeing here. I can't seem to crack 18,000 in Time Spy, for example. I'll link the result later.
> 
> Via AMD software: power limit is maxed, memory timings are set to fast, and I'm pretty sure my max clock is set to 2600. I had to use the min clock slider (2500) to stabilize some DX11 titles.
> 
> Temps don't seem to be an issue either. Perhaps my good ol' CPU has finally started bottlenecking me.


*Remember that posted benchmarks/gaming FPS scores are usually the top .1%. They are not the norm - they are the outlier.*
No one's going to post 'meh' benchmark/gaming FPS videos. Benchers/reviewers/enthusiasts tweak, bench, tweak, bench, and so on, until they get a score they deem 'worthy', and only then does it get posted. These are usually people who work with technology for a living and know the ins and outs of settings and components. Many of them also know how to 'cheat' the system by running the benchmarks with unrealistic settings. They change everything from CPU priority, to benchmark config changes, to borderless/alt-tab tricks. In short, don't worry about hitting published benchmark numbers.

Are you hitting the 'norm' for Time Spy?
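The outlier point can be illustrated with a toy score distribution (entirely made-up numbers, just to show how far a posted headline run can sit from the typical result):

```python
import statistics

# Hypothetical Time Spy GPU scores: most samples cluster in the middle,
# but only the heavily tuned top runs tend to get posted publicly.
scores = [20800, 21000, 21200, 21400, 21500, 21600, 21800, 22100, 23000, 25500]

typical = statistics.median(scores)
headline = max(scores)
gap_pct = 100 * (headline - typical) / typical
print(f"typical: {typical}, headline: {headline}, gap: {gap_pct:.0f}%")
```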


----------



## Lavacon

alceryes said:


> Are you hitting the 'norm' for Time Spy?


Good question. I seem to be inline with what the Guru3D review was, but when I look in this thread and on reddit, my score is 2-3k less than what folks are posting.


----------



## alceryes

The norm for your CPU is a little over 7800 in Time Spy.








Intel Core i7-8700K Processor Review


Intel Core i7-8700K Processor Processor review with benchmark scores. See how it compares with other popular models.




benchmarks.ul.com


----------



## Lavacon

alceryes said:


> The norm for your CPU is a little over 7800 in Time Spy.
> 
> Intel Core i7-8700K Processor Review
> 
> 
> Intel Core i7-8700K Processor Processor review with benchmark scores. See how it compares with other popular models.
> 
> 
> 
> 
> benchmarks.ul.com


CPU @ 8425
GPU @ 21534

Looking low on GPU side when looking at results vs. similar setups on 3Dmarks website.

My best run: I scored 17 459 in Time Spy

I did notice my mem clock was only running @2K. Guess I'll start there tonight.


----------



## alceryes

Your CPU will hold back your overall score but the GPU score looks good.
I'm currently running a slight undervolt, the core set at 2600 MHz, and the mem at 2050 MHz with fast timings. For some reason the core won't go above 2400 MHz in TS but gets up to 2570 MHz in some games. My HS temp does get to 98 °C in a particularly core-heavy game, so I don't really want to push higher just yet. Maybe in 2 years, when I want some more FPS out of it, I'll push it more.

What are your GPU temps after heavy gaming for 30+ mins?


----------



## SoloCamo

Lavacon said:


> CPU @ 8425
> GPU @ 21534
> 
> Looking low on GPU side when looking at results vs. similar setups on 3Dmarks website.
> 
> My best run: I scored 17 459 in Time Spy
> 
> I did notice my mem clock was only running @2K. Guess I'll start there tonight.


That's about right.









I scored 19 627 in Time Spy


Intel Core i9-10900 Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10}




www.3dmark.com





I had a GPU score of 21,625 here. A slower average clock speed than your run, but my mem was at +100 MHz with fast timings, which made my score slightly better than yours.

My CPU being 10c/20t is the main thing that brought my overall score up a bit.


----------



## 99belle99

Lavacon said:


> CPU @ 8425
> GPU @ 21534
> 
> Looking low on GPU side when looking at results vs. similar setups on 3Dmarks website.
> 
> My best run: I scored 17 459 in Time Spy
> 
> I did notice my mem clock was only running @2K. Guess I'll start there tonight.


Does that setup have Smart Access Memory enabled? That gives a decent bump in Time Spy score.


----------



## Lavacon

alceryes said:


> What are your GPU temps after heavy gaming for 30+ mins?


Topping out at 91 in TS; it never goes above 65 playing Fortnite in DX11 mode, max quality @ 1440p. Lost Ark runs under 40.

I'm running an aggressive fan curve.



99belle99 said:


> Does that setup have Smart Access Memory enabled? That gives a decent bump in Time Spy score.


My old ass Mobo finally got a beta bios update supporting this. Just haven't tried it yet.
Update: Running SAM now. It didn't really do much for my score, but now that I have it, it can't hurt.


----------



## CS9K

Scorpion667 said:


> TimeSpy score: 22593, GPU 25557
> 
> GPU: 6900XT Toxic EE (XTXH, 360mm AIO)
> GPU Clock: 2802Mhz
> VRAM Clock: 2164
> Driver: 22.1.2
> Wattman voltage: 1080mv
> Power Limit: 420W
> *Hotspot Temp: 92C*
> GPU Temp: 50c
> Ambient: 21c
> Paste: GC Extreme, spread with card, thick, tightened screws in x-pattern half turn at a time, medium mounting pressure
> Having a hard time with the hotspot otherwise I think this card can do better.


Howdy! I've been away from the forum for a bit over a week, and just got caught up with the thread. I'm glad my post could be of help!

That said, for the clock speed and power limit that you've given your GPU, a 92C hotspot temp is right about where it should be.

Look at it this way: You're pulling probably 350W through _just_ the GPU core. Gelid GC-Extreme is probably the best chance you have at getting your temps as low as they can go without using liquid metal and/or sub-ambient cooling.

For a data-point, my reference RX 6900 XT wearing an EK block, 400W power limit, 2700MHz, hotspot gets up around 85 at its absolute warmest in benchmarks. It got into the high 80's at 420W. 

Your GPU is performing exactly as it should with an AIO. There's nothing wrong with your AIO nor GPU nor paste-job. Knowing what you've told us: it's just a matter of thermodynamics... 350W is a LOT of power for the (relatively) tiny RDNA2 GPU core to use.


----------



## J7SC

CS9K said:


> (...)
> 
> That said, for the clock speed and power limit that you've given your GPU, a 92C hotspot temp is right about where it should be.
> 
> Look at it this way: You're pulling probably 350W through _just_ the GPU core. Gelid GC-Extreme is probably the best chance you have at getting your temps as low as they can go without using liquid metal and/or sub-ambient cooling.
> 
> For a data-point, my reference RX 6900 XT wearing an EK block, 400W power limit, 2700MHz, hotspot gets up around 85 at its absolute warmest in benchmarks. It got into the high 80's at 420W.


Not that max Hotspot of 85 C is in the danger zone, far from it, but I'm a bit surprised at that level with a full-cover block. It's not a 'race', but here are my typical temps with similar peak power and clocks...it all sits in an oversized custom loop though. What rad-space + fans are you running ?


----------



## mastertrixter

Ok guys, I'm either missing something or my card is just maxed out. I have a PowerColor Red Devil 6900XT running a full-cover Alphacool waterblock.
It's set to 2600 MHz core and 2100 mem with fast timings in Radeon software. Voltage maxed at 1175 mV. Power limit set to +15%.

MPT is set with 325 W and 350 A TDC. The card will pull around 400 W occasionally during benches. Temps stay around 57 °C core and 80 °C hotspot.

I cannot seem to go any higher on the core clock; 2625 will crash. Any suggestions to push it a little further?

With my 12700K running at 5 GHz P-cores and 4 GHz E-cores, I'm currently pulling 21500 in Time Spy. Hoping to get over 22k.
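For what it's worth, the way the MPT base limit and the Wattman +% slider combine is usually understood like this (a sketch of the common interpretation in this thread; exact driver behavior may vary):

```python
def effective_power_limit_w(mpt_base_w: float, slider_pct: float) -> float:
    """The Wattman power slider scales the (MPT-overridden) base limit."""
    return mpt_base_w * (100 + slider_pct) / 100

# 325 W in MPT with the +15% slider:
print(effective_power_limit_w(325, 15))  # 373.75
# Occasional draw above this (the ~400 W spikes mentioned) is normal:
# the limit governs sustained telemetry power, not instantaneous peaks.
```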


----------



## alceryes

Lavacon said:


> Topping out at 91 in TS, never goes above 65 playing Fortnite in dx11 mode, max quality @ 1440. Lost Arc runs under 40.


I know this is very specific, but do you happen to have Pathfinder: Kingmaker (or the newer Wrath of the Righteous)? I'd love to see what your hotspot temps are after running it at max settings for 60+ mins. My HS temp gets up to 99 °C according to HWiNFO64.


----------



## Godhand007

mastertrixter said:


> Ok guys, I'm either missing something or my card is just maxed out. I have a PowerColor Red Devil 6900XT running a full-cover Alphacool waterblock.
> It's set to 2600 MHz core and 2100 mem with fast timings in Radeon software. Voltage maxed at 1175 mV. Power limit set to +15%.
> 
> MPT is set with 325 W and 350 A TDC. The card will pull around 400 W occasionally during benches. Temps stay around 57 °C core and 80 °C hotspot.
> 
> I cannot seem to go any higher on the core clock; 2625 will crash. Any suggestions to push it a little further?
> 
> With my 12700K running at 5 GHz P-cores and 4 GHz E-cores, I'm currently pulling 21500 in Time Spy. Hoping to get over 22k.


What do you mean by crash: TDR, system reboot, black screen, or is it a case of Time Spy just erroring out? If it's a TS error, it might be a lack of power.


----------



## CS9K

J7SC said:


> Not that max Hotspot of 85 C is in the danger zone, far from it, but I'm a bit surprised at that level with a full-cover block. It's not a 'race', but here are my typical temps with similar peak power and clocks...it all sits in an oversized custom loop though. What rad-space + fans are you running ?
> View attachment 2548001


I do appreciate that it does seem a bit high with only a single data point, but 85C is the absolute maximum I have seen it in recent weeks, and only gets that high _at_ 375-400W of card power draw, and only in Time Spy benchmark. Daily-driving never regularly gets above 80C.

I _am_ also a few weeks short of 1 year since I last pasted the GPU. The Gelid GC-Extreme that I preach about, isn't immune to pump-out, but is merely _really_ resistant to pump-out. 85C is _about_ 5-7C over what temps were right after pasting a year ago (and core is up 3-5C over the year's time, too).


----------



## ptt1982

Quick update on new drivers, 22.2.1:

-Max OC had to be lowered by 70 MHz, just as with the previous January driver: 2650 MHz --> 2580 MHz, and it's barely stable
-TS score dropped by 1030 points: 23500 --> 22470
-TS froze once on default settings
-Some games seem less stable than before (Halo Infinite and Forza Horizon 5)

My guess is that they simply added game support on top of the crappy drivers they put out in early January and left everything else broken until they release a non-optional update.
Avoid at all costs unless you play the latest games.

(Tested on a vanilla Red Devil 6900XT under a custom loop; all settings fully defaulted first, then MPT 350 W + 15%, which is my usual test run.)


----------



## mastertrixter

Godhand007 said:


> What do you mean by crash: TDR, system reboot, black screen, or is it a case of Time Spy just erroring out? If it's a TS error, it might be a lack of power.


A TS error, usually. Do you think a 1000 W PSU isn't enough for this card at 400 W?


----------



## Godhand007

mastertrixter said:


> A TS error, usually. Do you think a 1000 W PSU isn't enough for this card at 400 W?


I was talking about the PL limit on the card.


----------



## mastertrixter

Godhand007 said:


> I was talking about the PL limit on the card.


Oh, you mean increase the power limit in MPT? What's the max I can safely throw at this thing?


----------



## Godhand007

mastertrixter said:


> oh you mean increase the power limit in MPT? What’s the max I can safely throw at this thing?


I have tested 450w on mine.


----------



## mastertrixter

Godhand007 said:


> I have tested 450w on mine.


was that what you set it at in MPT? And was that the watts or the tdc?

thanks for your help btw.


----------



## Godhand007

mastertrixter said:


> was that what you set it at in MPT? And was that the watts or the tdc?
> 
> thanks for your help btw.


Just watts.


----------



## CfYz

ptt1982 said:


> Avoid at all cost unless you play the latest games.


So reverting back to older version (which number btw?) - helped?


----------



## mando10

jonRock1992 said:


> Just a heads up. I tried flashing the LC vbios in Linux on my 6900 XT Red Devil Ultimate. It flashed successfully, but my system would not post with it. I would only try flashing it if you're familiar with an external flashing tool or you have a dual bios switch. Luckily I had a dual bios switch so all I had to do was flip my switch, boot back into Linux, and then flip the switch back to the incompatible vbios before flashing my backup.


Just flashed my Red Devil Ultimate to the LC BIOS in Linux and everything is working fine in Windows 11.


----------



## jonRock1992

mando10 said:


> just flashed my red devil ult to LC bios in Linux and everything is working fine in windows 11


Sweet. I'm pissed because I think my issue is something specific to the Dark Hero motherboard. I contacted @safedisk for some help with the issue, but haven't heard back from him yet. I believe my motherboard is detecting the lack of a USB-C output as an issue, and it's falsely triggering a USB over-current protection.


----------



## mastertrixter

Godhand007 said:


> Just watts.


do you just leave tdc at stock?


----------



## Piro Fyre

mastertrixter said:


> Ok guys. I’m either missing something or my card is just maxed out. I have a power color red devil 6900xt. Running a full cover alphacool waterblock on it.
> Set to 2600mhz and 2100 mem with fast timings in Radeon software. Voltage maxed at 1175mv. Power limit set to 15%.
> 
> mpt is set with 325w and 350tdc. Card will pull around 400w occasionally during benches. Temps stay around 57c core and 80c hotspot.
> 
> I cannot seem to go any higher on the core clock. 2625 will crash. Any suggestions to push it a little further?
> 
> With my 12700k running at 5ghz p core and 4ghz e core I’m currently pulling 21500 in time spy. Hoping to get over 22k.


Try leaving the VRAM settings at stock for once. I know when I first got my ASRock 6900XT Phantom, I set the mem to 2150 max and fast timings right off the bat. Couldn't get the core past 2690 max. Whenever I tried to apply 2700 max, the driver would crash immediately without even benching. Now I'm at 2700-2800 for core and 2112 with fast timings for memory. Everything works great. I know some 6900 XTs out there just don't like 2100+ mem. Lucky for me, mine can do just slightly over 2100.


----------



## mastertrixter

Piro Fyre said:


> Try leaving the VRAM settings alone once. I know when I first got my Asrock 6900XT Phantom, I set the mem to 2150 max and fast timing right off the bat. Couldn't get the core past 2690 max. Whenever I tried to save 2700 max, driver would crash immediately without even benching. Now, I'm at 2700-2800 for core and 2112 and fast timing for memory. Everything works great. I know some 6900 XTs out there just don't like 2100+ mem. Lucky for me, mine can do just slightly over 2100.


I have tried this method, just clocking the core while leaving the memory at stock. It seems to make no difference. Fast timings can cause crashes if I push too much core and over 2100 memory, though. 2112 and fast seems to be the sweet spot.

I have managed to get the core up to 2670 with the power limit set at 400W in MPT. I also have the TDC set at 420. The few guides I've seen say to put the TDC 20-25 above the core wattage; hopefully someone here can confirm whether that is correct.

All of that considered, I've seen spikes as high as 475W for power draw currently. But temps are at 57C core and 79C hotspot max as well.


----------



## Piro Fyre

mastertrixter said:


> I have tried this method, just clocking the core while leaving the memory at stock. It seems to make no difference. Fast timings can cause crashes if I push too much core and over 2100 memory, though. 2112 and fast seems to be the sweet spot.
> 
> I have managed to get the core up to 2670 with the power limit set at 400W in MPT. I also have the TDC set at 420. The few guides I've seen say to put the TDC 20-25 above the core wattage; hopefully someone here can confirm whether that is correct.
> 
> All of that considered, I've seen spikes as high as 475W for power draw currently. But temps are at 57c core and 79c hotspot max as well.


Oh, that sucks. It's sounding more and more like you got a badly binned chip. Wattage really shouldn't be spiking all the way up there. Running Superposition, my core ends up at 2740-2750 with a power draw of 325W, with a 350W spike only once per run. I set MPT to 350 and TDC to 402 since that's what 15% over is. My card is also under a custom loop with 2x 360 EK XE rads, and the hotspot only goes up to 67C when everything normalizes.

If I were you, I'd look into a new card. Don't be like me with my last build and a 1080 Ti FE that also wasn't a well-binned chip. That card would not do over 2000 core for the life of me. It honestly left me feeling defeated and depressed lol. It broke me so hard that I vowed to spend the extra cash on a non-reference design card for once, as my past 10+ GPUs were all reference. It's really funny to think back that a GPU broke me in such a way, but it really did lol.


----------



## mastertrixter

Piro Fyre said:


> Oh, that sucks. It's sounding more and more like you got a badly binned chip. Wattage really shouldn't be spiking all the way up there. Running Superposition, my core ends up at 2740-2750 with a power draw of 325W, with a 350W spike only once per run. I set MPT to 350 and TDC to 402 since that's what 15% over is. My card is also under a custom loop with 2x 360 EK XE rads, and the hotspot only goes up to 67C when everything normalizes.
> 
> If I were you, I'd look into a new card. Don't be like me with my last build and a 1080 Ti FE that also wasn't a well-binned chip. That card would not do over 2000 core for the life of me. It honestly left me feeling defeated and depressed lol. It broke me so hard that I vowed to spend the extra cash on a non-reference design card for once, as my past 10+ GPUs were all reference. It's really funny to think back that a GPU broke me in such a way, but it really did lol.


That's what I am thinking too. That spike I think is OK, because I'm still using +15% in Radeon software too, so that's about where it would cap. But like you said, that's a lot of wattage for a 2675 core clock.

It won't break me by any means, but it is kind of a bummer. May be time to get a different card or snag a 6950 when they drop.


----------



## CS9K

It sounds like you're running out of voltage, @mastertrixter. Make sure your HWINFO64 is up to date, set it to 1000ms sample rate, and reset it before every run. HWINFO64 will actually tell you what the current power limit is via "PPT Limit", so check that too.

@Piro Fyre, keep in mind that those of us using water cooling will run about 30-40W lower power draw on a tuned GPU just due to the lower temps. My reference RX 6900 XT air-cooled would hit 400W before I went water, and only got up to 365W with identical settings after, with no limits being met in either run.


----------



## PanZwu

Hey guys, I had to put 400W+ PPT into my card to get over 2600MHz core clocks.


----------



## aslidop

Afraid I bricked my GPU 😩

Just installed the EK-Quantum Red Devil waterblock and backplate on my Red Devil 6900XT, and now I'm getting no video out. Verified it's the card by installing another GPU and the machine boots right up no problem.

Am I screwed? Is there a common gotcha I missed? I've installed numerous waterblocks over the years and was incredibly careful on this one (I mean, who wants to kill a $1500 GPU?) and don't believe I damaged the card in any way.


----------



## J7SC

aslidop said:


> Afraid I bricked my GPU 😩
> 
> Just installed the EK-Quantum Red Devil waterblock and backplate on my Red Devil 6900XT, and now I'm getting no video out. Verified it's the card by installing another GPU and the machine boots right up no problem.
> 
> Am I screwed? Is there a common gotcha I missed? I've installed numerous waterblocks over the years and was incredibly careful on this one (I mean, who wants to kill a $1500 GPU?) and don't believe I damaged the card in any way.


...sometimes, uneven mounting or overtightening of the block on the die and/or PCB can cause this - either temporarily or permanently. You might want to take the block off and (loosely) mount the original cooler just to test.


----------



## aslidop

J7SC said:


> ...sometimes, uneven mounting or overtightening of the block on the die and/or PCB can cause this - either temporarily or permanently. You might want to take the block off and (loosely) mount the original cooler just to test.


Just took off the block and backplate and reinstalled, still no signal. Guess I'll drain the loop and put the stock cooler back on.


----------



## jonRock1992

aslidop said:


> Just took off the block and backplate and reinstalled, still no signal. Guess I'll drain the loop and put the stock cooler back on.


If it works after putting the stock cooler back on, then maybe your block is preventing it from plugging all the way into the PCI slot on your motherboard. I've had this happen to me on my X570 Dark Hero because the heatsinks around the PCI slot are a little too tall.


----------



## aslidop

jonRock1992 said:


> If it works after putting the stock cooler back on, then maybe your block is preventing it from plugging all the way into the PCI slot on your motherboard. I've had this happen to me on my X570 Dark Hero because the heatsinks around the PCI slot are a little too tall.


You know what? This might be it. The motherboard is a Gigabyte B550 Vision-D and the PCH heatsink is definitely making contact with the GPU waterblock. It _looks_ like it's fully in the PCIe slot, but it's actually at a bit of an angle. Welp...


----------



## Godhand007

CS9K said:


> It sounds like you're running out of voltage, @mastertrixter. Make sure your HWINFO64 is up to date, set it to 1000ms sample rate, and reset it before every run. HWINFO64 will actually tell you what the current power limit is via "PPT Limit", so check that too.
> 
> @Piro Fyre, Keep in mind, that those of us using water cooling, will run about 30-40W lower power draw on a tuned GPU just due to the lower temps. My reference RX 6900 XT air cooled would hit 400W before I went water, and only got up to 365W with identical settings after, with no limits being met in either run.


Can you provide a reference for this: "_will run about 30-40W lower power draw on a tuned GPU just due to the lower temps_"? I mean, an explanation for the phenomenon.


----------



## mastertrixter

CS9K said:


> It sounds like you're running out of voltage, @mastertrixter. Make sure your HWINFO64 is up to date, set it to 1000ms sample rate, and reset it before every run. HWINFO64 will actually tell you what the current power limit is via "PPT Limit", so check that too.



Interesting. I've been using AIDA64 and Radeon software to monitor wattage and temps. They seem to be spot-on with each other.
Should I increase the voltage cap in MPT and see if that helps?


----------



## CS9K

Godhand007 said:


> Can you provide a reference for this: "_will run about 30-40W lower power draw on a tuned GPU just due to the lower temps_"? I mean, an explanation for the phenomenon.


It's a first-hand observation.

As for an explanation: the effect that's causing it is a known effect with silicon, where higher temps mean higher leakage current, and thus higher current draw (and higher total wattage) at the same voltage. It works to the extreme with LN2 cooling; it's part of why GPUs on LN2 can pull insane wattage numbers without turning the GPU core into charcoal.

I don't know that it has a name, but if I find an article or better explanation, I'll post it here and tag you.
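To put a rough number on the effect, here's a toy model: leakage power roughly doubling every ~20 C is a common rule of thumb, and the reference leakage below is an illustrative assumption, not measured 6900 XT data.

```python
# Toy model of temperature-dependent leakage power.
# Assumption: leakage doubles every `doubling_c` degrees C (rule of thumb).

def leakage_power(p_leak_ref_w: float, temp_c: float,
                  temp_ref_c: float = 55.0, doubling_c: float = 20.0) -> float:
    """Leakage power that doubles every `doubling_c` C above the reference temp."""
    return p_leak_ref_w * 2 ** ((temp_c - temp_ref_c) / doubling_c)

# Suppose ~35 W of the draw is leakage at a 55 C water-cooled hotspot.
# At a 75 C air-cooled hotspot, the same settings would leak about twice that,
# in the same ballpark as the 400 W vs. 365 W air/water numbers above:
print(round(leakage_power(35, 75)))  # -> 70
```

Again, the 35 W / 55 C starting point is made up for illustration; the shape of the curve is the point, not the exact watts.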



mastertrixter said:


> Interesting. I’ve been using Aida 64 and Radeon software to monitor wattage and temps. They seem to be spot on with each other.
> Should I increase the voltage cap in MPT and see if that helps?


I would not mess with voltage, no. There is no "safe daily-driver" method to voltage modification, and won't be until we get bios modification control, something that RDNA2 does not have currently.


----------



## Godhand007

CS9K said:


> First-hand observation is where I observed it.
> 
> As for an explanation: the effect that's causing it is a known effect with silicon, where higher temps mean higher leakage current, and thus higher current draw (and higher total wattage) at the same voltage. It works to the extreme with LN2 cooling; it's part of why GPUs on LN2 can pull insane wattage numbers without turning the GPU core into charcoal.
> 
> I don't know that it has a name, but if I find an article or better explanation, I'll post it here and tag you.


Got it, essentially superconductivity at a lower scale. BTW, I wasn't questioning your observations.


----------



## CS9K

Godhand007 said:


> Got it, essentially superconductivity at a lower scale. BTW, I wasn't questioning your observations.


All good, I misread your post on the first pass and had to edit my post. My bad💖


----------



## Godhand007

Hey guys, need help/inputs from all of you. I am on driver 22.2.1. My settings are/were as shown below. These settings had been working fine for 5 days, mostly playing Halo: Infinite. But I started getting black screens (no display, but sound working properly) with the same settings in Halo: Infinite during the lobby screen. As far as I know there have not been any updates or changes to anything. I have reinstalled the drivers multiple times (using DDU) but I am getting the same issue. What could be causing this? If these settings and drivers, or their combination, were an issue, why did it work properly for the last 5 days?

*Note*: I ran 22 loops of TS GT2 on default (No MPT) without any issues.
Halo: Infinite has not been updated in the last 10 days (per Steam).
I can see a "driver recovered..." message in Event Viewer after I reboot from the black screen.

Clocks @ 2850-2750Mhz, 2150Mhz memory.


----------



## CfYz

Godhand007 said:


> Need help/inputs from all of you.


I don't know what causes the issue, but from the MPT screenshot - too low TDC amperes. By default the 6900 XTXH LC OEM has 284W/320A limits (284W is the default shown inside MPT; with max +15%, 326W). If you do the math from those values, then with a starting TDP of 360W you would need 406A for TDC. Your TDC is too low, I think. Don't know if it helps, but you may lower the TDP or raise the TDC and see what happens.

Please if someone know more correct me if I'm wrong...

P.S. I don't have any black screens on 22.2.1 nor any other issues. Normal driver for slight OC or no OC.
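Written out, the math above is just proportional scaling. The 284 W / 320 A stock values are the ones quoted for the 6900 XTXH LC OEM, and the keep-the-stock-ratio rule is this thread's rule of thumb, not an AMD formula; substitute your own card's MPT defaults.

```python
# Estimate a matching TDC for a raised TDP by keeping the stock W:A ratio.
# Stock values below: 6900 XTXH LC OEM as quoted above (assumption for other cards).

STOCK_TDP_W = 284   # stock power limit as shown in MPT (watts)
STOCK_TDC_A = 320   # stock current limit (amps)

def suggested_tdc(target_tdp_w: float) -> float:
    """Scale the TDC in proportion to the chosen TDP."""
    return target_tdp_w * (STOCK_TDC_A / STOCK_TDP_W)

print(round(suggested_tdc(360)))  # -> 406 A for a 360 W TDP
```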


----------



## Godhand007

CfYz said:


> I don't know what causes the issue, but from the MPT screenshot - too low TDC amperes. By default the 6900 XTXH LC OEM has 284W/320A limits (284W is the default shown inside MPT; with max +15%, 326W). If you do the math from those values, then with a starting TDP of 360W you would need 406A for TDC. Your TDC is too low, I think. Don't know if it helps, but you may lower the TDP or raise the TDC and see what happens.
> Please if someone know more correct me if I'm wrong...


Many cards have different TDC values and there is no fixed formula for it (see the screenshot below). I did try ~370 TDC, same issue. The problem is that the same settings were _working before_ and are _not working now_.










> P.S. I don't have any black screens on 22.2.1 nor any other issues. Normal driver for slight OC or no OC.


This is not relevant.


----------



## mastertrixter

CfYz said:


> I don't know what causes the issue, but from the MPT screenshot - too low TDC amperes. By default the 6900 XTXH LC OEM has 284W/320A limits (284W is the default shown inside MPT; with max +15%, 326W). If you do the math from those values, then with a starting TDP of 360W you would need 406A for TDC. Your TDC is too low, I think. Don't know if it helps, but you may lower the TDP or raise the TDC and see what happens.
> 
> Please if someone know more correct me if I'm wrong...
> 
> P.S. I don't have any black screens on 22.2.1 nor any other issues. Normal driver for slight OC or no OC.


22.2.1 forced me to drop clocks by 50MHz or so. I've had the best luck with the non-optional drivers. They seem much more consistent and much less problematic.

Also, did your cooling change? Ambient change? Could it be that you're hitting temp limits?


----------



## Godhand007

Godhand007 said:


> Hey guys, need help/inputs from all of you. I am on driver 22.2.1. My settings are/were as shown below. These settings had been working fine for 5 days, mostly playing Halo: Infinite. But I started getting black screens (no display, but sound working properly) with the same settings in Halo: Infinite during the lobby screen. As far as I know there have not been any updates or changes to anything. I have reinstalled the drivers multiple times (using DDU) but I am getting the same issue. What could be causing this? If these settings and drivers, or their combination, were an issue, why did it work properly for the last 5 days?
> 
> *Note*: I ran 22 loops of TS GT2 on default (No MPT) without any issues.
> Halo : Infinite has not been updated since 10 days (as per steam).
> I can see "driver recovered..." message in Event Viewer after I reboot from black screen.
> 
> Clocks @ 2850-2750Mhz, 2150Mhz memory.
> View attachment 2548477


Yep, happened again (with clocks lowered by 50MHz). Before I assume the worst, is there anything you guys suggest I do to confirm the card is OK? At default (no MPT), TS and Halo seem to run fine.


----------



## Godhand007

mastertrixter said:


> 22.2.1 forced me to drop clocks by 50MHz or so. I've had the best luck with the non-optional drivers. They seem much more consistent and much less problematic.
> 
> Also, did your cooling change? Ambient change? Could it be that you're hitting temp limits?


Temp limits and cooling are the same.


----------



## IIISLIDEIII

I bought a 6900 XT 2 months ago. Initially, overclocked in Time Spy, I always got a graphics score around 24,300/24,400/24,600; my best is 24,800.

From one day to the next my score dropped to around 23,600, and I have never been able to get even close to 24,000 with the same OC.

The card does not give any problems, but I do not understand this decline in Time Spy. Has anyone else had a similar experience?


----------



## greg1184

I just got the ASRock RX 6900 XT Phantom Gaming. So far I am really liking it. Unlike my previous experience with Radeon, the card worked flawlessly out of the box. All the games I have tried ran flawlessly at 4K on stock settings. I look forward to tweaking the card.


----------



## Godhand007

Godhand007 said:


> Yep, happened again (with clocks lowered by 50MHz). Before I assume the worst, is there anything you guys suggest I do to confirm the card is OK? At default (no MPT), TS and Halo seem to run fine.


Happened with 22.1.2 as well.


----------



## mastertrixter

Godhand007 said:


> Temp limits and cooling are the same.


Right, I simply meant that if something changed in his cooling, it could be throttling.


----------



## Scorpion667

Godhand007 said:


> Happened with 22.1.2 as well.


This is a cookie-cutter symptom of an unstable OC, especially considering there are no issues at stock clocks. Does TS crash if you loop tests 1 and 2 at your 2850MHz OC for ~15 mins?

If it crashes, I would try the settings below, in order:
-Remove VRAM OC. Lower GPU OC by 150Mhz and retest
-Increase GPU OC 50Mhz and retest
-Increase GPU OC 25Mhz and retest
-Add back VRAM OC and retest

Normally I would say go back to Recommended driver (21.10.2) but it's from October and I think patch notes indicated increased performance in Halo Infinite in later updates. Probably better to stay on the later drivers in your case.

As others have said, drivers can lower stable OC clock speeds on these and you're at a higher OC than most. Perhaps in your case the "lower clock" drivers require -75Mhz on your sample vs -50Mhz for someone running 2750Mhz.
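The retest ladder above can be sketched as code: back off well below the crashing clock, then step up in shrinking increments, retesting each time. `is_stable` here is a stand-in for "loop Time Spy GT1/GT2 for ~15 min at this clock, VRAM OC removed" - a hypothetical hook, not a real 3DMark API.

```python
# Sketch of the OC retest ladder: -150 MHz first, then probe upward
# in +50 and +25 MHz steps, keeping the last clock that passes the test.

def find_stable_clock(crashing_mhz: int, is_stable) -> int:
    clock = crashing_mhz - 150          # step 1: lower GPU OC by 150 MHz
    for step in (50, 25):               # steps 2-3: probe upward in 50, then 25 MHz
        while is_stable(clock + step):
            clock += step
    return clock                        # step 4: re-add the VRAM OC and retest here

# Example with a hypothetical card that is actually stable up to 2775 MHz:
print(find_stable_clock(2850, lambda mhz: mhz <= 2775))  # -> 2775
```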


----------



## Godhand007

Scorpion667 said:


> This is a cookie cutter symptom of unstable OC especially considering no issues at stock clocks. Does TS crash if you loop tests 1 and 2 on your 2850Mhz OC for ~15 mins?
> 
> If it crashes I would try below settings in order:
> -Remove VRAM OC. Lower GPU OC by 150Mhz and retest
> -Increase GPU OC 50Mhz and retest
> -Increase GPU OC 25Mhz and retest
> -Add back VRAM OC and retest
> 
> Normally I would say go back to Recommended driver (21.10.2) but it's from October and I think patch notes indicated increased performance in Halo Infinite in later updates. Probably better to stay on the later drivers in your case.
> 
> As others have said, drivers can lower stable OC clock speeds on these and you're at a higher OC than most. Perhaps in your case the "lower clock" drivers require -75Mhz on your sample vs -50Mhz for someone running 2750Mhz.


Is it, though? The original settings (2850MHz) had been working fine for 5 days, mostly playing Halo: Infinite with 22.2.1. Before that, the same settings were working fine with 22.1.2 as well. I tried lowering clocks by 50MHz, still the same issue. An unstable OC would usually give a TDR/driver reset. This black screen (sound still working), with the driver then not loading after restart, is something I have seen for the first time since I bought the card more than a year ago.

I am hugely PL-limited in default TS loops. But I ran a custom TS loop on GT1 for 18 runs and it worked fine.


----------



## WR-HW95

Godhand007 said:


> Is it, though? The original settings (2850MHz) had been working fine for 5 days, mostly playing Halo: Infinite with 22.2.1. Before that, the same settings were working fine with 22.1.2 as well. I tried lowering clocks by 50MHz, still the same issue. An unstable OC would usually give a TDR/driver reset. This black screen (sound still working), with the driver then not loading after restart, is something I have seen for the first time since I bought the card more than a year ago.
> 
> I am hugely PL-limited in default TS loops. But I ran a custom TS loop on GT1 for 18 runs and it worked fine.


So you ran the GPU at 1.28V until it went unstable at the set OC?
That should tell you what's going on.


----------



## Godhand007

WR-HW95 said:


> So you ran the GPU at 1.28V until it went unstable at the set OC?
> That should tell you what's going on.


Not 1.28V; the actual voltage is ~1.23V. It does indicate what has happened but does not confirm it, hence the questions. Cards usually go kaput or they don't. If an OC works before and then does not (everything else being the same), then it is a highly unusual scenario, at least in my experience.


----------



## WR-HW95

Godhand007 said:


> Not 1.28V; the actual voltage is ~1.23V. It does indicate what has happened but does not confirm it, hence the questions. Cards usually go kaput or they don't. If an OC works before and then does not (everything else being the same), then it is a highly unusual scenario, at least in my experience.


You should find out what your currently stable clocks are.
Usually chips start to degrade from excessive voltage well before dying, as long as the voltage fed to them isn't at an absurd level.
I have killed a GF2 card with both. When I was running it overvolted daily, it degraded over time, and on the last 3DMark try the voltage I set was too much and it died.


----------



## Godhand007

WR-HW95 said:


> You should find out what your currently stable clocks are.
> Usually chips start to degrade from excessive voltage well before dying, as long as the voltage fed to them isn't at an absurd level.
> I have killed a GF2 card with both. When I was running it overvolted daily, it degraded over time, and on the last 3DMark try the voltage I set was too much and it died.


Right now, it seems anything that was previously working without a voltage increase is still working fine (around 2550MHz). I understand chip degradation, but again, shouldn't it be dead like in your 3DMark example? Or did you also see signs of degradation like lower clocks, instability and driver crashes long before your GPU actually died?

Though I have to say, if it is confirmed degradation, then it was fast, not even ~2 months.


----------



## WR-HW95

Godhand007 said:


> Right now, it seems anything that was previously working without a voltage increase is still working fine (around 2550MHz). I understand chip degradation, but again, shouldn't it be dead like in your 3DMark example? Or did you also see signs of degradation like lower clocks, instability and driver crashes long before your GPU actually died?


Degradation shows up as instability, which means that to get the chip stable again it needs to run at a lower clock or with increased voltage (a bad choice). I'm no pro on this, because I have since tried to avoid it by getting temperatures as low as I can before overclocking. So how it relates to stock settings vs. OC with extra voltage, I can't say.
I had an Asus GTX 980 Poseidon that I ran 3DMark on once with 1.5V+ Vcore, and it lost all OC capability at stock Vcore, but I never had a chance to properly test it before that run.


----------



## Godhand007

WR-HW95 said:


> Degradation shows up as instability, which means that to get the chip stable again it needs to run at a lower clock or with increased voltage (a bad choice). I'm no pro on this, because I have since tried to avoid it by getting temperatures as low as I can before overclocking. So how it relates to stock settings vs. OC with extra voltage, I can't say.
> I had an Asus GTX 980 Poseidon that I ran 3DMark on once with 1.5V+ Vcore, and it lost all OC capability at stock Vcore, but I never had a chance to properly test it before that run.


To unpack things:

1. "Degradation shows up as instability, which means that to get the chip stable again it needs to run at a lower clock or with increased voltage (a bad choice)": this might apply to my situation, as I am getting instability with the same settings that worked before. Did you encounter this scenario yourself?
2. "I had an Asus GTX 980 Poseidon that I ran 3DMark on once with 1.5V+ Vcore...": my voltage was high but not absurd to the level of 200-300mV. It was around ~60mV (actual) over default.
3. "...and it lost all OC capability at stock Vcore, but I never had a chance to properly test it before that run": so was it working fine with default settings even after your application of 1.5V+ Vcore?


----------



## alceryes

To add to the above, temps are not the only enemy.
Not saying you are doing this, but I see it all the time on other forums: a poster describes textbook CPU/GPU degradation and then refuses to entertain the idea that their CPU or GPU is starting to degrade or fail because 'temps have always been normal or low'.

You could have perfectly normal (even cool) temps, 100% of the time, and still have a GPU show signs of degradation. You don't even need to be overclocking/overvolting. GPUs are extremely complex. Everything from an SMD or an SMD solder joint, to some of the _*billions*_ of transistors in the core itself, could be failing or having slight switching issues.


----------



## Godhand007

alceryes said:


> To add to the above, temps are not the only enemy.
> Not saying you are doing this, but, I see it all the time on other forums where a poster describes textbook CPU/GPU degradation and then refuses to entertain the idea that their CPU or GPU is starting to degrade or fail because 'temps have always been normal or low'.
> 
> You could have perfectly normal (even cool) temps, 100% of the time, and have a GPU show signs of degradation. You don't even need to be overclocking/overvolting. GPU's are extremely complex. Everything from an SMD or SMD solder joint, to some of the _*billions*_ of transistors in the core itself, could be failing or having slight switching issues.


"describes textbook CPU/GPU degradation " ; Can you give some proper examples of this or even your own anecdotes?


----------



## alceryes

The most basic example of possible CPU/GPU degradation would be something along the lines of - 
"Worked before. No changes have been made. Doesn't work now."

If overclocking, the following sentence can also be added - 
"Worked before. Doesn't work now. If I lower my overclocks/overvolts/timings adjustment it works again."

My main point isn't pointing a finger; it's that low temps don't exclude the possibility of failures or degradation happening. You seem to be very analytical and are going through the possibilities. I just wanted to say that degradation/failure doesn't get removed from the table of possibilities just because temps are fine.


----------



## Godhand007

alceryes said:


> The most basic example of possible CPU/GPU degradation would be something along the lines of -
> "Worked before. No changes have been made. Doesn't work now."
> 
> If overclocking, the following sentence can also be added -
> "Worked before. Doesn't work now. If I lower my overclocks/overvolts/timings adjustment it works again."


Second one certainly applies to my situation. I guess only few more anecdotes and/or confirmations are needed to come to a conclusion.



> My main point isn't pointing a finger, it's that low temps don't exclude the possibility of failures or degradation happening. You seem to be very analytical and are going through the possibilities. I just wanted say that degradation/failure doesn't get removed from the table of possibilities just because temps are fine.


Makes sense. It's kind of like having a 360mm AIO on your GPU with excellent cooling: if you are feeding it 1.4 volts, don't be surprised if things stop working after some time.


----------



## alceryes

Godhand007 said:


> Makes sense. It's kind of like having a 360mm AIO for your GPU with excellent cooling, but if you are feeding it 1.4 volts don't be surprised if things don't work after some time.


Exactly.


----------



## CS9K

Godhand007 said:


> Not on 1.28v, actual v is ~1.23v. It does indicate what has happened but does not confirm it, hence the questions. Cards usually go kaput or not. If an OC works before and then does not (everything else being same) then it is an highly unusual scenario, at least in my experience.


Not so unusual with RDNA2 GPUs. Looking through your posts, it appears that you're simply running out of voltage for the clock speed... let me explain.

There are two different things going on here, I think:

- Max overclock stability differs from one driver to the next. That's just a thing with AMD drivers, has been since RDNA2 release. It's silly, it's different, but it _is_ a thing, so get used to it.

- Time Spy Graphics Test 2, and Port Royal to a lesser extent, will tell you if a clock speed setting is stable or not. Yes, both programs use a LOT of power. 

Way back in this thread, there were some who claimed "I'm only stable up to x speed in Time Spy, but x+150MHz works just fine in other games." To those people, I made the counter-point that their x+150MHz _is not actually stable_; they just had not crashed _yet_. I experienced this myself, and never set my core clock speed faster than what is stable in Time Spy GT2.

I think what you are experiencing is a version of this situation. Because you have not tested the extreme clock speeds you're running your GPU core at, you don't actually _know_ whether they are stable. And because you haven't/can't test them in Time Spy/Port Royal due to thermal and/or power limits, you don't have an objective maximum clock speed that you _know_ is stable. So you end up setting the clock speed too high for your voltage, and you crash.


----------



## Godhand007

CS9K said:


> Not so unusual with RDNA2 GPU's. Looking through your posts, it appears that you're simply running out of voltage for the clock speed... let me explain.
> 
> There's two different things going on here I think:
> 
> - Max overclock stability differs from one driver to the next. That's just a thing with AMD drivers, has been since RDNA2 release. It's silly, it's different, but it _is_ a thing, so get used to it.


This is not relevant to my situation and it has been said many times. I don't have any disagreements on this point.



> - Time Spy Graphics Test 2, and Port Royal to a lesser extent, will tell you if a clock speed setting is stable or not. Yes, both programs use a LOT of power.
> 
> Way back in this thread, there were some that claimed "I'm only stable up to x speed in Time Spy, but x+150MHz "works just fine" in other games. To those people, I made the counter-point that, "your x+150MHz _is not actually stable_, they just had not crashed _yet_". I experienced this myself, and never set my core clock speed faster than it is stable in Time Spy GT2.
> 
> I think what you are experiencing is a version of this situation. Because you have not tested the extreme clock speeds that you're running your GPU core at, you don't actually _know_ if it is stable or not, and because you haven't/can't test it in Time Spy/Port Royal due to thermal and/or power limits, you don't have an objective "maximum clock speed" that you _know_ is your maximum stable clock speed, so you're setting the clock speed too high for your voltage, and you crash.


So, if you have a paid version of TS, you can select a lower resolution and run TS with those clocks. Selecting a lower resolution limits your power consumption and allows higher clocks. I ran about 18 loops of GT1 with clocks 50 MHz lower and they ran fine, yet I still faced the same issue with those clocks in-game. The unique point of my situation is that these settings were working fine before and are now consistently crashing with the same game settings, game scene, driver and temps.


----------



## jonRock1992

Godhand007 said:


> This is not relevant to my situation and it has been said many times. I don't have any disagreements on this point.
> 
> 
> 
> So, if you have a paid version of TS, you can select a lower resolution and run TS with those clocks. Selecting lower resolution limits you power consumption and allows higher clocks. I ran bout 18 loops of GT1 with 50 Mhz lower clocks and they ran fine, yet I still faced the same issue with these clocks in-game. The unique point of my situation is that these settings were working fine before and they are consistently crashing now with same game settings, game scene, driver and temps.


So what exactly have you done differently between being stable and not stable, if anything at all?


----------



## Godhand007

jonRock1992 said:


> So what exactly have you done differently between being stable and not stable, if anything at all?


Nothing! And that is why I am so perplexed. It seems that this might be a case of chip degradation, in which case it is important for most of us to know/confirm.


----------



## alceryes

I just discovered something VERY interesting. Zero RPM fans have been a thing for over a decade, but I didn't realize how much of a detriment they can be to GPU temps.
Zero RPM fan settings are the enemy!

BY DEFAULT, the Radeon software had my GPU fans completely off until around 62ºC (guessing standard GPU temp sensor). Now I believe that, to go from zero RPM to say 60% fan RPM probably takes around 2 seconds. Next, for the airflow to start having a positive effect on the heatsink/radiator (cooling it down) probably takes another 2-3 seconds. So we have a span of about 5 seconds where the cooling potential of my reference cooler is not realized and the GPU is allowed to get hotter than the cooling is _supposed to_ allow.

Previously, my GPU hotspot ALWAYS peaked at 99ºC in a particular game. It was 100% repeatable: after playing this game for any length of time, HWiNFO64 would report a hot spot temp of 99ºC. I am now fairly certain that the zero RPM setting is the culprit for this burst temp reading! I just set a very lazy/low fan curve (barely audible even with headphones off) and now, in the same game, the hot spot temp peaks at 78ºC. A full 21ºC lower!!! I am completely floored by this result! I want this card to last 4+ years, but Radeon's own stock driver settings have the card getting WAAAY hotter than it should (even if it's just a 5 second burst temp), possibly lowering the lifespan of the card.
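To see roughly how much that spin-up window matters, here's a toy lumped-thermal model. Every number in it is invented for illustration (not measured from any card): it just integrates a fixed heat input against cooling that only engages a few seconds after the trigger temperature is crossed.

```python
# Toy lumped-thermal model of the zero-RPM spin-up window.
# Every number here is an invented illustration, not measured card data.

def peak_temp(fan_delay_s, ambient=30.0, heat_c_per_s=4.0,
              cool_c_per_s=5.5, t_trigger=62.0, sim_s=60.0, dt=0.1):
    """Return the hottest temperature reached during the run.

    The die heats at heat_c_per_s under load. Cooling only engages
    fan_delay_s seconds after t_trigger is crossed (fan spin-up plus
    time for the airflow to bite), then removes cool_c_per_s.
    """
    temp, t, cooling_from = ambient, 0.0, None
    peak = temp
    while t < sim_s:
        if cooling_from is None and temp >= t_trigger:
            cooling_from = t + fan_delay_s          # fans start ramping now
        cooling = cool_c_per_s if (cooling_from is not None and t >= cooling_from) else 0.0
        temp = max(ambient, temp + (heat_c_per_s - cooling) * dt)
        peak = max(peak, temp)
        t += dt
    return peak

always_on = peak_temp(0.0)   # fan curve active from the trigger point onward
zero_rpm = peak_temp(5.0)    # ~5 s of unchecked heating past the trigger
print(f"always-on peak: {always_on:.0f}C, zero-RPM peak: {zero_rpm:.0f}C")
```

With these made-up rates, five seconds of unchecked heating produces a burst peak roughly 20ºC above the trigger temperature, which is about the size of the gap I saw between 99ºC and 78ºC.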

Unfortunately, I have to run into the city and will be there for a few days, so I won't be able to test further, but holy S$%#!!! Does anyone else here run with a zero RPM fan setting?


----------



## CS9K

Godhand007 said:


> Nothing ! And that is why I am so perplexed. It seems that this might be case of chip degradation, in which case, this is important for most of us to know/confirm.


I saw it mentioned already, that chip degradation is usually an all-or-nothing thing, so I won't elaborate further on that front.

I'm not sure what you're doing differently now, vs then, if anything, but I still would suspect that your GPU was never completely stable to begin with, at the higher of the clock settings that you've used.

And as for Time Spy GT2 and/or Port Royal: I do not change any settings from the defaults when I use each of those programs to stability test. If you lack the paid versions, then the next-best-thing I've found for stability testing is Heaven. 

Yes, heaven. Hear me out:

Sure, Heaven is _ancient_ so far as benchmarking programs go, but with the right settings, it is still capable of pushing one's GPU core to its absolute max, without running into power nor memory limitations, which is all one could ask when testing GPU-core-clock stability.

Use the settings below, *exactly* as shown, with your proposed overclock settings, and turn Heaven loose for an hour. If it makes it one hour, the settings that you used _should_ be stable in non-Ray Traced games. Knock 50MHz off of your GPU core clock and it _should_ be stable for Ray Traced games. 









If you crash before one hour has elapsed, then you have probably run out of voltage for the currently-set GPU core clock speed. Drop it by 25Mhz and try again.
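The routine above can be written out explicitly. This is just a sketch: `run_for_an_hour` is a stand-in for however you launch Heaven and detect a crash, not a real API.

```python
# Sketch of the "drop 25 MHz and try again" loop described above.
# run_for_an_hour is a placeholder callback: it should launch a
# one-hour Heaven run at the given max core clock and return True
# only if the run finishes without crashing. It is NOT a real API.

def find_stable_clock(start_mhz, run_for_an_hour, step=25, floor=2000):
    """Walk the max core clock down in `step` MHz decrements until a
    full one-hour run passes, then return that clock."""
    clock = start_mhz
    while clock >= floor:
        if run_for_an_hour(clock):    # survived the hour: call it stable
            return clock
        clock -= step                 # crashed: knock off 25 MHz, retry
    raise RuntimeError(f"no stable clock found above {floor} MHz")

# Example with a fake tester that is only "stable" at or below 2600 MHz:
print(find_stable_clock(2700, lambda mhz: mhz <= 2600))
```

Whatever clock this lands on, apply the same 50 MHz margin mentioned above before trusting it in ray traced games.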


----------



## nucl3arbong

I have a Red Devil Ultimate and was wondering if using MPT is the only way to undervolt the GPU, or can I do it within the AMD software by limiting power? Also, can someone direct me to a guide for things to look for when OC'ing my card? I've been running the core as high as 2730-ish (usually with the standard 100 MHz gap between min and max), and 2200 on memory (I have run as high as 2225 with no issues), with fast timings and power limits maxed. I would just like to find the best sweet spot for gaming, and a little bit of benching. As is, I already max my FPS at 1440p, but I do like seeing/comparing bench scores.


----------



## WR-HW95

alceryes said:


> To add to the above, temps are not the only enemy.
> Not saying you are doing this, but, I see it all the time on other forums where a poster describes textbook CPU/GPU degradation and then refuses to entertain the idea that their CPU or GPU is starting to degrade or fail because 'temps have always been normal or low'.
> 
> You could have perfectly normal (even cool) temps, 100% of the time, and have a GPU show signs of degradation. You don't even need to be overclocking/overvolting. GPU's are extremely complex. Everything from an SMD or SMD solder joint, to some of the _*billions*_ of transistors in the core itself, could be failing or having slight switching issues.


This is also true. In my experience, chips will degrade over time anyway.
I have two Galaxy GTX 980 HOF cards and I ran GPUGrid for about 2.5 years with watercooling. Temps never went over 42 degrees, but after I took them off the loop and put the original coolers back on, both cards overclocked worse than they did when new. In fact, the higher-binned V1 card, which had a 1431 MHz factory OC, had to be underclocked to the level of the V2 card (1380 MHz) to be stable.


----------



## Godhand007

CS9K said:


> I saw it mentioned already, that chip degradation is usually an all-or-nothing thing, so I won't elaborate further on that front.


That's what I think but can't confirm. So you are also of the view that chip degradation is an all or nothing thing?



> I'm not sure what you're doing differently now, vs then, if anything, but I still would suspect that your GPU was never completely stable to begin with, at the higher of the clock settings that you've used.


Again, nothing different from previous settings. It might be the case that the GPU was never stable, but for more than 2 months it ran fine across different drivers. Now, with the same drivers, same settings, same game and same scene, the issue is consistently reproducible.



> And as for Time Spy GT2 and/or Port Royal: I do not change any settings from the defaults when I use each of those programs to stability test. If you lack the paid versions, then the next-best-thing I've found for stability testing is Heaven.


I do have the paid version and that's how I tested 18 GT1 loops (at a lower resolution so that clocks can stay high). Can you confirm one thing though: there is a point in GT2 with massive power consumption where clocks dip for most people. Do you get the same clock rates throughout a GT2 run, from beginning to end?



> Yes, heaven. Hear me out:
> 
> Sure, Heaven is _ancient_ so far as benchmarking programs go, but with the right settings, it is still capable of pushing one's GPU core to its absolute max, without running into power nor memory limitations, which is all one could ask when testing GPU-core-clock stability.
> 
> Use the settings below, *exactly* as shown, with your proposed overclock settings, and turn Heaven loose for an hour. If it makes it one hour, the settings that you used _should_ be stable in non-Ray Traced games. Knock 50MHz off of your GPU core clock and it _should_ be stable for Ray Traced games.
> View attachment 2548681
> 
> 
> If you crash before one hour has elapsed, then you have probably run out of voltage for the currently-set GPU core clock speed. Drop it by 25Mhz and try again.


Will do. This type of exact testing methodology and process is what's missing from GPU overclocking.

Edit: One important thing to note here is the way the GPU fails: the screen goes black, while sound and the CPU (shutdown commands) keep working. After a restart the driver does not load and the card is shown as disabled. I have to enable the card and restart the system for everything to go back to normal. I have never encountered this way of failing with a GPU before.


----------



## IIISLIDEIII

As I wrote before, it is really possible that there is some degradation of the card.
I was steadily scoring 24,500 graphics in Time Spy; now I don't even reach 24,000 anymore. Something has changed in the GPU.


----------



## Godhand007

nucl3arbong said:


> I have a Red Devil Ultimate and was wondering if using MPT is the only way to undervolt the gpu, or can I do it within Amd software by limiting power? Also can someone direct me to a guide for things to look for when Oc'n my card? Been running the core as high as 2730ish (usually with the standard 100 between low-high), and 2200 on memory (Have run as high as 2225 with no issues). Also with fast timings and power limits maxed. I would just like to find the best sweet spot for gaming, and a little bit of benching. I mean as is I already max my fps at 1440p, but do like seeing/comparing bench scores.


Good cooling can improve your clocks by a small amount, which could explain the discrepancies with your OC.


----------



## Godhand007

IIISLIDEIII said:


> as I wrote before, it is really possible that there is some degradation of the card.
> I was steadily doing 24,500 graphics score on timespy, now I don't even get to 24000 anymore, something has changed in the gpu.


Could be due to newer drivers or many updates that TS has received.


----------



## Scorpion667

Godhand007 said:


> That's what I think but can't confirm. So you are also of the view that chip degradation is an all or nothing thing?
> 
> 
> Again nothing different from previous settings. It might be a case that GPU was not stable but for more than 2 months it ran fine with different drivers. Now with same drivers, same settings, same game ,same scene issue is reproducible consistently
> 
> 
> I do have the paid version and that's how I tested 18 GT1 loops (with lower resolution so that clocks can stay high). Can you confirm one thing though, there is point in GT2 where there is massive power consumption and clocks dip below for most people. Do you get same clocks rates throughout GT2 run from beginning to end?
> 
> 
> Will do. This type of exact testing methodology and process is what's missing from GPU overclocking.
> 
> Edit : One important thing to not here is the way the GPU fails; The screen goes black, sound and CPU (shutdown commands) work fine. After restart the driver does not load and the card is shown as disabled. I have to enable the card and restart the system for everything to go back to normal. I have never encountered such a unique way of GPU failure with this card before.


Found a post where a guy claimed that raising VCCIO/VCCSA on his 8086k from 1.2v to 1.3v fixed his 6800XT black screen issue. If you're still on 8700k maybe give that a try:


https://www.reddit.com/r/Amd/comments/lfgzst

Now that I understand your symptom better I must admit it's very unusual. I would maybe also try:
-reseating the card (push it in until you see the PCIe tab lock)
-reseating the PCIe power cables on both ends
-fresh Windows 10 21H2 install on a spare drive; just install the GFX driver, Windows updates and Halo, and retest with your OC. Don't let any auto-installs like Razer Synapse or Corsair iCUE go through.

Have you tried checking with the vendor if there's anything they can do for you? I've seen some Reddit threads where MSI 6800XT users were plagued with black screens and eventually the vendor published a vBIOS which fixed that:


https://www.reddit.com/r/Amd/comments/lh626e


----------



## Godhand007

Scorpion667 said:


> Found a post where a guy claimed that raising VCCIO/VCCSA on his 8086k from 1.2v to 1.3v fixed his 6800XT black screen issue. If you're still on 8700k maybe give that a try:
> 
> 
> __
> https://www.reddit.com/r/Amd/comments/lfgzst
> 
> Now that I understand your symptom better I must admit it's very unusual. I would maybe also try:
> -reseating card (push it till you see there PCIe tab lock)
> -reseating the PCIe power cables on both sides
> -fresh Windows 10 21H2 install on spare drive, just install GFX driver, win updates, halo and retest with your OC. Don't let any auto installs like Razer Synapse, Corsair iCUE go through.
> 
> Have you tried checking with the vendor if there's anything they can do for you? I've seen some Reddit threads where MSI 6800XT users were plagued with black screens and eventually vendor published a vBIOS which fixed that:
> 
> 
> __
> https://www.reddit.com/r/Amd/comments/lh626e


Thanks for checking out my posts and finding these resources; unfortunately, they are not relevant to my situation. The first one is for a hard lock-up, which does not happen in my case. My SA is set to 1.3 (manual) and I have run TM5 for multiple hours, multiple times, without any issues. I have moved on to a 12700K BTW. The second one discusses issues similar to mine, but again the symptoms (for the few posts that I have read) are not the same as mine. I can do a deep dive on the MSI thread, but Sapphire (the manufacturer) hasn't delivered a BIOS update for my card, from what I can gather at least.

I am going to try your suggestions one by one, beginning with the spare Windows 10 install on my SATA M.2, and see if that helps. I do have some other things to try first though.


----------



## nucl3arbong

Godhand007 said:


> Good cooling can improve you clocks buy a small amount which could explain the discrepancies with your OC.


Discrepancies? The card stays pretty cool, with my junction only hitting 70-73°C in a long gaming session.


----------



## Godhand007

So I am trying to load MPT on my older Windows 10 M.2 SATA drive, but the dropdown is empty. I tried reinstalling MPT and reinstalling drivers, but nothing seems to be working. What is the (probably obvious) thing that I am missing here?


----------



## Godhand007

> Use the settings below, *exactly* as shown, with your proposed overclock settings, and turn Heaven loose for an hour. If it makes it one hour, the settings that you used _should_ be stable in non-Ray Traced games. Knock 50MHz off of your GPU core clock and it _should_ be stable for Ray Traced games.
> View attachment 2548681
> 
> 
> If you crash before one hour has elapsed, then you have probably run out of voltage for the currently-set GPU core clock speed. Drop it by 25Mhz and try again.


Update on this:

1. Ran the below-mentioned settings for reference purposes (no MPT) and they ran fine for one hour. Actual clocks stayed around ~2450 MHz.









2. With my older OC settings: _black screen with sound still working_, within seconds.


----------



## 99belle99

Godhand007 said:


> So I am trying to load MPT on my older windows 10 m.2 sata drive but dropdown is empty. Tried reinstalling MPT, reinstalling drivers but nothing seems to be working. What am I (probably obvious) thing that I am missing here?
> 
> 
> View attachment 2548808


Load your bios.


----------



## Godhand007

99belle99 said:


> Load your bios.


Not working.


----------



## CfYz

Godhand007 said:


> probably obvious


Fresh install doesn't have SPPT in win registry. Load BIOS, change settings, write SPPT, reboot...


----------



## Godhand007

CfYz said:


> Fresh install doesn't have SPPT in win registry. Load BIOS, change settings, write SPPT, reboot...


It's not a fresh install; Windows 10 was already present on the M.2. I tried loading the BIOS (after saving it from GPU-Z) but it wouldn't load.


----------



## 99belle99

Godhand007 said:


> It's not a fresh install, windows 10 was already present on m.2. I tried loading bios (after saving from GPU-z) but it wouldn't load.


Load bios and then select card from pull down menu.


----------



## J7SC

FYI, regarding chip degradation, Buildzoid (and also other pros) have some info on that, per YT below. Suffice it to say that if you run higher-than-stock voltage and/or PL, you _theoretically_ move closer to degrading faster, though you can counter that to some extent with better cooling, given the relationship between temps and degradation. 

In my personal experience, which includes a lot of sub-zero at HWBot, I have never degraded a CPU or GPU (though I outright killed one with 'excessive voltage' far beyond what was discussed above)... it would take a long time, a lax approach to cooling, and running loads bigger than stock design for quite a while. My personal limit on my 6900XT XTX (with big w-cooling per below) is 1.218v for the odd bench, but mostly I run stock voltage (up to 1.175v) with MPT PL at 450W, as this is my daily 'work' slugger. So far so good, with no degradation after 10 months of daily use like this.


----------



## KGV

Ok guys.
I have installed a waterblock on my ASRock 6900XT and got a 69°C GPU temp and a 102°C hot spot temp (CPUID Hardware Monitor; 3DMark shows 69°C) when running Time Spy Extreme at 2K resolution with a 2500 MHz GPU clock.
Are those temperatures OK, or not?


----------



## hellm

Godhand007 said:


> It's not a fresh install, windows 10 was already present on m.2. I tried loading bios (after saving from GPU-z) but it wouldn't load.


Version of Windows and MPT? If the dropdown is empty, either admin rights are not available or the integrated winAPI library is not able to access the registry. Please try one of the beta versions.

MPT v1.3.8 final will be released very soon. It is built with the newest W11 SDK and should run on W10 21H2 as well as on other versions. The problem when the library was not integrated was the same: MPT couldn't access the registry after a Windows update.
With the upcoming release, MPT will be able to save registry files. Just use the .reg extension; otherwise a binary file will be saved, as usual.


----------



## Godhand007

99belle99 said:


> Load bios and then select card from pull down menu.


This is what I have tried:
1.







2.







3.








4.







5.







The Write SPPT/Delete SPPT buttons do nothing when clicked.


----------



## Godhand007

hellm said:


> Version of Windows and MPT? If the dropdown is empty, either admin rights are not available or the integrated winAPI library is not able to access the registry. Please try one of the beta versions.
> 
> MPT v1.3.8 final will be released very soon. Newest W11 SDK, this should run on W10 21H2 as well as on other versions. Problem with not integrating the library was the same, MPT can't access the registry after a windows update.
> With the upcoming release MPT will be able to save registry files. Just use the .reg ending, otherwise a binary file will be saved, as usual.

















(MorePowerTool_Setup_1_3_8_b2)

I have tried the 1.3.7 final version as well.


----------



## Godhand007

J7SC said:


> FYI, regarding chip degradation, Buildzoid (and also other pros) have some info on that, per YT below. Suffice it to say that if you run higher-than-stock voltage and/or PL, you _theoretically_ move closer to degrading faster, though you can counter that to some extent with better cooling, given the relationship between temps and degradation.
> 
> In my personal experience, which includes a lot of sub-zero at HWBot, I never degraded a CPU or GPU (though outright killed one w/ 'excessive voltage' far beyond what was discussed above)...it would take a long time and a lax approach to cooling while running big loads > than stock design for quite a while. My personal limit on my 6900XT XTX (w/ big w-cooling per below) is 1.218v for the odd bench, but mostly, I run stock voltage (up to 1.175v) but with MPT PL at 450W as this is my daily 'work' slugger - so far so good, with no degradation after 10 months of daily use like this.
> 
> 
> 
> 
> 
> 
> View attachment 2548953


For me, it has only been around 1-1.5 months of running higher voltage and PL (~400W max). But if my chip or any other components on my card have degraded, then it happened way faster than in your experience.


----------



## J7SC

Godhand007 said:


> For me, It has only been around 1-1.5 months of running higher voltage and PL (~400W max). But if my chip or any other components on my card have been degraded then it is way faster than what your experience has been.


...it depends on what you mean by 'higher voltage', and also your cooling setup. But to switch gears:

I actually had a similar problem last weekend after my Windows 10 Pro and AMD driver update - the same version of MPT I have used for a while all of a sudden did not show the 6900XT bios in the drop-down menu anymore, and loading a saved custom profile in MPT didn't work either. It wasn't until I loaded the original vBios for my card via MPT, 'wrote' to SPPT and rebooted that it all worked again. Then I could mod the vBios to my heart's content after said reboot. I don't know whether the culprit was the Win 10 Pro update or the AMD driver update, but have you tried to revert back to your original unmolested vBios in MPT ?


----------



## CfYz

Godhand007 said:


> MorePowerTool_Setup_1_3_8_b2


1.3.8 is not working (don't know why); use 1.3.7 final.


----------



## llDevilDriverll

Godhand007 said:


> This is what I have tried:
> 1.
> View attachment 2549006
> 2.
> View attachment 2549007
> 3.
> View attachment 2549008
> 
> 4.
> View attachment 2549009
> 5.
> View attachment 2549010
> Write SPPT/Delete SPPT button do not work on click.


Use DDU to reinstall gpu driver


----------



## llDevilDriverll

Has anyone already tested Adrenalin 22.2.2 ?


----------



## KGV

I checked everything. Disassembled and reassembled it again. I used Thermalright pads.


----------



## Godhand007

So, finally got MPT to work after resetting Windows 10. Unfortunately, the same issue occurred on Win 10 (on a different drive), albeit it took more time to manifest itself. I think we can pretty much conclude that *this is a chip degradation issue*, unless people have anything else to add or want me to test. All inputs welcome.
@J7SC @Scorpion667 @CS9K


----------



## Godhand007

CfYz said:


> 1.3.8 not working, don't know why, use 1.3.7 final


Re-setting windows did the trick.


----------



## Godhand007

llDevilDriverll said:


> Use DDU to reinstall gpu driver


Didn't work. Had to re-set windows.


----------



## hellm

Godhand007 said:


> View attachment 2549012
> View attachment 2549013
> (MorePowerTool_Setup_1_3_8_b2)
> 
> I have tried the 1.3.7 final version as well.





CfYz said:


> 1.3.8 not working, don't know why, use 1.3.7 final


Hm. This came up with W11 and all the updates. With the built-in lib from 10.0.19041 it did work for W11, but then not for W10 21H2. I have 21H1, which is build 19043, and there it works with the current .22000 SDK. But with both versions not working, I don't have an answer to that right now. I will ask Igor to keep 1.3.7 final online till we know more.

Update:
We decided to remove 1.3.7 final. All the betas are still online; for those who have a problem, please try one of them. And please report your problems with the version numbers of your Windows installation and of the MPT versions working or not working. Before beta 10 of 1.3.7, there are a few versions that don't include the winAPI library; these didn't work for W11. Maybe this is gone with some Windows update, or we will find another solution.

Nevertheless, everyone can use 1.3.8 final, even if MPT is unable to access the registry. This is due to the new feature of saving registry files. The key number where your SPPT would be saved is "0000" by default if no key is selected in the dropdown menu. So, for this workaround, you have to click save, use the .reg extension, and maybe edit the registry file for the right key number.
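Editing that key number can also be scripted instead of done by hand. A sketch, under two assumptions I'm making here: that MPT's exported paths sit under the standard Windows display-adapter device class key (the {4d36e968-...} GUID), and that `retarget_reg` is an invented helper name, not anything MPT ships.

```python
# Hedged sketch: retarget the adapter subkey in an MPT-saved .reg file.
# Assumptions: the SPPT path sits under the standard Windows display-
# adapter device class key, and the subkey defaults to "0000" when no
# card is selected in the dropdown. retarget_reg is an invented helper.

ADAPTER_CLASS = r"{4d36e968-e325-11ce-bfc1-08002be10318}"  # display adapters

def retarget_reg(reg_text: str, key_number: str) -> str:
    """Replace the default '0000' adapter subkey in every registry
    path of the exported .reg text with the desired key number."""
    return reg_text.replace(ADAPTER_CLASS + "\\0000",
                            ADAPTER_CLASS + "\\" + key_number)

sample = (r"[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class"
          r"\{4d36e968-e325-11ce-bfc1-08002be10318}\0000]")
print(retarget_reg(sample, "0001"))
```

Run the edited .reg, reboot, and the SPPT lands under the subkey you picked. Check which subkey number your card actually uses (Device Manager or regedit) before importing.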


----------



## KGV

OK, I found what I did wrong. I put the thermal pads exactly like they were on the original radiator, and that was a mistake. I removed some of them.
Now I get 57°C GPU and an 83°C hot spot in Time Spy.


----------



## Scorpion667

Godhand007 said:


> So, Finally got MPT to work after re-setting windows 10. Unfortunately, the same issue occurred with win 10 (on different drive), albeit it took more time to manifest itself. I think we can pretty much conclude that *this is a chip degradation issue*, unless people have anything else to add or want me to test. All inputs welcome.
> @J7SC @Scorpion667 @CS9K


Sorry to hear. The only other thing I would try is stock or lower memory speed like 2100 with the GPU OC still in play. If that doesn't help, only way forward appears to be reducing the GPU frequency till the crashes stop.

FYI I asked Sapphire if repasting voids the warranty and they said yes. I think there is legislation in US and Canada where manufacturers are restricted from refusing warranty over "Warranty-Void" stickers. Thankfully I don't need support but you never know with hardware.

Now I remember why I was an EVGA fanboy for the last decade.









[EDIT] Might as well have some fun with it since they're being dicks!








[/EDIT]


----------



## CS9K

Scorpion667 said:


> Sorry to hear. The only other thing I would try is stock or lower memory speed like 2100 with the GPU OC still in play. If that doesn't help, only way forward appears to be reducing the GPU frequency till the crashes stop.
> 
> FYI I asked Sapphire if repasting voids the warranty and they said yes. I think there is legislation in US and Canada where manufacturers are restricted from refusing warranty over "Warranty-Void" stickers. Thankfully I don't need support but you never know with hardware.
> 
> Now I remember why I was an EVGA fanboy for the last decade.
> View attachment 2549246
> 
> 
> [EDIT] Might as well have some fun with it since they're being dicks!
> View attachment 2549249
> 
> [/EDIT]


This is especially disappointing to read, since I've seen two instances now where the AIO Toxic RX 6900 XT's _absolutely cook the stock thermal paste to death_ and _need_ to be re-pasted to work properly even within the labeled power envelope.

In the U.S., no company can deny one's warranty based on a busted "warranty" sticker. 

However, it IS your word against theirs, and _they_ have the final say in the validity of your GPU's warranty. They also _have_ to assume that all users are idiots. This results in a tough position for those of us who modify our hardware properly, but still get stuck with a bad GPU or other hardware that _should_ be replaced via RMA.


----------



## Godhand007

Scorpion667 said:


> Sorry to hear. The only other thing I would try is stock or lower memory speed like 2100 with the GPU OC still in play. If that doesn't help, only way forward appears to be reducing the GPU frequency till the crashes stop.
> 
> FYI I asked Sapphire if repasting voids the warranty and they said yes. I think there is legislation in US and Canada where manufacturers are restricted from refusing warranty over "Warranty-Void" stickers. Thankfully I don't need support but you never know with hardware.
> 
> Now I remember why I was an EVGA fanboy for the last decade.
> View attachment 2549246
> 
> 
> [EDIT] Might as well have some fun with it since they're being dicks!
> View attachment 2549249
> 
> [/EDIT]


Well, it is still working fine with clocks at ~2450 MHz. I can probably add around 100 MHz or more, so it's still a decent/mild OC, but yeah, I am a bit disappointed. About lowering memory clocks: I am not sure that would help, but it might be worth a try. I guess I will wait for a proper BIOS editor or a new GPU purchase before going in for GPU OC again. I think I have hit the limits of this card.

*Warranty conditions should not be taken lightly*. Depending on the country and local laws, even a minor scratch on the warranty sticker can be cause for warranty refusal.


----------



## Scorpion667

CS9K said:


> This is especially disappointing to read, since I've seen two instances now where the AIO Toxic RX 6900 XT's _absolutely cook the stock thermal paste to death_ and _need_ to be re-pasted to work properly even within the labeled power envelope.
> 
> In the U.S., no company can deny one's warranty based on a busted "warranty" sticker.
> 
> However, it IS your word against theirs, and _they_ have the final say in the validity of your GPU's warranty. They also _have_ to assume that all users are idiots. This results in a tough position for those of us who modify our hardware properly, but still get stuck with a bad GPU or other hardware that _should_ be replaced via RMA.


I was one of those folks with the cooked paste lol. See attached pics

I used to troubleshoot and arrange warranty repairs for enterprise desktops, laptops, printers and servers eons ago, so I fully understand the need to have those policies in place. People will abuse anything they can get away with. I think we can agree those policies are not suited for every scenario, however. I'm certified to perform warranty repairs on HP and Lenovo desktops, laptops and servers (yes, Lenovo makes servers!), but apparently Sapphire makes cards out of unobtanium and I couldn't possibly fathom how to work with this advanced alien technology, amirite?

At this stage in my life if the 6900XT hypothetically dies tomorrow I will have a new video card in my PC same day so it's no biggie. But if this happened to me in my teens where a VRM blew and I needed to RMA it would be heartbreaking to be denied warranty. I'd have to use a poverty level GFX card for many months until I could scrape together the cash for a mid tier AGP card.

Nowadays GPUs produce so much heat in such a small envelope that thermal paste seems to pump out faster than in prior generations. Turnaround time for an RMA could be 2-4 weeks in my region, especially considering I have to ship it across the border to 'murica. That's IF there are no supply issues with getting the actual replacement. Assuming one sells their old video card when purchasing a new one, you'd be stuck playing solitaire on integrated graphics for months due to an issue that a capable person can resolve in 30 minutes or less. There should be some provisions or exceptions to allow this, like EVGA does (and they're not hurting for money). Something like:
-cut a ticket, provide evidence of temp issues
-provide some evidence that you are capable of fixing computers or have experience building. IT guys can tell pretty quickly if someone is full of **** about IT stuff.
-customer agrees they are liable if the act of replacing thermal paste damages card
-manufacturer gives you the green light to replace thermal paste thus not affecting warranty

Think of the reputation and optics too. I sure as **** will never purchase a Sapphire product again, and I'll warn off any of my coworkers, friends or family considering buying their products. I can't imagine anybody in this situation would have anything nice to say about them. I just wish EVGA made AMD cards!

Anyway Sapphire responded with:


I emailed this "Althon" place, which is located in my city as per their website, which recommends 800x600 resolution, uses http (only), and is copyright 2003. My cybersecurity guys are gonna have a laugh about this on Monday. I tried calling them, but they politely told me to eff off and email instead, in broken English. That's almost industry standard these days, so I'm not shook. Let's see how they respond to my email; should be entertaining.


----------



## Godhand007

mando10 said:


> just flashed my red devil ult to LC bios in Linux and everything is working fine in windows 11


So, were you able to get the memory speeds up?


----------



## KGV

I rebuilt it all again, and now I have a 53C GPU and a 77C hotspot in Time Spy, at 24C ambient.
Are those normal temperatures with water cooling?


----------



## CS9K

KGV said:


> I rebuilt it all again, and now I have a 53C GPU and a 77C hotspot in Time Spy, at 24C ambient.
> Are those normal temperatures with water cooling?


For a GPU power limit of 300-350W on water cooling, yes, those temperatures are about what you would expect.
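For anyone wanting a quick way to judge "is this normal" themselves: dividing the temperature rise over ambient by board power gives an effective thermal resistance you can compare between loops. A minimal sketch using KGV's numbers, assuming the full 350W limit is actually being dissipated (the ~C/W figures are just what the arithmetic yields, not a spec):

```python
# Back-of-envelope thermal check for a water-cooled GPU.
# Effective thermal resistance = temperature rise over ambient / power.

def effective_resistance(t_sensor_c, t_ambient_c, power_w):
    """Effective thermal resistance (C/W) from a die sensor to ambient."""
    return (t_sensor_c - t_ambient_c) / power_w

# Numbers from the post above: 53C edge, 77C hotspot, 24C ambient, ~350W.
r_edge = effective_resistance(53, 24, 350)
r_hotspot = effective_resistance(77, 24, 350)
print(f"edge: {r_edge:.3f} C/W, hotspot: {r_hotspot:.3f} C/W")
```

A lower C/W at the same power means a better mount/loop, which makes the comparison fairer than comparing raw temperatures taken at different power limits and ambients.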


----------



## PINKTULIPS7

*Hi, I just got the Sapphire NITRO+ AMD Radeon RX 6900 XT SE Gaming OC graphics card with 16GB GDDR6 (HDMI / triple DP), but at anything below 1150 mV, Time Spy and Fire Strike Ultra crash, even with the GPU clock down to 2000 MHz!! It didn't crash in a game (Far Cry Primal) at 1050 mV.*


----------



## LtMatt

What is the best thermal paste to use these days outside of Liquid Metal?


----------



## CS9K

LtMatt said:


> What is the best thermal paste to use these days outside of Liquid Metal?


I'll stand by my vote for Gelid GC-Extreme for bare-die paste-jobs.


----------



## KGV

LtMatt said:


> What is the best thermal paste to use these days outside of Liquid Metal?


MX-5


----------



## LtMatt

Cheers folks. Has anyone tried the SYY stuff?
SYY Thermal Paste, 10 Grams CPU Thermal Paste Thermal Compound Paste Heatsink for IC/Processor/CPU/All Coolers, 15.7W/m.k Carbon Based High Performance, Thermal Interface Material, CPU Paste : Amazon.co.uk: Computers & Accessories


----------



## CS9K

LtMatt said:


> Cheers folks. Has anyone tried the SYY stuff?
> SYY Thermal Paste, 10 Grams CPU Thermal Paste Thermal Compound Paste Heatsink for IC/Processor/CPU/All Coolers, 15.7W/m.k Carbon Based High Performance, Thermal Interface Material, CPU Paste : Amazon.co.uk: Computers & Accessories


A 5-minute google didn't turn up anything remarkable about it, aside from its presence on over 9000 "best paste" lists. 

Density and viscosity look average for the pastes that work best on IHS/Metal heat spreaders, and I sadly can't find anything on its composition, which I'd like to know if it is being considered for use on top of a bare-die.


----------



## LtMatt

CS9K said:


> A 5-minute google didn't turn up anything remarkable about it, aside from its presence on over 9000 "best paste" lists.
> 
> Density and viscosity look average for the pastes that work best on IHS/Metal heat spreaders, and I sadly can't find anything on its composition, which I'd like to know if it is being considered for use on top of a bare-die.


I was attracted by the 15.7W/m.k rating. I have seen a few 3080/3090 folk using it with some success, but it seems to be quite thick and difficult to spread.


----------



## CS9K

LtMatt said:


> I was attracted by the 15.7W/m.k rating. I have seen a few 3080/3090 folk using it with some success, but it seems to be quite thick and difficult to spread.


Much like Gelid GC-Extreme. I made a video on the subject, since people couldn't wrap their heads around it:

"GC-Extreme acts like a non-Newtonian fluid: The more force you use to move it, the more it resists that force... be patient and let the paste spread itself"

But! That density and viscosity is why I recommend GC-Extreme. Once you get a good spread, and a solid mount, _it stays put for a long time_, and resists pump-out quite well, vs thinner pastes which sometimes initially perform better, but will quickly pump out thanks to the ridiculous concentrated-heat of today's spicy GPU dies.

I personally learned this the hard way with Kryonaut: great initial temps, but it pumps out at the speed of light on top of a bare die. Not sure why Kryonaut got so much attention in the first place. I suppose not even _I_ realized at first that Kryonaut isn't even meant for ambient-temperature cooling solutions; that's what Hydronaut and Aeronaut are for. Kryonaut is so thin so that it can still do the thing under LN2.


----------



## LtMatt

CS9K said:


> Much like Gelid GC-Extreme. I made a video on the subject since people couldn't wrap their head around:
> 
> "GC-Extreme acts like a non-Newtonian fluid: The more force you use to move it, the more it resists that force... be patient and let the paste spread itself"
> 
> But! That density and viscosity is why I recommend GC-Extreme. Once you get a good spread, and a solid mount, _it stays put for a long time_, and resists pump-out quite well, vs thinner pastes which sometimes initially perform better, but will quickly pump out thanks to the ridiculous concentrated-heat of today's spicy GPU dies.
> 
> I personally learned this the hard way with Kryonaut: Great initial temps, but pumps out at the speed of light on top of a bare-die. Not sure why Kryonaut got so much attention in the first place. I suppose not even _I_ realized it at first, that Kryonaut isn't even meant for ambient-temperature cooling solutions, that's what Hydronaut and Aeronaut are for; Kryonaut is so thin so that it can still do the thing under LN2.


Nice video! I've ordered some of that stuff so will see how it goes. 

According to Amazon reviews, heating with a hair dryer can help it spread. Looks like it might be thicker than Gelid...


----------



## Blameless

LtMatt said:


> Cheers folks. Has anyone tried the SYY stuff?
> SYY Thermal Paste, 10 Grams CPU Thermal Paste Thermal Compound Paste Heatsink for IC/Processor/CPU/All Coolers, 15.7W/m.k Carbon Based High Performance, Thermal Interface Material, CPU Paste : Amazon.co.uk: Computers & Accessories


Using it on both my 6800 XT (the liquid metal finally started to crystallize, and it was easier to sand it off and replace it than to treat the cooler base again and hope I'd slowed the alloying down enough to not have to dismantle the card after another year) and my 6900 XT currently, among other things.

It's essentially as good as any non-liquid metal TIM I've used, within margin of error.



LtMatt said:


> I was attracted by the 15.7W/m.k rating. I have seen a few 3080/3090 folk using it with some success, but it seems to be quite thick and difficult to spread.


Moderately high viscosity is generally a good thing, and SYY 157 isn't particularly hard to work with. That said, you probably don't want to spread it manually, and all the junk it comes with (spatula and stencils) is largely pointless.

Personally, I tint both surfaces, then put enough on the die (in the case of Navi21 two lines down the center, perpendicular to each other, with small dabs just inside each corner) and allow mounting pressure to do the rest. Coverage is complete, final bondline is thin, and while there is a little excess that gets pushed out, it's certainly not going to harm thermal performance and I habitually seal any SMDs, not that I think it's conductive enough to be an issue in this use.



CS9K said:


> Density and viscosity look average for the pastes that work best on IHS/Metal heat spreaders, and I sadly can't find anything on its composition, which I'd like to know if it is being considered for use on top of a bare-die.


It's primarily carbon filler in silicone oil, like quite a few other modern TIMs, such as Thermalright's TFX, Cooler Master's MasterGel, or other grey pastes that look black when wiped up with solvent and have low double-digit thermal conductivity ratings. It doesn't appear to be abrasive or particularly conductive, but I'd keep it off exposed contacts and surface-mount components nonetheless.


----------



## CS9K

Blameless said:


> It's primarily carbon filler in silicone oil, like quite a few other modern TIMs


_There's_ what I was looking for. In my thermal-paste journey over the years, oil-based emulsions have been more prone to pumping out in bare-die applications than non-oil-based pastes.

I'm curious how your temperatures have/will change over time vs my own and others.


----------



## Blameless

CS9K said:


> _There's_ what I was looking for. So far in my thermal paste journey over the years, oil-based emulsions are more-prone to pumping out in bare-die applications, vs non-oil-based pastes, in my experience.
> 
> I'm curious how your temperatures have/will change over time vs my own and others.


What pastes _don't_ use some kind of oil as a binder?


----------



## CS9K

Blameless said:


> What pastes _don't_ use some kind of oil as a binder?


As best as I can tell:

GC-Extreme
Thermal Grizzly Hydronaut
EK Ectotherm (I think)


----------



## J7SC

CS9K said:


> As best as I can tell:
> 
> GC-Extreme
> Thermal Grizzly Hydronaut
> EK Ectotherm (I think)


I used TG Kryonaut on several GPUs before (incl. 3090 etc) but switched to GC-Extreme for the dies, including the 6900XT and the 3090 Strix. Temp-wise, there's little difference on my builds, but I like the thicker thermal paste for bigger dies (along with thermal putty for VRAM etc). So far, absolutely no deterioration issues in terms of temps over six months or so of daily use.


----------



## Blameless

CS9K said:


> As best as I can tell:
> 
> GC-Extreme
> Thermal Grizzly Hydronaut
> EK Ectotherm (I think)


I'm reasonably confident the EK and Gelid pastes are using silicone oil binders. Thermal Grizzly Hydronaut is specifically listed as non-silicone, but that almost certainly means some other form of synthetic oil binder.

Honestly, I can't even imagine how it would be possible to manufacture a non-curing, filled thermal interface material that doesn't use some kind of binder that would be described as an oil.


----------



## CS9K

Blameless said:


> I'm reasonably confident the EK and Gelid pastes are using silicone oil binders. Thermal Grizzly Hydronaut is specifically listed as non-silicone, but that almost certainly means some other form of synthetic oil binder.
> 
> Honestly, I can't even imagine how it would be possible to manufacture a non-curing, filled thermal interface material that doesn't use some kind of binder that would be described as an oil.


Perhaps my explanation of "non-oil-based" is incorrect, and it should be non-silicone-oil-based instead. It's tough to find any kind of white-paper information on composition.

I am a dummy and forgot about Safety Data Sheets. It looks like they're all made of some type of silicone compound or another; some SDSs are more detailed than others.


----------



## llDevilDriverll

LtMatt said:


> What is the best thermal paste to use these days outside of Liquid Metal?


I'm using Kingpin KPX thermal paste. Very easy to work with and a good performance ratio. I ordered it (30g) from their official website to avoid fakes. I've been using it for a year now, and the paste in the tube remains in excellent condition.


----------



## Huseyinbaykal

Hey all,
I need some advice. I can't decide between the Sapphire Toxic Extreme Edition and the ASUS Strix LC Top OC (the XTXH one, I think).
Which one would you guys recommend, and why?

Thanks for the help!


----------



## KGV

Huseyinbaykal said:


> Hey all,
> I need some advice. I can't decide between the Sapphire Toxic Extreme Edition and the ASUS Strix LC Top OC (the XTXH one, I think).
> Which one would you guys recommend, and why?
> 
> Thanks for the help!


When I was choosing among all the 6900 XTs, I chose the ASRock Phantom Gaming. It has the best air cooling and a very high GPU boost. With water cooling it easily runs at 2560 MHz (I didn't change the BIOS settings, but I think if I did, it would run at higher clocks without problems).


----------



## ptt1982

KGV said:


> I rebuilt it all again, and now I have a 53C GPU and a 77C hotspot in Time Spy, at 24C ambient.
> Are those normal temperatures with water cooling?


I've got quite similar temps on my 6900 XT under a custom loop, using 400W via MPT with the core at 2650 MHz max and memory at 2100 MHz. I think I could maybe shave another 5C off the hotspot delta, but repasting is such a frikkin lottery that I gave up after the second try. My card is a Red Devil (non-Ultimate), so it shuts down at 118C and throttles at around 110C. The hotspot peaks at 87C during summer and 77C during winter. So there's still a good 23C until throttling, which is pretty much what I'm comfortable with.

Although right now the hotspot peaks at maybe 45C and the core is at around 34C, because I've got an extreme undervolt on it: 20% less performance than max OC, and maybe 10% less than stock, but it uses less than half the power, maxing out at 170W and typically drawing around 120-150W during gameplay. Don't ask me why, but I get more kicks out of undervolting than OCing nowadays! (sorry everyone!)
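The headroom math in posts like this is simple enough to sanity-check yourself. A quick sketch using the figures reported above (the throttle/shutdown points are as reported for this particular Red Devil, not official specs):

```python
# Thermal headroom and undervolt power arithmetic, using the figures
# reported in the post above (card-specific, not official AMD limits).
THROTTLE_C = 110   # reported throttle point
SHUTDOWN_C = 118   # reported shutdown point

summer_hotspot_c = 87
winter_hotspot_c = 77
uv_power_w = 170   # max draw with the extreme undervolt
oc_power_w = 400   # MPT power limit when overclocked

print(f"summer headroom to throttle: {THROTTLE_C - summer_hotspot_c} C")
print(f"winter headroom to throttle: {THROTTLE_C - winter_hotspot_c} C")
print(f"undervolt draw vs OC limit:  {uv_power_w / oc_power_w:.0%}")
```

Which lines up with the "good 23C until throttling" and "less than half the power" claims above.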


----------



## llDevilDriverll

KGV said:


> I rebuilt it all again, and now I have a 53C GPU and a 77C hotspot in Time Spy, at 24C ambient.
> Are those normal temperatures with water cooling?


My temps, if you're interested.


----------



## KGV

As far as I can see, it's all almost the same.


----------



## ZealotKi11er

This video feels wrong, coming from an owner of both a 6900 XT and a 3080 Ti TUF.
His reference 6900 XT is running at 22xx MHz, while the 3080 12GB (an OC model) is already maxed out, with at most 5% OC left.

50 Game Benchmark: GeForce RTX 3080 12GB vs. Radeon RX 6900 XT - YouTube


----------



## KGV

ZealotKi11er said:


> This video feels wrong, coming from an owner of both a 6900 XT and a 3080 Ti TUF.
> His reference 6900 XT is running at 22xx MHz, while the 3080 12GB (an OC model) is already maxed out, with at most 5% OC left.
> 
> 50 Game Benchmark: GeForce RTX 3080 12GB vs. Radeon RX 6900 XT - YouTube


Even like this, the 6900 XT beats the 3080 in many tests. LMAO!


----------



## PINKTULIPS7

Weird thing happened: after running the Fire Strike Extreme benchmark at 1150 mV / 2550 MHz, upon restart there was suddenly no display signal at all!!! I reset the BIOS and unplugged/replugged the DP cable, which didn't solve the issue; then after 15 minutes the signal came back. My computer didn't crash during the benchmark...


----------



## 99belle99

Anyone try out the latest driver?


----------



## jonRock1992

ZealotKi11er said:


> This video feels wrong, coming from an owner of both a 6900 XT and a 3080 Ti TUF.
> His reference 6900 XT is running at 22xx MHz, while the 3080 12GB (an OC model) is already maxed out, with at most 5% OC left.
> 
> 50 Game Benchmark: GeForce RTX 3080 12GB vs. Radeon RX 6900 XT - YouTube


Never been a fan of HUB game benchmarks. I don't like his approach to benching AMD GPUs.


----------



## EastCoast

ZealotKi11er said:


> This video feels wrong, coming from an owner of both a 6900 XT and a 3080 Ti TUF.
> His reference 6900 XT is running at 22xx MHz, while the 3080 12GB (an OC model) is already maxed out, with at most 5% OC left.
> 
> 50 Game Benchmark: GeForce RTX 3080 12GB vs. Radeon RX 6900 XT - YouTube


Haven't watched the video yet, but he did state that he would hamstring Radeon by disabling SAM support because Nvidia didn't benefit from it.








[HUB] HUB Disables SAM Even Though Radeon GPUs See Large... (www.overclock.net)





Also, in the comments he states he used 22.2.1 and not 22.2.2, which is why I don't watch his videos.


----------



## SamSqautch84

jonRock1992 said:


> Never been a fan of HUB game benchmarks. I don't like his approach at benching AMD GPU's.


Wow, I thought I was the only one who thinks the way AMD's cards get benched makes them look worse than they are. AMD cards treat voltage and clock speeds differently; Nvidia cards have barely any OC room. A reference AMD card has way more overhead that is wasted if you don't up the sliders in Radeon Software. In the HUB video for MW they got like 170 fps with the AMD. My 6900 runs like 300 fps at 1080p on ultra, blowing the Nvidia cards away. Nobody uses RT, and I don't count DLSS; it's fake fps in my book. Mention any of this over on Reddeleteit and you get downvoted into oblivion. Nvidia cards aren't that impressive... at all. AMD's cards are ducking fast. I once had an argument with a kid on COD because he didn't believe me that my 6900 got 280 fps at 1440p, blowing the doors off his 10900k and 3090. His excuse: "well, all the benchmarks show the 3090 beating the 6900." Blame the benchmarkers; they don't do the AMD line justice.


----------



## cfranko

KGV said:


> I rebuilt it all again, and now I have a 53C GPU and a 77C hotspot in Time Spy, at 24C ambient.
> Are those normal temperatures with water cooling?


I have an ASRock Phantom Gaming 6900 XT with a Bykski waterblock and get a 65C hotspot at 300 watts, with liquid metal. I don't know your power limit, but that seems a bit hotter than it should be.


----------



## KGV

cfranko said:


> I have an ASRock Phantom Gaming 6900 XT with a Bykski waterblock and get a 65C hotspot at 300 watts, with liquid metal. I don't know your power limit, but that seems a bit hotter than it should be.


350W. I don't use liquid metal because it would react with the anodized copper of the waterblock.


----------



## Scorpion667

SamSqautch84 said:


> Wow, I thought I was the only one who thinks the way AMD's cards get benched makes them look worse than they are. AMD cards treat voltage and clock speeds differently; Nvidia cards have barely any OC room. A reference AMD card has way more overhead that is wasted if you don't up the sliders in Radeon Software. In the HUB video for MW they got like 170 fps with the AMD. My 6900 runs like 300 fps at 1080p on ultra, blowing the Nvidia cards away. Nobody uses RT, and I don't count DLSS; it's fake fps in my book. Mention any of this over on Reddeleteit and you get downvoted into oblivion. Nvidia cards aren't that impressive... at all. AMD's cards are ducking fast. I once had an argument with a kid on COD because he didn't believe me that my 6900 got 280 fps at 1440p, blowing the doors off his 10900k and 3090. His excuse: "well, all the benchmarks show the 3090 beating the 6900." Blame the benchmarkers; they don't do the AMD line justice.


It does seem these cards are slept on in COD. Fr33thy did a YT video a while back showing a 6900 XT beating his 3090 in MW by 150 fps at 1080p. Below are my results with an older CPU (9900KS) in Vanguard:

21-02-2022, 16:19:26 Vanguard.exe benchmark completed, 5729 frames rendered in 13.500 s
Average framerate : 424.3 FPS
Minimum framerate : 401.6 FPS
Maximum framerate : 447.0 FPS
1% low framerate : 338.5 FPS
0.1% low framerate : 309.6 FPS

21-02-2022, 16:20:01 Vanguard.exe benchmark completed, 5708 frames rendered in 13.547 s
Average framerate : 421.3 FPS
Minimum framerate : 401.0 FPS
Maximum framerate : 450.3 FPS
1% low framerate : 337.2 FPS
0.1% low framerate : 317.0 FPS

21-02-2022, 16:20:33 Vanguard.exe benchmark completed, 5782 frames rendered in 13.625 s
Average framerate : 424.3 FPS
Minimum framerate : 396.5 FPS
Maximum framerate : 451.9 FPS
1% low framerate : 337.0 FPS
0.1% low framerate : 304.5 FPS


----------



## alceryes

Scorpion667 said:


> View attachment 2549249


You are 100% correct. In the USA, these stickers can't be used as evidence to deny a warranty claim. Manufacturers still use them as a deterrent, though.


----------



## Godhand007

Has anyone had proper success swapping the 6900 XT LC's BIOS onto an XTXH chip? I am thinking of getting a *RX 6900 XT Toxic Extreme Edition*, but I only want to invest in it if I can unlock an 18 Gbps memory overclock.


----------



## ZealotKi11er

Godhand007 said:


> Has anyone had proper success swapping the 6900 XT LC's BIOS onto an XTXH chip? I am thinking of getting a *RX 6900 XT Toxic Extreme Edition*, but I only want to invest in it if I can unlock 18 Gbps memory.


You can't unlock 18 Gbps... some cards can OC to 18, but they are not 18 Gbps modules.


----------



## Godhand007

ZealotKi11er said:


> You cant unlock 18Gbps ... Some card can OC to 18 but are not 18Gbps modules.


Yeah, I meant OC to 18 Gbps. So do we have some examples of this working out successfully for the *Toxic* or any other XTXH chips using the BIOS from the LC?


----------



## jonRock1992

Godhand007 said:


> Yeah, I meant OC to 18Gbps. So do we have some examples of this working out successfully for *Toxic* or any other XTXH chips by using the bios from LC?


There's been quite a few working examples in this thread. All of the top overclockers use it. It didn't work for me though because of my motherboard.


----------



## J7SC

jonRock1992 said:


> There's been quite a few working examples in this thread. All of the top overclockers use it. It didn't work for me though because of my motherboard.


Subject to a cooperating mobo and potentially differing I/O, it makes sense to flash the faster-VRAM BIOS as long as you can (a) lower the VRAM speed to the ideal efficiency level once in Windows (easy), and (b) deal with boot-up before your VRAM speed control kicks in (not so easy; possibly requires something external, like an ElmorLabs EVC2SX). It comes down to how forgiving your native VRAM is.


----------



## Godhand007

jonRock1992 said:


> There's been quite a few working examples in this thread. All of the top overclockers use it. It didn't work for me though because of my motherboard.


I already tried searching the thread for it before posting my question, but the info is spread too thin, without conclusive statements. I admit that I haven't gone through each and every mention of _LC_ and BIOS _flash_, though.


----------



## Godhand007

J7SC said:


> Subject to a cooperating mobo and potentially differing I/O, it makes sense to flash the faster-VRAM BIOS as long as you can (a) lower the VRAM speed to the ideal efficiency level once in Windows (easy), and (b) deal with boot-up before your VRAM speed control kicks in (not so easy; possibly requires something external, like an ElmorLabs EVC2SX). It comes down to how forgiving your native VRAM is.


Can you explain those two points, or point me to the discussion on them? My mobo is the MSI PRO Z690 DDR4.


----------



## J7SC

Godhand007 said:


> Can you explain those two points or point me to the discussion on it ? My mobo is MSI PRO Z690 DDR4.


I don't know anything about the mobo you listed, but in more general terms, flashing a 'foreign' vbios onto a card whose PCB has a different I/O layout can (but does not have to) cause other system problems, sometimes around USB-C.

On my second point from the earlier post: I used to do a lot of HWBot and would load extreme XOC vbios onto select cards. They could have voltage requirements that could be addressed with software once in Windows, but cold-booting was a different animal, often resulting in black screens. Fortunately, I have EVBots (a bit like the EVC2SX, but only for top-end Nvidia cards) and would boost voltages manually right after the GPU was recognized, during bootup, until Windows had fully loaded.


----------



## Godhand007

J7SC said:


> I don't know anything about the mobo you listed, but in more general terms, flashing a 'foreign' vbios onto a card whose PCB has a different I/O layout can (but does not have to) cause other system problems, sometimes around USB-C.


Got it. I actually already asked one of the members who had flashed LC bios on their card but they haven't responded.


> On my second point from the earlier post: I used to do a lot of HWBot and would load extreme XOC vbios onto select cards. They could have voltage requirements that could be addressed with software once in Windows, but cold-booting was a different animal, often resulting in black screens. Fortunately, I have EVBots (a bit like the EVC2SX, but only for top-end Nvidia cards) and would boost voltages manually right after the GPU was recognized, during bootup, until Windows had fully loaded.


Hmm, interesting.


----------



## 6u4rdi4n

I've been reading a lot of posts almost bashing Thermal Grizzly, in particular the Kryonaut. 

So I thought I would just chime in with my own experience using it. My 6900 XT has been running with a water block and Kryonaut for 11 months now, and my temperatures are still the same as 11 months ago. I've set a 400W limit with MPT and I play a lot of Battlefield V on my 2560x1440 240Hz monitor, so it gets to stretch its legs.


----------



## EastCoast

6u4rdi4n said:


> I've been reading a lot of posts almost bashing Thermal Grizzly, in particular the Kryonaut.
> 
> So I thought I would just chime in with my own experience using it. My 6900 XT has been running with a water block and Kryonaut for 11 months now, and my temperatures are still the same as 11 months ago. I've set a 400W limit with MPT and I play a lot of Battlefield V on my 2560x1440 240Hz monitor, so it gets to stretch its legs.


Who's been bashing Kryonaut? That's some good stuff. For me, I have to put on a thicker layer to prevent pump-out, but other than that it's golden.


----------



## 6u4rdi4n

EastCoast said:


> Who's been bashing Kryonaut? That's some good stuff. For me, I have to put on a thicker layer to prevent pump-out, but other than that it's golden.


I won't call out any names, because that's irrelevant. 

I didn't write bashing, I wrote "almost bashing". I've read a lot of posts in this and other threads about Kryonaut that have been on the negative side. It's mostly been about pumpout and that it won't hold up for a long time with these high wattage chips. 

Like I wrote in my last post, I'm just chiming in with my own experience using Kryonaut, and I think a good 11 months is a long enough time to see if it holds up or not.


----------



## J7SC

...using words like 'bashing' isn't perhaps the smartest way to go about things. For the record, I have over 25 GPUs here (many used for business also), and I am also not particularly partial to just one brand of paste, per pic below. That said, I initially had TG Kryonaut on both my 6900XT and 3090 Strix and changed over to Gelid - so a direct comparison. 

It isn't so much about a particular brand loyalty, but about biggish dies and whether there's some unevenness on the die and / or cooler plate, and longer-term pump-out. FYI, I still use Kryonaut on various other applications, such as select CPUs.


----------



## 6u4rdi4n

J7SC said:


> ...using words like 'bashing' isn't perhaps the smartest way to go about things. For the record, I have over 25 GPUs here (many used for business also), and I am also not particularly partial to just one brand of paste, per pic below. That said, I initially had TG Kryonaut on both my 6900XT and 3090 Strix and changed over to Gelid - so a direct comparison.
> 
> It isn't so much about a particular brand loyalty, but about biggish dies and whether there's some unevenness on the die and / or cooler plate, and longer-term pump-out. FYI, I still use Kryonaut on various other applications, such as select CPUs.
> View attachment 2550158


So sorry I chose that word then. Yikes... I guess the post made no sense at all then.


----------



## Blameless

J7SC said:


> It isn't so much about a particular brand loyalty, but about biggish dies and whether there's some unevenness on the die and / or cooler plate, and longer-term pump-out. FYI, I still use Kryonaut on various other applications, such as select CPUs.
> View attachment 2550158


On a semi-related tangent, Noctua NT-H2 is the only higher-end paste I've tried in recent memory that I've really had an issue with.

It's not hard to apply, performs fine, and seems like it will last, but it's a real pain to clean up. In my experience it tends to flake apart when wiped up then I either have to vacuum up the flakes or put some solvent on a swab and dab them up. I'll probably finish off the 10g tube I have (same one that's in your picture), but I don't think I'll buy more of it.


----------



## D-EJ915

I stopped using Kryonaut because it was scratching my block cold plates, but I never had issues with the temps from it. I got an Alphacool block for my Devil 6900; I should put it on at some point lol.


----------



## Godhand007

Hi guys,
I need some urgent advice. The Toxic AMD Radeon™ RX 6900 XT Extreme Edition is available at a decent price for me. Is it worth it over a reference RX 6900 XT? I'm talking about everything from cooling, performance, and dual BIOS to the power limit. It would cost me about ~$400 to upgrade to this. I am an OC enthusiast and I like to push my GPUs to their maximum limits.


----------



## alceryes

Godhand007 said:


> Hi guys,
> I need some urgent advice. The Toxic AMD Radeon™ RX 6900 XT Extreme Edition is available at a decent price for me. Is it worth it over a reference RX 6900 XT? I'm talking about everything from cooling, performance, and dual BIOS to the power limit. It would cost me about ~$400 to upgrade to this. I am an OC enthusiast and I like to push my GPUs to their maximum limits.


It's an XTXH version. Assuming you have the correct aux power connectors and space to hang the rad, the real question is whether it will provide you with $400+ worth of happiness. That's a question only you can answer. 

Overclocking is always a gamble. It could be that the core you get isn't really that good (comparatively speaking) and that the reference card you currently have actually overclocks better. The chance of this is probably low but it's not zero.


----------



## Godhand007

alceryes said:


> It's an XTXH version. Assuming you have the correct aux power connectors and space to hang the rad, the real question is whether it will provide you with $400+ worth of happiness. That's a question only you can answer.
> 
> Overclocking is always a gamble. It could be that the core you get isn't really that good (comparatively speaking) and that the reference card you currently have actually overclocks better. The chance of this is probably low but it's not zero.


What do you mean by "correct aux power"? I have a Corsair RM1000i, which should be fine for this card. This card is guaranteed to do 2700 MHz (Toxic boost), and my reference card taps out at about 2500-2550 MHz, so I am hoping for at least a 6-7% performance increase even in the worst case.

About the happiness point: if it overclocks to 2850 MHz and I can flash the LC version's BIOS onto it, I would consider the $400 well spent.
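For what it's worth, here's my back-of-the-envelope math, assuming performance scales at best linearly with core clock (in practice it's usually less, since memory bandwidth and power limits get in the way; the helper name is just my own):

```python
# Best-case uplift from a core clock bump. Real-world gains are smaller
# because games are rarely purely core-clock-bound.
def clock_uplift(old_mhz: float, new_mhz: float) -> float:
    """Return the upper-bound percentage gain from a core clock increase."""
    return (new_mhz / old_mhz - 1) * 100

print(f"{clock_uplift(2550, 2700):.1f}%")  # → 5.9% (worst case vs. my reference)
print(f"{clock_uplift(2500, 2850):.1f}%")  # → 14.0% (if it really does 2850 MHz)
```

So even the guaranteed Toxic Boost clock should land in my 6-7% ballpark versus the reference card.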


----------



## 6u4rdi4n

Only you can make that decision. I wouldn't bother at this point. 6950 XT rumours, next gen rumours. I feel like right now is a bad time to spend a significant amount of money on an insignificant improvement. 

But that's just me, and as I wrote: Only you can make that decision.


----------



## alceryes

Godhand007 said:


> What do you mean by "correct aux power"? I have a Corsair RM1000i, which should be fine for this card. This card is guaranteed to do 2700 MHz (Toxic Boost) and my reference taps out at about 2500-2550 MHz, so I am hoping for at least a 6-7% performance increase even in the worst case.
> 
> About the happiness point: if it overclocks to 2850 MHz and I can flash the LC version's BIOS onto it, I would consider the $400 well spent.


Just the 2x8-pin and 1x6-pin aux power requirement. The specs page does have a carefully worded 'disclaimer' regarding max boost speeds under the _engine clock_ section. Just be careful if you do get it. You may have already caused some degradation on your current card by pushing it at too high a voltage and too high a frequency.


----------



## Godhand007

alceryes said:


> Just having the 2x8-pin and 1x6-pin aux power requirement.  The specs description does have a carefully worded 'disclaimer' regarding max boost speeds under the _engine clock_ section. Just be careful if you do get it. You may have already experienced degradation on your current card. This is probably because you pushed it with too high voltage at too high frequency.


Still not sure about the degradation thing, though, because in my experience (and from what others have said as well) it's an all-or-nothing kind of situation. It is possible, though. It could also be degradation of one of the other GPU components, or something unknown at play.


----------



## alceryes

Decided to edge my VRAM up a little more and was rewarded with a 22K Graphics Score in Time Spy! Woohoo!


----------



## Th3Fly1ngCow

Got my hands on a 6900 XT Sapphire SE about 6 months ago. Today it started hitting over 100°C hotspot almost immediately; 3DMark shut me down twice at 100% fan speed. Could the thermal paste have failed already, or was it a bad mounting job from the factory? I've always had insane differences between edge temp and junction temp, but recently it's been 40°C or more. Is it okay to repaste with MX-5? I've seen some on here say it's too thin. Also, would it be wise to washer mod with some thin nylon washers? Has anyone had success?


----------



## WR-HW95

Th3Fly1ngCow said:


> Got my hands on a 6900 XT Sapphire SE about 6 months ago. Today it started hitting over 100°C hotspot almost immediately; 3DMark shut me down twice at 100% fan speed. Could the thermal paste have failed already, or was it a bad mounting job from the factory? I've always had insane differences between edge temp and junction temp, but recently it's been 40°C or more. Is it okay to repaste with MX-5? I've seen some on here say it's too thin. Also, would it be wise to washer mod with some thin nylon washers? Has anyone had success?


I wouldn't waste my time with MX-5 if it's anything like MX-4.
MX-4 pumped out of my 1080 Ti in ~2.5 months.
For the 6900 XT I bought Phobya HeGrease Extreme, and it's been working well (on the 1080 Ti too) for the last 10 months.


----------



## Godhand007

Th3Fly1ngCow said:


> Got my hands on a 6900 XT Sapphire SE about 6 months ago. Today it started hitting over 100°C hotspot almost immediately; 3DMark shut me down twice at 100% fan speed. Could the thermal paste have failed already, or was it a bad mounting job from the factory? I've always had insane differences between edge temp and junction temp, but recently it's been 40°C or more. Is it okay to repaste with MX-5? I've seen some on here say it's too thin. Also, would it be wise to washer mod with some thin nylon washers? Has anyone had success?


Before pulling out the big guns, let's answer a few questions:

1. What do you mean by '3DMark shut me down'? System shutdown, or did 3DMark error out?
2. Is the card OCed? If everything is at default, including the fan speeds, then for reference cards (assuming that's what you mean by SE) 100°C is not completely unreasonable.
3. So your edge temp is ~60°C and hotspot is 100°C? That is certainly not expected. Play around with the fan profile, and also check that all the fans are running properly.


----------



## Th3Fly1ngCow

Godhand007 said:


> Before pulling out the big guns, let's answer a few questions:
> 
> 1. What do you mean by '3DMark shut me down'? System shutdown, or did 3DMark error out?
> 2. Is the card OCed? If everything is at default, including the fan speeds, then for reference cards (assuming that's what you mean by SE) 100°C is not completely unreasonable.
> 3. So your edge temp is ~60°C and hotspot is 100°C? That is certainly not expected. Play around with the fan profile, and also check that all the fans are running properly.


1. 3DMark has completely killed my power twice; entire PC shutdown.
2. It's a Sapphire Nitro+ SE, so it's OC'd out of the box. I have a custom profile with 100% fan speed for hard gaming. I used Sapphire's TriXX software; the fans are 100% healthy, all spinning at a max of 3200 RPM.
3. Yes. Basically anything from Dying Light to Cyberpunk and Doom Eternal (no RT, 1440p 165Hz monitor, 99% usage) shoots me to almost 100°C junction, then over 110°C, in under 5-10 minutes of playing.


----------



## Godhand007

Th3Fly1ngCow said:


> 1. 3DMark has completely killed my power twice; entire PC shutdown.
> 2. It's a Sapphire Nitro+ SE, so it's OC'd out of the box. I have a custom profile with 100% fan speed for hard gaming. I used Sapphire's TriXX software; the fans are 100% healthy, all spinning at a max of 3200 RPM.
> 3. Yes. Basically anything from Dying Light to Cyberpunk and Doom Eternal (no RT, 1440p 165Hz monitor, 99% usage) shoots me to almost 100°C junction, then over 110°C, in under 5-10 minutes of playing.


Well, assuming it's not a PSU issue and you are getting 110°C at 100% fan speed, it does seem like cooling is being hampered somehow. FYI, when I said check the fans, I meant check whether all of them are physically spinning. Software might still report proper RPMs even if one of the fans is down.

Have you used MPT and forgotten about it? The old power limits might still be in play. Try a driver reset to be sure before you start on a new paste job.


----------



## Th3Fly1ngCow

Godhand007 said:


> Well, assuming it's not a PSU issue and you are getting 110°C at 100% fan speed, it does seem like cooling is being hampered somehow. FYI, when I said check the fans, I meant check whether all of them are physically spinning. Software might still report proper RPMs even if one of the fans is down.
> 
> Have you used MPT and forgotten about it? The old power limits might still be in play. Try a driver reset to be sure before you start on a new paste job.


The PSU is a brand new EVGA 850. I never touched anything outside of Adrenalin and never used MPT. I also rolled back drivers twice, even trying the WHQL drivers. This card was open box from Micro Center, but it's odd: it never cracked 100°C over the last ~8 months, even in extreme games like Metro Exodus, so I'm at a loss. I do have it vertically mounted, but it's in an open-air case, so airflow is top tier.


----------



## Godhand007

Th3Fly1ngCow said:


> The PSU is a brand new EVGA 850. I never touched anything outside of Adrenalin and never used MPT. I also rolled back drivers twice, even trying the WHQL drivers. This card was open box from Micro Center, but it's odd: it never cracked 100°C over the last ~8 months, even in extreme games like Metro Exodus, so I'm at a loss. I do have it vertically mounted, but it's in an open-air case, so airflow is top tier.


If the fans are working properly (physically checked ) and your edge temps are at about ~60 C, then most probably something has gone wrong with the paste.


----------



## Th3Fly1ngCow

Godhand007 said:


> If the fans are working properly (physically checked ) and your edge temps are at about ~60 C, then most probably something has gone wrong with the paste.


I've never repasted a newer GPU; any advice? Will MX-5 hold me over? Do you have recommendations, and what's a good amount to spread over the die? Also, thanks for your help and replies.


----------



## Godhand007

Th3Fly1ngCow said:


> I've never repasted a newer GPU; any advice? Will MX-5 hold me over? Do you have recommendations, and what's a good amount to spread over the die? Also, thanks for your help and replies.


I am not the right guy for giving advice on thermal pastes. I think somebody already replied with their experience with MX-4.


----------



## Th3Fly1ngCow

Godhand007 said:


> I am not the right guy for giving advice on thermal pastes. I think somebody already replied with their experience with MX-4.


Oops, yeah, you're right. I'll try it; if it lasts me a week, it'll give me time to find something better. I just want to confirm the card is not defective or something.


----------



## alceryes

Th3Fly1ngCow said:


> ...but it's odd: it never cracked 100°C over the last ~8 months, even in extreme games like Metro Exodus, so I'm at a loss. I do have it vertically mounted, but it's in an open-air case, so airflow is top tier.


When did it go from the _above_ to the _below_? Did anything change that could have been the cause?



Th3Fly1ngCow said:


> 3. Yes. Basically anything from Dying Light to Cyberpunk and Doom Eternal (no RT, 1440p 165Hz monitor, 99% usage) shoots me to almost 100°C junction, then over 110°C, in under 5-10 minutes of playing.


Sounds like something definitely happened. Make sure the '_zero RPM_' fan setting is off in Radeon software.
Your computer completely powering off or restarting could be a power issue. What's your exact model of PSU?


----------



## Th3Fly1ngCow

alceryes said:


> When did it go from the _above_ to the _below_? Did anything change that could have been the cause?
> 
> 
> 
> Sounds like something definitely happened. Make sure the '_zero RPM_' fan setting is off in Radeon software.
> Your computer completely powering off or restarting could be a power issue. What's your exact model of PSU?


It's an EVGA SuperNOVA 850 W. I have a custom profile with max fan speed for gaming, and zero RPM is disabled. I vertically mounted my GPU; that's about all that changed.


----------



## Th3Fly1ngCow

Thanks, guys, for all the help. I repasted, then ran a few games and 4 runs of Fire Strike and never broke 95°C with my custom profile. Thanks for all the input.


----------



## Godhand007

Th3Fly1ngCow said:


> Thanks, guys, for all the help. I repasted, then ran a few games and 4 runs of Fire Strike and never broke 95°C with my custom profile. Thanks for all the input.


Maybe now you can remove the "6900xt sapphire SE possible a pile of trash " from your signature.


----------



## alceryes

Godhand007 said:


> Maybe now you can remove the "6900xt sapphire SE possible a pile of trash " from your signature.


The fact remains that he had to repaste in the first place.


----------



## CS9K

alceryes said:


> The fact remains that he had to repaste in the first place.


This. Not the first Sapphire, nor the first RX 6900 XT, _by far_, that's needed a re-paste out of the box. 

The bean-counters cut costs in the worst place: one where you risk voiding the warranty just to get the GPU to function properly at default settings.


----------



## alceryes

CS9K said:


> This. Not the first Sapphire, nor the first RX 6900 XT, _by far_, that's needed a re-paste out of the box.
> 
> The bean-counters cut costs in the worst place, one that risks invalidating one's warranty to get the GPU to function properly at default settings


Yup.
Fortunately, the core of my reference (PC Partner Group/Zotac) card appears to have good contact with the cooler plate and have good pads on the front side. Once I put the additional thermal pads on the backside and set the GPU fan to always on, my hottest-of-all-sensors temp hasn't gone above 93ºC.


----------



## gtz

Th3Fly1ngCow said:


> Thanks, guys, for all the help. I repasted, then ran a few games and 4 runs of Fire Strike and never broke 95°C with my custom profile. Thanks for all the input.


I have the non-SE version, same cooler minus the RGB fans. That cooler is horrible; I repasted and was still hitting 95-100°C with my sample. Maybe adding washers would have dropped it further, but I added a block and that solved my issues.

If air cooling is the only option, the XFX Merc is probably the best air cooler.


----------



## Scorpion667

Th3Fly1ngCow said:


> Oops, yeah, you're right. I'll try it; if it lasts me a week, it'll give me time to find something better. I just want to confirm the card is not defective or something.


Careful with Sapphire; they claim the warranty is void on a user repaste, per the official response below:

If you care about warranty perhaps check if there's a Sapphire RMA center in your city in which case you can probably get them to resolve this quickly. I was very surprised to learn there is one in my city but it's some third party company with a very questionable website. 

That said, my Sapphire Toxic 6900XT started to also get thermal shutdowns in TimeSpy after 4-5 months of gaming. The stock paste was cooked and fully pumped out (pics). I repasted with GC Extreme and washer mod as described here and that resolved the issue. Key observations are that the card is picky with thermal paste as well as application.

I had great success with GC Extreme but poor results with Kryonaut. As for application, I went with the "star" method I copied from this guy, which worked slightly better (-2°C hotspot in Time Spy) than the spread method. Make sure you keep even pressure when mating the heatsink to the GPU so the paste doesn't pump out to one side while you're threading the backplate screws. Backplate screws always in a cross pattern, one turn at a time.

I know this stuff sounds really extra. I've been repasting builds for 18 years and this was by far the most finicky.

Let us know how you make out!


----------



## ogmadvlad

Are the reference models prone to dying? I've had my 6900 XT under an EK Classic block for about half a year, and it died in the most uneventful way ever.
I had no temperature issues, and the card was mostly used for Dota 2 and mining ETH when idle.

Here's how my card's death played out:
I was playing Dota 2 with a -50% power limit with no issues. After closing the game I browsed the web for about an hour, still with no issues. When I was getting off the PC I tried to start PhoenixMiner and it wouldn't launch. I restarted the PC, thinking something in Windows was blocking the miner or a driver had crashed and needed a restart. After the PC turned off, the card never came back on: no video in POST. I let the PC boot, remoted in, and checked Device Manager: the card showed an exclamation mark with error 43. I tried booting with another card in the VGA slot, and it boots fine with an old 8400 GS. After a reboot the card doesn't show up in Device Manager at all; totally dead. Could gaming with a -50% power limit and a 2100 MHz memory clock have killed the card somehow, or was it just a faulty card that took some time to die? There were no leaks or anything wrong with the cooling system; the card ran fine. Now I guess I'll sell it for parts, since I bought it second-hand and put a waterblock on it, so there's no chance of AMD offering warranty repair.


----------



## Blameless

ogmadvlad said:


> Are the reference models prone to dying? I've had my 6900 XT under an EK Classic block for about half a year, and it died in the most uneventful way ever.
> I had no temperature issues, and the card was mostly used for Dota 2 and mining ETH when idle.
> 
> Here's how my card's death played out:
> I was playing Dota 2 with a -50% power limit with no issues. After closing the game I browsed the web for about an hour, still with no issues. When I was getting off the PC I tried to start PhoenixMiner and it wouldn't launch. I restarted the PC, thinking something in Windows was blocking the miner or a driver had crashed and needed a restart. After the PC turned off, the card never came back on: no video in POST. I let the PC boot, remoted in, and checked Device Manager: the card showed an exclamation mark with error 43. I tried booting with another card in the VGA slot, and it boots fine with an old 8400 GS. After a reboot the card doesn't show up in Device Manager at all; totally dead. Could gaming with a -50% power limit and a 2100 MHz memory clock have killed the card somehow, or was it just a faulty card that took some time to die? There were no leaks or anything wrong with the cooling system; the card ran fine. Now I guess I'll sell it for parts, since I bought it second-hand and put a waterblock on it, so there's no chance of AMD offering warranty repair.


Was probably defective out of the gate, damaged before you bought it, or damaged when mounting the block. Stock voltages, with a reduced power limit, at good temperatures isn't going to kill a non-defective card, no matter what's run on it, in anything less than several years.


----------



## Godhand007

ogmadvlad said:


> Are the reference models prone to dying? I've had my 6900 XT under an EK Classic block for about half a year, and it died in the most uneventful way ever.
> I had no temperature issues, and the card was mostly used for Dota 2 and mining ETH when idle.
> 
> Here's how my card's death played out:
> I was playing Dota 2 with a -50% power limit with no issues. After closing the game I browsed the web for about an hour, still with no issues. When I was getting off the PC I tried to start PhoenixMiner and it wouldn't launch. I restarted the PC, thinking something in Windows was blocking the miner or a driver had crashed and needed a restart. After the PC turned off, the card never came back on: no video in POST. I let the PC boot, remoted in, and checked Device Manager: the card showed an exclamation mark with error 43. I tried booting with another card in the VGA slot, and it boots fine with an old 8400 GS. After a reboot the card doesn't show up in Device Manager at all; totally dead. Could gaming with a -50% power limit and a 2100 MHz memory clock have killed the card somehow, or was it just a faulty card that took some time to die? There were no leaks or anything wrong with the cooling system; the card ran fine. Now I guess I'll sell it for parts, since I bought it second-hand and put a waterblock on it, so there's no chance of AMD offering warranty repair.


You could try an RMA after putting the stock cooler back on.


----------



## alceryes

ogmadvlad said:


> Are the reference models prone to dying?


Not any more than other AiB partner's 6900 XTs. But using your card 24/7, with half the day spent crypto mining, could wear down any card.
Remember, these are consumer-grade GPUs. They aren't designed for 24/7 operation at 90%+ load usage.


----------



## Blameless

alceryes said:


> Not any more than other AiB partner's 6900 XTs. But using your card 24/7, with half the day spent crypto mining, could wear down any card.
> Remember, these are consumer-grade GPUs. They aren't designed for 24/7 operation at 90%+ load usage.


When it comes to the chips themselves, there is no difference between a consumer-grade and a professional GPU other than the binning. The support components, even on reference consumer models, are also rated for several tens of thousands of hours at normal temperatures (typically 2k-5k hours at 105°C to 125°C, depending on the component, with that time doubling with every 10-20°C reduction in temp). The reference 6900 XT PCB is also significantly over-built for its stock 100% power limit, let alone 50%. Mining is also a low load (on everything except the GDDR6 and SoC) and a very static one, resulting in negligible thermal cycling.
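To put rough numbers on that doubling rule (purely an illustrative sketch; real derating curves from component datasheets are messier, and the 10-20°C step varies by part):

```python
# Rule-of-thumb component life scaling: rated life roughly doubles for
# every `step_c` degrees C the part runs below its rated temperature.
# All numbers here are illustrative, not from any specific datasheet.
def scaled_life_hours(rated_hours: float, rated_temp_c: float,
                      actual_temp_c: float, step_c: float = 10.0) -> float:
    return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / step_c)

# A part rated 5,000 h at 105C, running at a steady 65C:
hours = scaled_life_hours(5000, 105, 65)
print(f"{hours:,.0f} h ~= {hours / 8760:.1f} years of 24/7 operation")
```

Which is why 24/7 mining at modest temperatures doesn't come close to the rated life of the support components.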


----------



## Th3Fly1ngCow

Scorpion667 said:


> Careful with Sapphire; they claim the warranty is void on a user repaste, per the official response below:
> View attachment 2550543
> 
> 
> If you care about warranty perhaps check if there's a Sapphire RMA center in your city in which case you can probably get them to resolve this quickly. I was very surprised to learn there is one in my city but it's some third party company with a very questionable website.
> 
> That said, my Sapphire Toxic 6900XT started to also get thermal shutdowns in TimeSpy after 4-5 months of gaming. The stock paste was cooked and fully pumped out (pics). I repasted with GC Extreme and washer mod as described here and that resolved the issue. Key observations are that the card is picky with thermal paste as well as application.
> 
> I had great success with GC Extreme but poor results with Kryonaut. As for application, I went with the "star" method I copied from this guy, which worked slightly better (-2°C hotspot in Time Spy) than the spread method. Make sure you keep even pressure when mating the heatsink to the GPU so the paste doesn't pump out to one side while you're threading the backplate screws. Backplate screws always in a cross pattern, one turn at a time.
> 
> I know this stuff sounds really extra. I've been repasting builds for 18 years and this was by far the most finicky.
> 
> Let us know how you make out!


Thank you for all the information. I will do the washer mod, so thanks for the correct size too. I repasted with MX-4; I don't expect it to last long, but I torqued everything in a cross pattern (spent a career as a mechanic, so it's natural to me). I did the spread method and it seemed to be fine. I'm hard on my card; it games every moment I'm not at work, so I fully expect this paste to cook out sooner rather than later, but I have a Micro Center fairly close. Thanks for the detailed write-up. Glad I found you guys or I'd be twiddling my thumbs; this is the first real GPU I've ever repasted.


----------



## spectra9

Hello everyone. I'm not in the club yet, so to speak, but I do have the opportunity to acquire the *Sapphire TOXIC AMD RX 6900 XT Extreme Edition*. I have one concern, though. Since I've already installed an AIO at the top of my case (Phanteks P500A), would it be OK to install the Toxic's AIO at the front? Or would it be beneficial enough to swap the two?


----------



## 6u4rdi4n

spectra9 said:


> Hello everyone. I'm not in the club yet, so to speak, but I do have the opportunity to acquire the *Sapphire TOXIC AMD RX 6900 XT Extreme Edition*. I have one concern, though. Since I've already installed an AIO at the top of my case (Phanteks P500A), would it be OK to install the Toxic's AIO at the front? Or would it be beneficial enough to swap the two?


Test it! That's what I would do. I would probably also set both of them to exhaust. At least test it.


----------



## spectra9

6u4rdi4n said:


> Test it! That's what I would do. I would probably also set both of them to exhaust. At least test it.


Yeah, that's probably the best way to know for sure. I've seen both are valid configurations, so was hoping for some insight or reasoning of why one is preferred over the other. I think I'll just start with it at the front and see how that goes.

Btw, why both exhaust though?


----------



## Scorpion667

I run both 360 rads as exhaust. The temps are really good 72c max CPU (5.1Ghz 9900KS) and 58c max GPU (2800/2162) after long gaming session with reasonable fan curves. I'm stubborn and don't want anything dumping hot air in the case I guess. Keep in mind this config is not great for RAM temperatures with side panel closed. I have to run a RAM cooler to keep the B-Die stable at 4400c16 CR1 1.575v


----------



## 6u4rdi4n

spectra9 said:


> Yeah, that's probably the best way to know for sure. I've seen both are valid configurations, so was hoping for some insight or reasoning of why one is preferred over the other. I think I'll just start with it at the front and see how that goes.
> 
> Btw, why both exhaust though?


Just a thought. Radiators move a decent amount of heat, so I would imagine that with two separate loops, in this case AIOs, where one dumps heat into the case and the other exhausts it, one would heat up the other.


----------



## the matty

Late to the club but I'm finally here with a red devil ultimate, shoved an EK vector block and backplate on it and it's beautiful


----------



## spectra9

Scorpion667 said:


> I run both 360 rads as exhaust. The temps are really good 72c max CPU (5.1Ghz 9900KS) and 58c max GPU (2800/2162) after long gaming session with reasonable fan curves. I'm stubborn and don't want anything dumping hot air in the case I guess. Keep in mind this config is not great for RAM temperatures with side panel closed. I have to run a RAM cooler to keep the B-Die stable at 4400c16 CR1 1.575v
> 
> View attachment 2550730


Great looking rig! Do you run it with open panels like this? If I understand correctly, with the panel closed, the only source of cool air would be from the bottom, which is kinda a problem in my P500A since it doesn't have a bottom intake



6u4rdi4n said:


> Just a thought. Radiators move a decent amount of heat, so I would imagine that with two separate loops, in this case AIOs, where one dumps heat into the case and the other exhausts it, one would heat up the other.


Gotcha. But since my case does not have a bottom intake, I imagine it would be bad to run both as exhaust. Unless maybe if I keep the side panel open


----------



## Godhand007

So, I just installed and did some basic testing on my Toxic Extreme Edition card. I am a bit disappointed with the clocks I am getting: I can only do ~2650 MHz at default voltage (1.2 V), and changing PL limits has no impact on these clocks. Also, my voltage stays pretty high even at idle if I deviate from stock. Can someone with a Toxic EE card confirm this behavior?



alceryes said:


> ..
> Overclocking is always a gamble. It could be that the core you get isn't really that good (comparatively speaking) and that the reference card you currently have actually overclocks better. The chance of this is probably low but it's not zero.


Well, your warning partially materialized. I am stuck at ~2650 MHz with the Toxic EE card. It's better than my reference by around 100 MHz, but I was hoping to get around 2800 without any issues. I guess I might have to try TempDepVmin again.


----------



## Scorpion667

spectra9 said:


> Great looking rig! Do you run it with open panels like this? If I understand correctly, with the panel closed, the only source of cool air would be from the bottom, which is kinda a problem in my P500A since it doesn't have a bottom intake
> 
> 
> Gotcha. But since my case does not have a bottom intake, I imagine it would be bad to run both as exhaust. Unless maybe if I keep the side panel open


Thanks; I went with function over looks, but it's not terrible aesthetically. I close all side panels 99% of the time unless I'm benching. I had a look at your case, and dual exhaust rads would only work with the side panel open. With it closed, I think the system would choke, since the only sources of intake would be the grille over the PCIe slots and the rear fan if you reverse it. If you go with one intake and one exhaust, I would move the GPU rad to the top, since it's dissipating 400+ watts vs. the CPU's 200+ watts. I'd want that 400 W of exhaust air going directly out of the case, so GPU exhaust at the top and CPU intake at the front.



Godhand007 said:


> So, I just installed and did some basic testing on my Toxic Extreme Edition card. I am a bit disappointed with the clocks I am getting: I can only do ~2650 MHz at default voltage (1.2 V), and changing PL limits has no impact on these clocks. Also, my voltage stays pretty high even at idle if I deviate from stock. Can someone with a Toxic EE card confirm this behavior?
> 
> View attachment 2550794
> 
> 
> 
> 
> 
> Well, your warning partially materialized. I am stuck at ~2650 MHz with the Toxic EE card. It's better than my reference by around 100 MHz, but I was hoping to get around 2800 without any issues. I guess I might have to try TempDepVmin again.


I have the same card. These are guaranteed to hit 2730 with the Sapphire Trixx Software using "Toxic Boost". As your card does not meet the minimum advertised requirement I would return it and reroll the dice.

See my HWiNFO below; I hope it helps answer your idle voltage question. This is just the latest Win10, the latest recommended Radeon driver, idle, 2800/2162 undervolted to 1090 mV, GPU fans set to stop at idle. I didn't notice the high-voltage-at-idle thing you described, but I also have a tuned OS with HW acceleration disabled in Chrome, Steam, and Discord. It's probably some software/HW acceleration keeping your GPU busy. If you want me to try something specific, let me know.


----------



## Godhand007

Scorpion667 said:


> I have the same card. These are guaranteed to hit 2730 with the Sapphire Trixx Software using "Toxic Boost". As your card does not meet the minimum advertised requirement I would return it and reroll the dice.
> 
> See my HWinfo below hope it helps answer your idle voltage question. This is just latest win10, latest recommended radeon driver, idle, 2800/2162 undervolt to 1090mv, GPU fans set to stop at idle. I didn't notice the high voltage at idle thing you described but I also have tuned OS with HW acceleration disabled in Chrome, Steam, Discord. It's probably some software/hw acceleration keeping your GPU busy. If you want me to try something specifically let me know
> View attachment 2550799


Thanks for the reply, but _*hitting 2730 with the Sapphire TriXX software using "Toxic Boost"*_ is not guaranteed in all scenarios (see the screenshot below). Idle voltage right now with TempDepVmin @ 1.281 V is as shown below. I can do 2800 MHz in GT2, but I require 1.216 V (max voltage in HWiNFO for the same). You have a golden chip if you don't require such high voltages.
I do need to mention one thing that is great about my card, though: temperatures are ****ing awesome. Even with a max voltage of 1.216 V (1.281 V in MPT) it rarely breaches 90°C in GT2. This is with a PL of 506 W. I do have an open case, though.


----------



## Scorpion667

Godhand007 said:


> Thanks for the reply, but _*hitting 2730 with the Sapphire TriXX software using "Toxic Boost"*_ is not guaranteed in all scenarios (see the screenshot below). Idle voltage right now with TempDepVmin @ 1.281 V is as shown below. I can do 2800 MHz in GT2, but I require 1.216 V (max voltage in HWiNFO for the same). You have a golden chip if you don't require such high voltages.
> I do need to mention one thing that is great about my card, though: temperatures are ****ing awesome. Even with a max voltage of 1.216 V (1.281 V in MPT) it rarely breaches 90°C in GT2. This is with a PL of 506 W. I do have an open case, though.
> 
> View attachment 2550800
> View attachment 2550801


If you undo all your MPT tweaks and just loop TS GT1+GT2, does it crash with just Toxic Boost enabled? If so, I would record that and use it as evidence with the retailer. You paid a premium for a binned card that cannot meet its advertised specification. Yeah, they'll try to fight it, but if you stand your ground maybe they can do something. 2650 appears to be on the low end for XTXH chips, so I think it's worth the effort to send a few emails in an attempt to get a card worthy of the huge premium these carry over reference. I know overclocking is not guaranteed, but when you advertise a magic one-click OC button, it is expected that it will not cause crashes/malfunctions.

[Edit]
Just checked and this is what Toxic Boost does:
Min GPU: 1000
Max GPU: 2730
VRAM: 2100
PL: +7%

Also forgot to ask, are you on the performance vBIOS? Switch should be all the way to the right so closest to the AIO tubing. But turn the system off first before switching if anything


----------



## aslidop

Finally got my Red Devil 6900XT under an EK Vector block w/ backplate (Kryonaut paste + Gelid pads). I'm able to stabilize at 2600MHz + 2050 on the memory. I've used MPT to bump up to 400W, but I never see it pull down more than 380W. Is this in the ballpark for what I should expect? TimeSpy scores seem a little lower than I'd hoped for (~22500 GS, ~12500 CPU), but I've still gotta finish tuning my RAM and CO. Specs are the 5800X build in my sig.


----------



## LtMatt

Men, the latest beta of HWINFO64 supports ASIC reading on Navi GPUs. I just checked and my 6900 XT has a rating of 88.4%. To view yours you need to look at the system information menu in the latest beta.


----------



## aslidop

LtMatt said:


> Men, the latest beta of HWINFO64 supports ASIC reading on Navi GPUs. I just checked and my 6900 XT has a rating of 88.4%. To view yours you need to look at the system information menu in the latest beta.


81.4% here


----------



## D-EJ915

Huh says my devil ultimate is 91%, I have a block but only used the stock cooler with it. I only have 21.7.2 driver with this bench though if that makes a difference.


----------



## Th3Fly1ngCow

Model: Nitro+SE XTX
Default clock: 2500MHz
ASIC: 84.5%


----------



## 99belle99

85.9% Reference card with stock cooler.


----------



## Simzak

GPU Model: 6900 XTXH (MSI Gaming Z Trio)
Default Clock: 2579
Asic Quality: 88.5%


----------



## Godhand007

Video Card: Sapphire RX 6900 XT Toxic Extreme Edition

ASIC Quality: 89.3%, yet I can't go over ~2650MHz without TempDepVmin.


----------



## Godhand007

Scorpion667 said:


> If you undo all your MPT tweaks and just loop TS GT1+GT2, does it crash with just Toxic Boost enabled? If so, I would record that and use it as evidence to fight the retailer. You paid a premium for a binned card that cannot meet its advertised specification. Yeah, they'll try to fight it, but if you stand your ground maybe they can do something. 2650 appears to be on the low end for XTXH chips, so I think it's worth the effort to send a few emails in an attempt to get a card worthy of the huge premium these carry over reference. I know overclocking is not guaranteed, but when you advertise a magic one-click OC button it is expected that it will not cause crashes/malfunctions.
> 
> [Edit]
> Just checked and this is what Toxic Boost does:
> Min GPU: 1000
> Max GPU: 2730
> VRAM: 2100
> PL: +7%
> 
> Also, forgot to ask: are you on the performance vBIOS? The switch should be all the way to the right, i.e. closest to the AIO tubing. To be safe, turn the system off before flipping the switch.


Couple of points about my situation:

1. I think it did pass TS with Toxic Boost enabled on the performance BIOS. I think it had failed on the silent BIOS.
2. I got this a bit cheaper than actual MSRP, around ~$1550, which is about ~$300 more than my reference card but still below list.
3. If I want a replacement then I need to go through Sapphire RMA. No going back to the retailer, I am afraid. Also, the _terms and conditions_ for Toxic Boost give Sapphire a lot of room to reject an RMA.
4. I am going to use TempDepVmin for this card. The max voltage that I see in HwInfo is 1.238v (I don't actually see this during 3D rendering). This is probably the best-built 6900 XT out there, so I think it can handle a little bit of voltage tweaking.
5. My ASIC is 89.3%. I might not get a card with a better/same ASIC again.

Question:

Just to confirm: when the switch is closest to the AIO it is the performance BIOS, silent in the middle, and the software BIOS switch farthest, right?


----------



## LtMatt

When posting your Asic quality, please use the following format.

GPU Model: 6900 XTXH (specify whether it is a XTX or XTXH)
Default Clock: 2599
Asic Quality: 88.4%

My initial theory based on results here and on OcuK is that XTXH die have a higher Asic quality. Need more results though before a pattern can be confirmed.


----------



## Godhand007

@Scorpion667
What are your voltages while a video is playing in a browser, and what driver are you on?


----------



## jonRock1992

My Red Devil Ultimate 6900 XTXH ASIC is 85.7%. I can't remember default clock behavior, but I think it's around 2550MHz. My GPU is an average over-clocker with water-cooling, but probably below average on the stock cooling.


----------



## Th3Fly1ngCow

The GF ordered me a Bykski block for the Nitro SE. First time going on water, wish me luck!


----------



## lestatdk

GPU Model: 6900 XTX
Default Clock: 2589
Asic Quality: 83.6%


----------



## Enzarch

GPU Model: 6900 XTXH (Ref XT LC)
Default Clock: 2579
Asic Quality: 87.1%


----------



## J7SC

FYI, there was a reason why GPU manufacturers pushed to have ASIC readouts taken out of GPU-Z, mostly related to RMAs for cards that had a low ASIC readout but not much else wrong with them. As a former XOCer, for subzero we would look for _low_ ASIC value / high-leakage cards, as they tend to scale better with cold. When I had Quad-SLI / Quad-Fire, same-model cards would range from 67.5% to 92%.

I am not sure if this still applies, but anything over 80% was considered 'good' for air / light water-cooling.


----------



## Scorpion667

Godhand007 said:


> Couple of points about my situation:
> 
> 1. I think it did pass TS with Toxic Boost enabled on the performance BIOS. I think it had failed on the silent BIOS.
> 2. I got this a bit cheaper than actual MSRP, around ~$1550, which is about ~$300 more than my reference card but still below list.
> 3. If I want a replacement then I need to go through Sapphire RMA. No going back to the retailer, I am afraid. Also, the _terms and conditions_ for Toxic Boost give Sapphire a lot of room to reject an RMA.
> 4. I am going to use TempDepVmin for this card. The max voltage that I see in HwInfo is 1.238v (I don't actually see this during 3D rendering). This is probably the best-built 6900 XT out there, so I think it can handle a little bit of voltage tweaking.
> 5. My ASIC is 89.3%. I might not get a card with a better/same ASIC again.
> 
> Question:
> 
> Just to confirm: when the switch is closest to the AIO it is the performance BIOS, silent in the middle, and the software BIOS switch farthest, right?


Thanks for clarifying that. It sounds like the card is better than I originally thought and has some headroom:
-stays reasonably cool at insane wattage (500w) with no throttling. Mine gets same temps at 430w
-great ASIC (mine is 88.2)
-Can do 2730 in TS on Perf vBIOS

Re: vBIOS switch see below pic from the product page (about 1/3 down):











Godhand007 said:


> @Scorpion667
> What are your voltages while a video is playing in a browser, and what driver are you on?


I reset the min/max/avg in HWinfo while the video was playing and left it running 15 mins. The card is not undervolted atm so 1.2v in Wattman. Chrome + YouTube video fullscreen. Driver 21.10.2 (latest WHQL)


----------



## D1g1talEntr0py

LtMatt said:


> Men, the latest beta of HWINFO64 supports ASIC reading on Navi GPUs. I just checked and my 6900 XT has a rating of 88.4%. To view yours you need to look at the system information menu in the latest beta.


GPU Model: Aorus Master RX 6900 XTX
Default Clock: 2504
ASIC Quality: 84.6%


----------



## KGV

Asrock Phantom 6900XT
86.0%


----------



## J7SC

Any summary re. potential issues with AMD's 22.2.3 drivers? I'm a 'driver laggard' and still on 21.12.1 🙃


----------



## Godhand007

Scorpion667 said:


> Thanks for clarifying that. It sounds like the card is better than I originally thought and has some headroom:
> -stays reasonably cool at insane wattage (500w) with no throttling. Mine gets same temps at 430w
> -great ASIC (mine is 88.2)
> -Can do 2730 in TS on Perf vBIOS
> 
> Re: vBIOS switch see below pic from the product page (about 1/3 down):
> View attachment 2550945
> 
> 
> 
> 
> I reset the min/max/avg in HWinfo while the video was playing and left it running 15 mins. The card is not undervolted atm so 1.2v in Wattman. Chrome + YouTube video fullscreen. Driver 21.10.2 (latest WHQL)
> View attachment 2550944


My card's behavior is quite a bit different from yours. I installed the same driver as you (I am on Win11, BTW) and here are my observations:

1. At default settings with Toxic Boost and the perf BIOS, it does pass TS, but if I mess around with the settings even a little bit it fails, and I have to restart the system to get it to pass TS again.

2. If I set my clocks to 2750-2850 MHz, the voltage goes to the full ~1.2v while watching videos (HW acceleration is disabled). If I decrease the lower clock frequency to 500 then it stays at around ~0.788v during video playback. If I am browsing or doing only CPU-related tasks it goes down to ~0.222v. This is without any MPT tweaks, btw.

3. With MPT tweaks, it goes to ~1.2v during video playback and 1.233v (the max voltage I have seen) during Halo: Infinite/TS. But during idle/browsing/CPU tasks it goes down to ~0.333v.

*I am mentioning the voltage behavior to highlight that, with or without the MPT voltage tweak, my card's behavior is the same (except for max voltage) while overclocking.* I hope this card can handle ~1.233v for a few hours of gameplay on a daily basis, as my reference card could only sustain such voltage for about a month and a half before I started getting issues with it. To be fair, it was reaching 110 C at those voltages, while this one stays around 80 C for a typical game session.

Thoughts from others are welcome as well.


----------



## alceryes

Godhand007 said:


> Well your warning materialized partially. I am stuck at 2650 Mhz clocks with Toxic EE card. It's better than my reference by around 100 Mhz but I was hoping to get around 2800 without any issues on it. I guess I might have to try TempDepVmin again.


Ugh, that sucks! Sorry to hear.
Something to note, sometimes a lower voltage will give better OCing results - even if temps are kept below what would be considered 'high'.
E.g. maybe your core can only do 2650MHz at 80C and 1.2v but at 1.175v it stays a little cooler (78C) and sometimes peaks at 2700MHz.

I would experiment a bit with lower voltage.


----------



## EyeCU247

6900XT LC
ASIC 87.8

Stock values per 3d mark test run.


----------



## Godhand007

alceryes said:


> Ugh, that sucks! Sorry to hear.
> Something to note, sometimes a lower voltage will give better OCing results - even if temps are kept below what would be considered 'high'.
> E.g. maybe your core can only do 2650MHz at 80C and 1.2v but at 1.175v it stays a little cooler (78C) and sometimes peaks at 2700MHz.
> 
> I would experiment a bit with lower voltage.


If you read my earlier posts ( 1 2), it is not as bad if I use voltage tweaks on it. This is probably the best-built 6900XT out there, with great cooling. I am getting a 30 C lower hotspot (typically in games) than my reference with the MPT voltage tweaks. I can do 2800 MHz on it with MPT voltage tweaks along with a PL of ~510W in GT2, which is leagues better than my reference RX 6900 XT. I am hopeful that the MPT voltage tweaks will not cause an issue on this card, for the reasons mentioned earlier.

About the lower-voltage suggestion: I tried that but didn't see any improvement for the clocks I was targeting.


----------



## SoloCamo

AMD 6900XT (base model, reference cooler)
ASIC 82.8%

Not sure where you are getting the stock-clock readouts you wanted posted?


----------



## EyeCU247

SoloCamo said:


> AMD 6900XT (base model, reference cooler)
> ASIC 82.8%
> 
> Not sure where you are getting the stock-clock readouts you wanted posted?


If this comment was not directed at me, I apologize for the interruption.

I didn't overclock the GPU (or anything else on the PC) and ran TimeSpy. It reported those speeds during the test.








I scored 19 375 in Time Spy: AMD Ryzen 9 3900X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)

This is my GFX card. Pretty close to spot on:
POWERCOLOR Radeon RX 6900 XT Liquid Cooled 16GB | VideoCardz.net (videocardz.net)


----------



## CS9K

LtMatt said:


> Men, the latest beta of HWINFO64 supports ASIC reading on Navi GPUs. I just checked and my 6900 XT has a rating of 88.4%. To view yours you need to look at the system information menu in the latest beta.


My two
GPU Model: "PowerColor" Reference RX 6900 XT LC (XTXH)
Default Clock: 2644
ASIC Quality: 91.2%

GPU Model: Reference RX 6900 XT (XTX)
Default Clock: 2524
ASIC Quality: 86.4%

A friend's
GPU Model: Reference RX 6900 XT (XTX)
Default Clock: 2499
ASIC Quality: 83.1%


----------



## J7SC

Folks, does anyone know what GFX_DCS and GFX_EDC per 'Feature Enablement' below in MPT do (I can probably guess at GFX_EDC), and if so, what a non-destructive range would be for either with a 3x8-pin?


----------



## SoloCamo

CS9K said:


> My two
> GPU Model: "PowerColor" Reference RX 6900 XT LC (XTXH)
> Default Clock: 2644
> ASIC Quality: 91.2%
> 
> GPU Model: Reference RX 6900 XT (XTX)
> Default Clock: 2524
> ASIC Quality: 86.4%
> 
> A friend's
> GPU Model: Reference RX 6900 XT (XTX)
> Default Clock: 2499
> ASIC Quality: 83.1%


I must be dense here or possibly blind, but where is everyone getting their default clock info from?


----------



## Enzarch

SoloCamo said:


> I must be dense here or possibly blind, but where is everyone getting their default clock info from?


In Radeon Software where you would change clocks, Whatever the 'Max Frequency' is set to at defaults
(Performance>Tuning>Enable manual tuning>enable GPU tuning>enable advanced control)


----------



## J7SC

SoloCamo said:


> I must be dense here or possibly blind, but where is everyone getting their default clock info from?


You can also use the GPUz / Advanced tab


----------



## LtMatt

CS9K said:


> My two
> GPU Model: "PowerColor" Reference RX 6900 XT LC (XTXH)
> Default Clock: 2644
> ASIC Quality: 91.2%
> 
> GPU Model: Reference RX 6900 XT (XTX)
> Default Clock: 2524
> ASIC Quality: 86.4%
> 
> A friend's
> GPU Model: Reference RX 6900 XT (XTX)
> Default Clock: 2499
> ASIC Quality: 83.1%


How is the 91.2% for overclocking?

My theory looks to be right. The XTXH have the higher Asic quality and the XTX all appear to have lower.
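
For what it's worth, a quick tally of the numbers posted in the thread so far agrees (values transcribed and die-typed by hand from the posts above, so treat it as illustrative only):

```python
# Rough comparison of the ASIC percentages posted so far, grouped by die type.
# Hand-transcribed from this thread; some die-type attributions are guesses.
asic = {
    "XTXH": [88.5, 89.3, 87.1, 91.2, 85.7],
    "XTX":  [83.6, 84.5, 85.9, 81.4, 82.8, 86.4, 83.1],
}

# Mean ASIC quality per die type, rounded to one decimal place.
means = {die: round(sum(vals) / len(vals), 1) for die, vals in asic.items()}
print(means)  # the XTXH average lands a few points above the XTX average
```

Small sample, obviously, so keep the results coming in the format above.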


----------



## J7SC

LtMatt said:


> How is the 91.2% for overclocking?
> 
> My theory looks to be right. The XTXH have the higher Asic quality and the XTX all appear to have lower.


...on paper yes, certainly... but what does that really mean re. overclocking? Especially with extensive cooling?

Check the actual and effective clock speeds below of my lowly, somewhat 'voltage-leaky' 6900 XTX for a few Superposition 4K runs. The '18043' was done today with the latest driver and increased PL, but otherwise stock, i.e. 1.175v. The '18389' was done with an earlier driver, increased PL and 1.218v. The 6900 XTX is my daily slogger for work, so I haven't pushed it further (yet), but for comparison, the 3090 Strix result is in the upper-left corner.


----------



## LtMatt

J7SC said:


> ...on paper yes, certainly... but what does that really mean re. overclocking ? Especially with extensive cooling ?
> 
> Check the actual and effective clock speeds below of my lowly, somewhat 'voltage-leaky' 6900 XTX for a few Superposition 4K runs. The '18043' was done today with the latest driver and increased PL, but otherwise stock, ie. 1.175v. The '18389' was done with an earlier driver, increased PL and 1.218v. The 6900 XTX is my daily slogger for work so I haven't pushed it further (yet) but for comps, the 3090 Strix result is in the upper left corner.
> View attachment 2551097


For water and exotic cooling, I am sure the old summary from GPU-Z on how to read ASIC % still applies to some degree.


----------



## Godhand007

Hey All,

I am thinking about flashing the LC BIOS on my Toxic EE card. I have found this post with details, but I have a few questions before I attempt a flash:

1. I have seen some people use another tool (a programmer that works on Windows) apart from the method mentioned above. Can someone provide a few details on it?
2. My card has two BIOSes. In case one BIOS is corrupted, how do I revert it? I have seen people being told to boot with the working BIOS and then toggle the switch before reverting the flash. But wouldn't toggling the switch while on Windows/Linux cause an issue?
3. In case I brick both BIOSes, is there any chance to recover from that? I do have a spare GPU which I can use to boot the system, but what does one need to do after that?

@robiatti @weleh @lawson67 @Neoki


----------



## Blameless

Godhand007 said:


> But wouldn't toggling the switch while on Windows/Linux cause an issue?


No.



Godhand007 said:


> In case I brick both BIOSes, is there any chance to recover from that? I do have a spare GPU which I can use to boot the system, but what does one need to do after that?


It's extremely difficult to brick both BIOSes...virtually impossible without trying to do so deliberately or having a serious hardware failure that would make the question of BIOS almost moot.

That said, if you do manage to screw things up this badly, fixing it follows the same procedure as flashing the firmware normally, though you may have to specify a different adapter number if you're booting from a different one.


----------



## Godhand007

Blameless said:


> No.


Thanks for the reply. Any knowledge about point no.1?



> It's extremely difficult to brick both BIOSes...virtually impossible without trying to do so deliberately or having a serious hardware failure that would make the question of BIOS almost moot.
> 
> That said, if you do manage to screw things up this badly,* fixing it follows the same procedure as flashing the firmware normally*, though you may have to specify a different adapter number if you're booting from a different one.


If I understand you correctly: if both BIOSes are bricked, I need to boot into the system using the spare GPU and follow the same procedure as discussed here, right?


----------



## CS9K

LtMatt said:


> How is the 91.2% for overclocking?
> 
> My theory looks to be right. The XTXH have the higher Asic quality and the XTX all appear to have lower.


I'm looking at a 28-30C core-to-hotspot delta-T, which is annoying, but I get it: this little AIO holds so little water that it's doing fantastic given the circumstances. I will give it to the 120mm AIO though; in the iPPC 2000 120mm sandwich I have made with it, it will keep the GPU running at 2700MHz with overclocked memory. I just can't let the water temp get above 37C or the hotspot will run away (I use a temp probe stuck in the radiator on the water-in side). But, I get it, the 120mm radiator was designed for OEM PCs, not idiots like me with power tools and a glimmer in my eye :3

I've set the core to run at 2850MHz, and it did briefly before the water came up to temp (in the Heaven benchmark), and it seemed to behave itself. I'm going to push the LC for a week or two while I prepare the next case for the loop migration, so that if the LC needs to be RMA'd, the warranty sticker will at least still be intact. Memory tops out at 2390MHz at default timings (fast timings are not stable at any clock speed).

I can't really push the core further, as the hotspot will bounce off the 95C limit, and while I think I _could_ remove that limit with MPT, I don't _want_ to. I suspect it will be a fantastic overclocker once I get it onto my water block 🧡

Currently I have the included Delta fan sitting on the back of the GPU, blowing onto the backplate, but I think I will pull the backplate off, re-attach the screws with washers, and let the fan blow onto the back of the PCB; god knows the AIO will appreciate any help it can get. I haven't removed the Delta fan because the connector is inside the front of the GPU, and I don't wish to pull that side apart yet.


----------



## J7SC

LtMatt said:


> For water and exotic cooling I am sure the old summary from GPU-Z on how to read Asic % still apply to some degree.


1.) In general terms, that is likely true... but there were problems before with the ASIC readout, which is why they first added the little table display, then took ASIC off completely. Since I water-cool everything anyway (the AM4 / 6900XT combo has 1200 x 62mm of rad space, 2x D5s), a lower ASIC is preferable. Still, that doesn't explain the fairly high effective clocks on air when new...









2.) FYI, I found something interesting when re-visiting Hardwareluxx and its MPT guide, which had an update... Apparently, when running TEMP_DEPENDENT_VMIN, going to the feature list and checking 'GFX_ULV' and 'GFXOFF' means that voltage will drop under appropriately light loads, but still return to the TEMP_DEPENDENT_VMIN-set voltage during high load

3.) @CS9K - I think I found the section in the registry which locks the VRAM to 2150 max...even with 'run as admin' that was locked again, but I exported that segment and opened it...too bad I don't know which one is which

EDIT: Re. 2.) please refer > to this


----------



## CS9K

J7SC said:


> 2.) FYI, I found something interesting when re-visiting Hardwareluxx and its MPT guide, which had an update... Apparently, when running TEMP_DEPENDENT_VMIN, going to the feature list and checking 'GFX_ULV' and 'GFXOFF' means that voltage will drop under appropriately light loads, but still return to the TEMP_DEPENDENT_VMIN-set voltage during high load
> 
> 3.) @CS9K - I think I found the section in the registry which locks the VRAM to 2150 max... even with 'run as admin' it was locked again, but I exported that segment and opened it... too bad I don't know which one is which


2.) Had I kept experimenting with this, I reckon I would have found this eventually. It is awesome that you have discovered this. With time and testing, you may have found a way to safely increase core voltage!

3.) Interesting! I no longer have the memory clock speed issue assuming I keep the 69 XT LC, but good to know! I wonder if @hellm can find this information useful for MPT use in the future. 🧡


----------



## Blameless

Godhand007 said:


> Any knowledge about point no.1?


I've always used the Linux flash tool for the RX 6000 series because it allows force flashing.



Godhand007 said:


> If I understand you correctly, if both bioses are bricked, I need to boot into the system using the spare GPU and follow the same procedure as discussed here, right?


Yes, though as noted chances are the adapter won't be 0, but 1, if it's not the boot device. Always double check before flashing.


----------



## LtMatt

J7SC said:


> 1.) In general terms, that is likely true... but there were problems before with the ASIC readout, which is why they first added the little table display, then took ASIC off completely. Since I water-cool everything anyway (the AM4 / 6900XT combo has 1200 x 62mm of rad space, 2x D5s), a lower ASIC is preferable. Still, that doesn't explain the fairly high effective clocks on air when new...
> View attachment 2551186
> 
> 
> 2.) FYI, I found something interesting when re-visiting Hardwareluxx and its MPT guide, which had an update... Apparently, when running TEMP_DEPENDENT_VMIN, going to the feature list and checking 'GFX_ULV' and 'GFXOFF' means that voltage will drop under appropriately light loads, but still return to the TEMP_DEPENDENT_VMIN-set voltage during high load
> 
> 3.) @CS9K - I think I found the section in the registry which locks the VRAM to 2150 max... even with 'run as admin' it was locked again, but I exported that segment and opened it... too bad I don't know which one is which


Good info cheers, will try the vmin tricks.


----------



## J7SC

LtMatt said:


> Good info cheers, will try the vmin tricks.


...haven't tried it myself yet as I just ran across it last night when looking for info on GFX_EDC. I hope to have some time later in the week / weekend to check that updated vmin approach out - could be useful for more than just a few bench runs - though initially, HWInfo should be open to see what's happening with volts, amps and temps over longer periods.


----------



## Trevbev

Sapphire Toxic Extreme (XTXH)
2375MHz
86.8% ASIC


----------



## ZealotKi11er

My XTXH LC is 89.1%


----------



## esebey

Sapphire Toxic Extreme (XTXH) 
91.2%


----------



## alceryes

AMD reference RX 6900 XT (bought direct from AMD)
ASIC is only 80.1%

Undervolted and with a mild overclock gets me 22K GPU Time Spy score so I'm not complaining.


----------



## J7SC

alceryes said:


> AMD reference RX 6900 XT (bought direct from AMD)
> ASIC is only 80.1%
> 
> Undervolted and with a mild overclock gets me 22K GPU Time Spy score so I'm not complaining.


...have you thought about water-cooling it, with plenty of MPT PL ? Could do very well (see above)...


----------



## alceryes

J7SC said:


> ...have you thought about water-cooling it, with plenty of MPT PL ? Could do very well (see above)...


Not gonna happen in the short-term but perhaps in the long run.


----------



## ZealotKi11er

J7SC said:


> ...have you thought about water-cooling it, with plenty of MPT PL ? Could do very well (see above)...


Your theory is the opposite: low quality means low leakage, which does not clock high regardless of air, water, or LN2. A 90% card will destroy an 80% card under water... not even close.


----------



## Godhand007

CS9K said:


> 2.) Had I kept experimenting with this, I reckon I would have found this eventually. It is awesome that you have discovered this. With time and testing, you may have found a way to safely increase core voltage!


I thought this had already been known for around two months; it was to me at least. One thing to note, though: even during non-intensive 3D tasks (e.g. YouTube), the voltage will still spike higher than default. You can play around and check.


----------



## CS9K

Godhand007 said:


> I thought this had already been known for around two months; it was to me at least.


I must have missed the rest of it; I knew about Temp Dependent Vmin, didn't know the rest.


----------



## CS9K

From an acquaintance:
GPU Model: PowerColor RX 6900 XT Red Devil Ultimate (XTXH)
Default Clock: 2559
ASIC Quality: 87.1%


----------



## J7SC

ZealotKi11er said:


> Your theory is the opposite: low quality means low leakage, which does not clock high regardless of air, water, or LN2. A 90% card will destroy an 80% card under water... not even close.


I think you're wrong on that...

a.) Low ASIC quality generally means high leakage > just one of many references. There's also Google...
b.) Lots of other factors play into overclocking capabilities beyond just ASIC %
c.) Per GPUz - the 3rd time I'm posting this now









d.) My low Asic / extensive water-cooling benching results of > 2800 MHz effective above and elsewhere in this thread...


----------



## 2080tiowner

Hi all,

Have you got the vbios of sapphire toxic extreme lc please ?


----------



## 6u4rdi4n

GPU model: XFX Speedster MERC 319 RX 6900 XT Ultra (XTX chip)
Default clock: 2534 MHz
ASIC Quality: 87.5%


----------



## alceryes

J7SC said:


> View attachment 2551257
> 
> 
> d.) My low Asic / extensive water-cooling benching results of > 2800 MHz effective above and elsewhere in this thread...


What's weird is that my card does around 2550-2575MHz and 2100MHz VRAM, with fast timings, all with a max voltage of 1.14v.
Eventually, when I start playing games that really push the FPS down, I'll consider adding a block.


----------



## SoloCamo

Enzarch said:


> In Radeon Software where you would change clocks, Whatever the 'Max Frequency' is set to at defaults
> (Performance>Tuning>Enable manual tuning>enable GPU tuning>enable advanced control)





J7SC said:


> You can also use the GPUz / Advanced tab


Thanks. GPU-Z always reported the standard 1825/2015/2250 (base/game/boost).

However, the AMD software has always shown 2474 as my default max clock.

So, to update the thread properly:

AMD Reference 6900XT
Default Clock: 2474MHz
ASIC 82.8%




alceryes said:


> AMD reference RX 6900 XT (bought direct from AMD)
> ASIC is only 80.1%
> 
> Undervolted and with a mild overclock gets me 22K GPU Time Spy score so I'm not complaining.


I have a barely higher ASIC value, and mine seems to undervolt well enough while holding 2400+ (it can do 2500-2600 depending on the title) on the reference cooler, so I guess we both have nothing to complain about. That said, ever since ASIC became a thing I've been skeptical of its actual reliability. However, I've never put a GPU on water, nor have I had two identical-model cards with a massive ASIC difference to compare.



alceryes said:


> What's weird is that my card does around 2550-2575MHz and 2100MHz VRAM, with fast timings, all with a max voltage of 1.14v.
> Eventually, when I start playing games that really push the FPS down, I'll consider adding a block.


We seem to have very, very similar results at the same voltages. Well, I'm a stickler, so even though I can do 1.14 I said "I'll play it safe" and made it 1.15, just because that number seems more proper (I usually do voltage in .05 increments). My memory, from a few hours of testing, scores highest at 2100 w/ fast timings. 2110 is doable too, but I backed it off as 2120 was scoring a bit lower.


----------



## hellm

CS9K said:


> [..]
> With time and testing, you may have found a way to safely increase core voltage!
> [..]


Be careful. I very much assume that the GPU still isn't able to reduce voltage under load. The low voltage at idle is just related to the ULV state; for any other load scenario, the TDV applies.

Tomorrow MPT 1.3.9 will be online. With this update it will be possible to change the ULV Vmin for GFX and SoC.


----------



## SoloCamo

Side note:

So what's up with RDNA2's 'high' power consumption during media playback? Even a 1080p30 video on YouTube reports 40-43W. This seems to match reviews, too. IIRC my Radeon VII used less than half of that.


----------



## alceryes

SoloCamo said:


> I have a barely higher ASIC value, and mine seems to undervolt well enough while holding 2400+ (it can do 2500-2600 depending on the title) on the reference cooler, so I guess we both have nothing to complain about. That said, ever since ASIC became a thing I've been skeptical of its actual reliability. However, I've never put a GPU on water, nor have I had two identical-model cards with a massive ASIC difference to compare.
> 
> We seem to have very, very similar results at the same voltages. Well, I'm a stickler, so even though I can do 1.14 I said "I'll play it safe" and made it 1.15, just because that number seems more proper (I usually do voltage in .05 increments). My memory, from a few hours of testing, scores highest at 2100 w/ fast timings. 2110 is doable too, but I backed it off as 2120 was scoring a bit lower.


Yup.
My next performance boost should be a good one. It'll be an AM5 platform with a Ryzen 7000-series. Unfortunately, there's some new speculation that it won't arrive this year.


----------



## alceryes

SoloCamo said:


> Side note:
> 
> So what's up with RDNA2's 'high' power consumption during media playback? Even a 1080p30 video on YouTube reports 40-43W. This seems to match reviews, too. IIRC my Radeon VII used less than half of that.


I just checked and, running a 1440p60 video, my card did peak at 40W TGP, but the average was only 27W TGP. At 1080p60 it doesn't go above 30W TGP max.
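
If anyone wants to sanity-check their own card the same way, here is a minimal sketch of the peak-vs-average comparison (the "GPU Power [W]" column name is my assumption; match it to whatever header your logging tool actually writes):

```python
# Peak vs. average of a logged power column, e.g. from a HWiNFO CSV export
# read with csv.DictReader. The column name below is an assumed placeholder.
def power_stats(rows, column="GPU Power [W]"):
    """Return (peak, average) watts for one column of a row-per-sample log."""
    vals = [float(r[column]) for r in rows if r.get(column)]
    return max(vals), sum(vals) / len(vals)

# Tiny fake log: one 40W spike at the start, then the card settles down.
log = [{"GPU Power [W]": w} for w in ("40", "24", "25", "23")]
peak, avg = power_stats(log)  # peak = 40.0, avg = 28.0
```

The point being: a short glance at the sensor readout catches the spike, while a longer log shows the average is much lower.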


----------



## SoloCamo

alceryes said:


> I just checked and, running a 1440p60 video, my card did peak at 40W TGP, but the average was only 27W TGP. At 1080p60 it doesn't go above 30W TGP max.


You know what, you are right - I didn't let the video run long enough. I let the video run for like 15 seconds and it settled around 24. My bad.


----------



## Godhand007

hellm said:


> Be careful. I very much assume that the GPU still isn't able to reduce voltage under load. The low voltage at idle is just related to the ULV state; for any other load scenario, the TDV applies.
> 
> Tomorrow MPT 1.3.9 will be online. With this update it will be possible to change the ULV Vmin for GFX and SoC.


Could you please elaborate on the ULV Vmin for GFX and SoC? What does it mean in terms of actual voltages during idle / web browsing / video / 3D load?
Right now, with a TDV of 1281mv, I see a voltage of 1.233v (max) during games/TS, ~1.2v during video playback, below ~700mv while browsing, and as low as 6mv at idle.


----------



## hellm

This is all the information I got:


Code:


// SECTION: ULV Settings
  uint16_t  UlvVoltageOffsetSoc; // In mV(Q2)
  uint16_t  UlvVoltageOffsetGfx; // In mV(Q2)

  uint16_t  MinVoltageUlvGfx; // In mV(Q2)  Minimum Voltage ("Vmin") of VDD_GFX in ULV mode
  uint16_t  MinVoltageUlvSoc; // In mV(Q2)  Minimum Voltage ("Vmin") of VDD_SOC in ULV mode
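A note on the `mV(Q2)` annotation in those fields: assuming Q2 here means unsigned fixed-point with two fractional bits (which is how these SMU table units are usually read), the raw register value is just millivolts times four. A minimal sketch of the conversion, under that assumption:

```python
# Hypothetical decode of the "In mV(Q2)" fields above, assuming Q2 means
# unsigned fixed-point with two fractional bits (raw = mV * 4).
def q2_to_mv(raw: int) -> float:
    """Convert a Q2 fixed-point field value to millivolts."""
    return raw / 4.0

def mv_to_q2(mv: float) -> int:
    """Encode millivolts as Q2 (rounds to the nearest quarter-millivolt)."""
    return round(mv * 4)

# e.g. a MinVoltageUlvGfx of 700 mV would be stored as 2800
print(q2_to_mv(2800))   # 700.0
print(mv_to_q2(700.0))  # 2800
```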


----------



## Trevbev

Has anyone else experienced incorrect power consumption readings in wattman?
Mine is jumping up to 500W for a second during a benchmark run. My power limit (including + 15%) is 380W.


----------



## SoloCamo

SoloCamo said:


> You know what, you are right - I didn't let the video run long enough. I let the video run for like 15 seconds and it settled around 24. My bad.


And after saying this now all videos sit at 40w again despite the timeframe. Weird but I'll investigate further.


----------



## CS9K

hellm said:


> The low voltage in idle is just related to ULV state, for any other load scenario, the TDV applies.


This was my experience when I tried the TDVmin trick; interesting that this behaviour persists even with ULV and the other options.

This is why I did not mess with TDVmin after the first time. I will go edit my post.

Thank you again for all you do, hellm! 🧡


----------



## hellm

As long as this project is fun, and as long as the community wants my help, it is my pleasure. And on a side note, I contributed to making our Igor a little better known to the rest of the world. Old-school and independent; we need that kind of tech press. His latest strike:


Gigabyte GeForce RTX 3080 GAMING OC WATERFORCE WB - How Gigabyte (doesn’t) react to the aluminum issue | igor'sLAB
I had discussed it last week and also thoroughly tested how the Gigabyte GeForce RTX 3080 GAMING OC WATERFORCE WB can behave in the water cooling loop. The problems around the aluminum used and the…
www.igorslab.de

Happy hardware to all of us!


----------



## J7SC

hellm said:


> Be carefull. I very much assume that the GPU still isn't able to reduce voltage under load. The low voltage in idle is just related to ULV state, for any other load scenario, the TDV applies.
> 
> Tomorrow MPT 1.3.9 will be online. With this update it will be possible to change ULV Vmin for GFX and Soc.





CS9K said:


> This was my experience when I did try the TDVmin trick, interesting that this behaviour persists even with ULV and other options.
> 
> This is why I did not mess with TDVmin after the first time. I will go edit my post.
> 
> Thank you again for all you do, hellm! 🧡


Thanks...looking forward to MPT 1.3.9 !

FYI, I edited my earlier post with a link to the above post as well. Per spoiler, this (arrow) is the update I read at the Hardwareluxx thread and referred to yesterday. 


Spoiler


----------



## cfranko

I switched my 6900 XT from liquid cooling to air cooling. Both setups use liquid metal. With liquid cooling I would get a 10-11C delta at 250 watts, but on air with liquid metal I get a 17C delta at 250 watts. Is it normal to have a delta difference between air and water?


----------



## CS9K

cfranko said:


> I switched my 6900 XT from liquid cooling to air cooling. Both setups use liquid metal. With liquid cooling I would get a 10-11C delta at 250 watts, but on air with liquid metal I get a 17C delta at 250 watts. Is it normal to have a delta difference between air and water?


Yes. The heatpipes can only "wick" heat away from the core _so_ fast, whereas water is a consistent flow of liquid at a steady temperature carrying that heat away. It's partly the volume of the substance carrying the heat (much smaller in the heatpipes/vapor chamber of an air cooler), and partly the surface area available to remove that heat from it.
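The two deltas quoted above compare directly if expressed as core-to-coolant thermal resistance (degrees per watt). These are just the poster's own numbers run through delta_T / P:

```python
# Thermal resistance (°C per watt) from a temperature delta and power draw.
def thermal_resistance(delta_c: float, power_w: float) -> float:
    return delta_c / power_w

water = thermal_resistance(10.5, 250)  # 10-11°C delta on water at 250W
air   = thermal_resistance(17.0, 250)  # 17°C delta on air at 250W
print(f"water: {water:.3f} °C/W, air: {air:.3f} °C/W")
```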


----------



## cfranko

CS9K said:


> Yes. The heatpipes can only "wick" heat away from the core _so_ fast, where water is a nice, consistent flow of liquid at a steady temperature to carry that heat away. It's part volume of substance that carries heat away (much smaller in the heatpipes/vapor chamber of an air cooler), part surface area that one can remove the heat from the substance that carries the heat away.


Thanks for the explanation. I get 62 edge and 78-79 hotspot at 260 watts on air cooling right now. I think it could be better, but it's good enough.


----------



## ZealotKi11er

J7SC said:


> I think you're wrong on that...
> 
> a.) Low ASIC quality is generally high leakage > just one of many references. There's also Google...
> b.) Lots of other factors play into overclocking capabilities beyond just ASIC %
> c.) Per GPUz - the 3rd time I'm posting this now
> View attachment 2551257
> 
> 
> d.) My low Asic / extensive water-cooling benching results of > 2800 MHz effective above and elsewhere in this thread...


Depends on what you label low or high.


----------



## WR-HW95

AMD Reference 6900XT
Default Clock: GPU-Z 2465MHz and Radeon software 2509MHz
ASIC: 83.8% 

That ASIC value might also be a way to check the health of a GPU, if it's logged when the card is new.
I had a GPU that broke after a few weeks of use (went unstable at its factory OC), and when I checked the ASIC again it had dropped from 78% to 68%.


----------



## Godhand007

WR-HW95 said:


> AMD Reference 6900XT
> Default Clock: GPU-Z 2465MHz and Radeon software 2509MHz
> ASIC: 83.8%
> 
> That ASIC value might also be a way to check the health of a GPU, if it's logged when the card is new.
> I had a GPU that broke after a few weeks of use (went unstable at its factory OC), and when I checked the ASIC again it had dropped from 78% to 68%.


Can someone corroborate this?


----------



## cfranko

Godhand007 said:


> Can someone corroborate this?


I have a 6900 XT I bought on release date, and I have been mining whenever I am not playing games since the day I bought it. So the card has almost never sat idle, and the ASIC is 81%. I don't know if this proves anything, but 81% seems pretty low.


----------



## Godhand007

cfranko said:


> I have a 6900 xt I bought on release date and I have been mining whenever I am not playing games since the day I bought it. So card has almost never sat at idle since the day I bought it and the ASIC is 81%. I don’t know if this proves anything but 81% seems pretty low.


Not if it is a reference card. In any case, we don't know whether ASIC quality is directly related to chip quality these days.


----------



## WR-HW95

cfranko said:


> I have a 6900 xt I bought on release date and I have been mining whenever I am not playing games since the day I bought it. So card has almost never sat at idle since the day I bought it and the ASIC is 81%. I don’t know if this proves anything but 81% seems pretty low.


The current value doesn't tell you anything if you don't know what it was out of the box.
In my case the GPU was probably broken from the factory, because it lost its OC so fast.
Anyway, in the RMA request I said the card's ASIC value had dropped along with the stability problems, and I got the answer that it shouldn't change. So I got a new card to replace it.


----------



## cfranko

I am planning to change my 6900 xt for a 3080 Ti because of dlss and better mining hashrate. But I don't want to lose raw gaming performance. Can someone who used both comment on the raw power of these two gpu's. In reviews they seem pretty similar but I want to hear the thoughts of an actual user.


----------



## SoloCamo

cfranko said:


> I am planning to change my 6900 xt for a 3080 Ti because of dlss and better mining hashrate. But I don't want to lose raw gaming performance. Can someone who used both comment on the raw power of these two gpu's. In reviews they seem pretty similar but I want to hear the thoughts of an actual user.


It's a bit late in the game at this point to swap them out IMO, with the next cards around the corner. That said, it depends on title and resolution, but it's a sidegrade at best. I haven't personally used a 3080 Ti myself, but I've read pretty much every review of them at this point.


----------



## alceryes

cfranko said:


> I am planning to change my 6900 xt for a 3080 Ti because of dlss and better mining hashrate. But I don't want to lose raw gaming performance. Can someone who used both comment on the raw power of these two gpu's. In reviews they seem pretty similar but I want to hear the thoughts of an actual user.


I don't have the 3080 Ti but, supposedly, the 6900 XT does slightly better at lower resolutions and the 3080 Ti does slightly better at higher resolutions. The only thing I don't like about the 3080 Ti is the low-ish amount of VRAM for a top NVIDIA card. This is one of the reasons I went with the 6900 XT to begin with. I want a card whose performance doesn't drop off a cliff when the VRAM gets full with the latest AAA games of 2024-2025. (Don't even get me started on the original 10GB 3080 version.)

For example, Horizon Zero Dawn was released on PC in August of 2020. This game, from 1½ years ago, can use over 12GBs of VRAM. Yes, used - not allocated. I only see it getting worse year over year.

Also, DLSS isn't always a win. Be sure and do your research on the game(s) you specifically want to use DLSS in. Play the devil's advocate and search for negative reviews to compare with the positive ones. In some games it's absolutely amazing what DLSS can do, in others, not so much.


----------



## cfranko

alceryes said:


> I don't have the 3080 Ti but, supposedly, The 6900 XT does slightly better at lower resolutions and the 3080 Ti does slightly better at higher resolutions. Only thing I don't like about the 3080 Ti is the low-ish amount of VRAM for a top NVIDIA card. This is one of the reasons I went with the 6900 XT to begin with. I want a card who's performance doesn't drop off a cliff when the VRAM gets full with the latest AAA games of 2024-2025. (don't even get me started on the original 10GB 3080 version)
> 
> For example, Horizon Zero Dawn was released on PC in August of 2020. This game, from 1½ years ago, can use over 12GBs of VRAM. Yes, used - not allocated. I only see it getting worse year over year.
> 
> Also, DLSS isn't always a win. Be sure and do your research on the game(s) you specifically want to use DLSS in. Play the devil's advocate and search for negative reviews to compare with the positive ones. In some games it's absolutely amazing what DLSS can do, in others, not so much.


Thanks for the detailed explanation. I want to use dlss with dying light 2 and cyberpunk. I already tried FSR and I was shocked to see how terrible it is. Everything looked super blurry. And at 1440P I can’t get 165 FPS on Ultra settings, which is the refresh rate of my monitor, in AAA games without FSR. And this bothers me so I thought DLSS would help me. And there is the hashrate advantage of the 3080 Ti which is a nice additional benefit.


----------



## alceryes

cfranko said:


> Thanks for the detailed explanation. I want to use dlss with dying light 2 and cyberpunk. I already tried FSR and I was shocked to see how terrible it is. Everything looked super blurry. And at 1440P I can’t get 165 FPS on Ultra settings, which is the refresh rate of my monitor in AAA games without FSR. And this bothers me so I thought DLSS would help me. And there is the hashrate advantage of the 3080 Ti which is a nice additional benefit.


Note that my 'slightly better/slightly worse' comment is with ray tracing mostly off. Currently, ray tracing destroys performance on AMD cards. Hopefully, this will change as games start using the software-based Lumen ray tracing in the latest Unreal Engine.

So, if you want to turn ray tracing up to the max, the 3080 Ti will give a major boost to performance.


----------



## J7SC

cfranko said:


> Thanks for the detailed explanation. I want to use dlss with dying light 2 and cyberpunk. I already tried FSR and I was shocked to see how terrible it is. Everything looked super blurry. And at 1440P I can’t get 165 FPS on Ultra settings, which is the refresh rate of my monitor, in AAA games without FSR. And this bothers me so I thought DLSS would help me. And there is the hashrate advantage of the 3080 Ti which is a nice additional benefit.


I run a w-cooled 6900XT and a w-cooled 3090 Strix on two separate but compatible setups in one mobo. While the 3090 is still a step up from the 3080 Ti, it is close enough re. your question. There are a few titles where the 6900XT can beat the 3090, but typically, at 4K at least, the 3090 beats the 6900XT. Ditto for DLSS titles such as Cyberpunk 2077. IMO, they are both good cards though.


----------



## D-EJ915

cfranko said:


> I am planning to change my 6900 xt for a 3080 Ti because of dlss and better mining hashrate. But I don't want to lose raw gaming performance. Can someone who used both comment on the raw power of these two gpu's. In reviews they seem pretty similar but I want to hear the thoughts of an actual user.


6900 xt, 3080 ti and 3090 all perform about the same at 1440p-4k in ff14, never tested any other games. The nv cards run faster at lower resolutions 1080p and 720p. 14 uses very little gpu memory though so a more memory intensive game would probably lean more toward the nv gpus since their bandwidth is much better.
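To put the bandwidth point in numbers: peak memory bandwidth is bus width (in bytes) times per-pin data rate. Using the published specs for each card (256-bit 16Gbps GDDR6 on the 6900 XT, 384-bit 19Gbps GDDR6X on the 3080 Ti):

```python
# Peak memory bandwidth in GB/s = (bus width in bytes) x (per-pin Gbps).
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

rx6900xt  = bandwidth_gbs(256, 16.0)  # GDDR6
rtx3080ti = bandwidth_gbs(384, 19.0)  # GDDR6X
print(rx6900xt, rtx3080ti)  # 512.0 912.0
```

Worth noting that the 6900 XT partly offsets its narrower bus with the 128MB Infinity Cache, so the raw numbers overstate the gap in cache-friendly workloads.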


----------



## cfranko

J7SC said:


> I run a w-cooled 6900XT and a w-cooled 3090 Strix on two separate but compatible setups in one mobo. While the 3090 is still a step up from the 3080 Ti, it is close enough re. your question. There are a few titles where the 6900XT can beat the 3090, but typically, at 4K at least, the 3090 beats the 6900XT. Ditto for DLSS titles such as Cyberpunk 2077. IMO, they are both good cards though.





D-EJ915 said:


> 6900 xt, 3080 ti and 3090 all perform about the same at 1440p-4k in ff14, never tested any other games. The nv cards run faster at lower resolutions 1080p and 720p. 14 uses very little gpu memory though so a more memory intensive game would probably lean more toward the nv gpus since their bandwidth is much better.


I put my 6900 xt for sale a few hours ago. I am going to get a 3080 Ti for DLSS and possibly better resale value when rtx 4000 releases. Although mpt tuning and overclocking the 6900 was really fun I think this will be a good change.


----------



## jonRock1992

cfranko said:


> I put my 6900 xt for sale a few hours ago. I am going to get a 3080 Ti for DLSS and possibly better resale value when rtx 4000 releases. Although mpt tuning and overclocking the 6900 was really fun I think this will be a good change.


If there was something like MPT for the Nvidia GPU's, I'd be looking to get a 3090. Nvidia's feature set is just better imo.


----------



## cfranko

jonRock1992 said:


> If there was something like MPT for the Nvidia GPU's, I'd be looking to get a 3090. Nvidia's feature set is just better imo.


Yeah, it's a bummer that Nvidia GPUs are strictly locked; afaik the only thing that can be changed is the power limit, and even that requires a vbios flash to take it above what the board partner set.


----------



## hellm

The information is there, in the registry. Only I can't modify it; it is a volatile key and repairs itself on reboot. This is a little above my pay grade.


----------



## J7SC

jonRock1992 said:


> If there was something like MPT for the Nvidia GPU's, I'd be looking to get a 3090. Nvidia's feature set is just better imo.


MPT is great - and also a ton of fun to play around with . That said, there are at least 8 different vbios that work with my custom PCB 3090 Strix (dual vbios card), ranging from 450W to 1000W, with or without r_BAR etc, so it is not like you have no options. Still, MPT is much easier and gets by w/o vbios flashing.


----------



## Trestles126

Just installed the 6900 XT in my Z97 rig. It has helped, and it's running flawlessly at 26C.

But I can't get Open Hardware Monitor to recognize the temps. It shows them on screen, but I'm trying to map my fans via Aquasuite using Open Hardware Monitor as the source.

Any tips to get it recognized?


----------



## cfranko

I have liquid metal on my die, but I am afraid that if I let the hotspot go above 100C the LM will spread off the die and short something. Can something like that happen? I had put a bit too much on the die when initially applying it. Currently, at around 75-80 hotspot, I have no issues.


----------



## Blameless

cfranko said:


> I have liquid metal on my die, but I am afraid that if I let the hotspot go above 100C the LM will spread off the die and short something. Can something like that happen? I had put a bit too much on the die when initially applying it. Currently, at around 75-80 hotspot, I have no issues.


There is no magic cutoff at 100C. That said, if you put way too much LM on, it is possible for it to wind up where it shouldn't be and either short something or destroy any aluminum components it touches.

If I'm using liquid metal on a bare die part I always put some kind of conformal coating (even if it's just a couple layers of super glue) over any surface mount components on the package that the LM could conceivably touch.


----------



## 99belle99

The PS5 uses liquid metal and has some kind of strong spongy material around the die to stop it spreading.

Liquid metal can spread but most of the time it doesn't. I have read reports of people using a thick layer of nail polish to act as a barrier.


----------



## davids40

👋
can someone tell me where to find ASIC (6900XT)

I can't find it in HWinFO64

thanks 🙏


----------



## cfranko

Blameless said:


> There is no magic cut off at 100C. That said, if you put way too much LM on,it is possible for it to wind up where it shouldn't be and either short something or destroy any aluminum components it touched.
> 
> If I'm using liquid metal on a bare die part I always put some kind of conformal coating (even if it's just a couple layers of super glue) over any surface mount components on the package that the LM could conceivably touch.


I already have nail polish around the die but I don't trust it.


----------



## SoloCamo

Trestles126 said:


> Just installed the 6900xt in my z97 rig it has helped and running flawless at 26c
> 
> but I can’t get open hardware monitor to recognize temps it shows it in the screen but I’m trying to map my fans via aquasuite using open hardware monitor as the source
> 
> any tips to get it recognized
> 
> View attachment 2551733


Looks good, and please don't take offense at this, but why would you pair that with a Z97 setup? Are you updating the rest of the rig soon?

Even at 4k, there were some instances where my 4.6ghz 4790k paired with 32gb cl10 DDR3 2400 would bottleneck a barely overclocked Radeon VII. At 1080p it was bottleneck city. Even at 4k, you are likely getting the performance of a Vanilla 6800 at best depending on the title.


----------



## jonRock1992

SoloCamo said:


> Looks good, and please don't take offense to this, why would you pair that with a Z97 setup? You updating the rest of the rig soon?
> 
> Even at 4k, there were some instances where my 4.6ghz 4790k paired with 32gb cl10 DDR3 2400 would bottleneck a barely overclocked Radeon VII. At 1080p it was bottleneck city. Even at 4k, you are likely getting the performance of a Vanilla 6800 at best depending on the title.


Lol. I didn't wanna be the one to mention the elephant in the room, but I agree. That Z97 system is really going to bottleneck that GPU at almost any resolution, except insanely high resolutions. My 4790K @4.9GHz and 2400MHz CL10 RAM was bottlenecking my 1080Ti a couple years ago.


----------



## 99belle99

davids40 said:


> 👋
> can someone tell me where to find ASIC (6900XT)
> 
> I can't find it in HWinFO64
> 
> thanks 🙏


Video adapter, then click the 6900 XT, and it's in the wall of text that comes up.


----------



## alceryes

Be sure to uncheck the sensors-only option. You need the main HWiNFO64 screen.
Then go to Video adapter, etc.


----------



## Trestles126

SoloCamo said:


> Looks good, and please don't take offense to this, why would you pair that with a Z97 setup? You updating the rest of the rig soon?
> 
> Even at 4k, there were some instances where my 4.6ghz 4790k paired with 32gb cl10 DDR3 2400 would bottleneck a barely overclocked Radeon VII. At 1080p it was bottleneck city. Even at 4k, you are likely getting the performance of a Vanilla 6800 at best depending on the title.


I was checking out of Microcenter with a new Z690 and a 12700 and the wait was over 2 hours, so I split... When I get time I'll probably go back and pick up a new Asus board, a 12700, and some new DDR5, but I wanted to research more. I picked this card up because I got it cheap. It has helped, but agreed, the system is bottlenecked...


----------



## ZealotKi11er

jonRock1992 said:


> Lol. I didn't wanna be the one to mention the elephant in the room, but I agree. That Z97 system is really going to bottleneck that GPU at almost any resolution, except insanely high resolutions. My 4790K @4.9GHz and 2400MHz CL10 RAM was bottlenecking my 1080Ti a couple years ago.


Was using a 3770K @ 4.6GHz with a 1080 Ti at 4K with no issues. It was fine until 2019; BF5 was the first game where I wasn't getting over 100 fps.


----------



## lawson67

Just took a look at my ASIC



GPU Model: "Sapphire Extreme RX 6900 XT (XTXH)
Default Clock: 2619
ASIC Quality: 90.8%


----------



## alceryes

SoloCamo said:


> We seem to have very very similar results at the same voltages. Well, I'm a stickler so even though I can do 1.14 I just said "I'll play it safe" and made it 1.15 just because that number seems more proper (I usually do voltage in .05 increments).My memory from a few hours of testing scores the highest at 2100 w/ fast timings. 2110 is doable too but I backed it off as 2120 was scoring a bit lower.


So, something I found out with the AMD Radeon software.
If you lower the voltage you are actually lowering the voltage _range_ for the card, not setting a static upper limit. It could also be that raising the core MHz to something the card can't do at your specified voltage overrides the voltage setting. This may be why people think the voltage setting doesn't work. I have the voltage set at 1.125V in the software (down from the default 1.175), which equates to a max under-load voltage of 1.14V. If I remove my voltage setting, it happily goes all the way up to 1.175V under load.
I haven't messed with MPT yet and am just using the Radeon software (v21.8.2). Yes, I know it's an older version, but it works so well with everything I play that I've had no need to change it.


----------



## SoloCamo

alceryes said:


> So, something I found out with the AMD Radeon software.
> If you lower the voltage you are actually lowering the voltage _range_ for the card, not setting a static upper limit. It could also be that raising the core MHz to something the card can't do at your specified voltage overrides the voltage setting. This may be why people think the voltage setting doesn't work. I have the voltage set at 1.125V in the software (down from the default 1.175), which equates to a max under-load voltage of 1.14V. If I remove my voltage setting, it happily goes all the way up to 1.175V under load.
> I haven't messed with MPT yet and am just using the Radeon software (v21.8.2). Yes, I know it's an older version, but it works so well with everything I play that I've had no need to change it.


Yea, when I first joined the thread it was pointed out to me that the software's clock speed and voltage don't work as you'd expect. If I change my clock speed above the default of 2474 (my specific card's value), any core voltage setting I apply is overridden and it will use the 1.175 default value. I currently leave it at that and have my volts at 1.115. Realistically I played 99% of my games 100% stable at 1.075, but with TimeSpy not being happy until about 1.110, I bumped it up ever so slightly to guarantee stability.

I need to mess with MPT, as I'm sure I could hold 2550 or so at 1.125-1.130.
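The tuning routine described above (step the voltage down until a stress run fails, then back off and keep a small margin) can be sketched as a simple loop. `run_stress_test` is a hypothetical stand-in for something like a Time Spy pass; here it is mocked for illustration:

```python
# Sketch of an undervolt search: step down until a stress test fails,
# then return the last passing voltage plus a small safety margin.
def find_stable_mv(start_mv, step_mv, run_stress_test, margin_mv=5):
    mv = start_mv
    # Probe one step below the current voltage before committing to it.
    while mv > 0 and run_stress_test(mv - step_mv):
        mv -= step_mv
    return mv + margin_mv

# Mock: pretend everything at or above 1075 mV passes, below it fails.
stable = find_stable_mv(1175, 25, lambda mv: mv >= 1075)
print(stable)  # 1080
```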


----------



## xR00Tx

lawson67 said:


> Just took a look at my ASIC
> 
> 
> 
> GPU Model: "Sapphire Extreme RX 6900 XT (XTXH)
> Default Clock: 2619
> ASIC Quality: 90.8%


I just checked mine:

GPU Model: Sapphire RX 6900 XT Nitro+ OC (XTX)
Default Clock: 2509
ASIC Quality: 88.0%


----------



## josean99

Hi, here my gpu ASIC:
GPU Model: XFX Speedster 319 MERC Limited Black RX 6900 XT (XTXH)
Default Clock: 2325
ASIC Quality: 84.3%


----------



## alceryes

SoloCamo said:


> Yea, when I first joined the thread it was pointed out to me that the software's clock speed and voltage don't work as you'd expect. If I change my clock speed above the default of 2474 (my specific card's value), any core voltage setting I apply is overridden and it will use the 1.175 default value. I currently leave it at that and have my volts at 1.115. Realistically I played 99% of my games 100% stable at 1.075, but with TimeSpy not being happy until about 1.110, I bumped it up ever so slightly to guarantee stability.
> 
> I need to mess with MPT, as I'm sure I could hold 2550 or so at 1.125-1.130.


Strange. I have my core clock speed set at 2600MHz and my voltage setting is still working. Default is somewhere in the 2400s.


----------



## LtMatt

xR00Tx said:


> I just checked mine:
> 
> GPU Model: Sapphire RX 6900 XT Nitro+ OC (XTX)
> Default Clock: 2509
> ASIC Quality: 88.0%


Yours could and should have been an XTXH, with that quality of silicon.


----------



## Sector-z

What does the new section in MPT 1.3.9 do? I am not sure what values to put in those boxes.


----------



## SoloCamo

Anyone have a good guide for using MPT in general?


----------



## EastCoast

You can check whether you have an XTX or XTXH using the latest version of HWiNFO64: look under the GPU portion on the right side of the summary popup.


----------



## Bart

SoloCamo said:


> Anyone have a good guide for using MPT in general?


I don't have a guide per se, but I bet out of everyone in this entire thread, I'm probably using MPT more "simply" than anyone else. Keeping in mind that my 6900XT is underwater with good paste / Fujipoly thermal pads, and sits in a freezing cold basement, here's my "guide":

1) install GPUZ and MPT;
2) use GPUZ to dump your GPU BIOS out to a ROM file (eg 6900xt_stock.rom);
3) start MPT and load ROM file from step 2;
4) change ONE thing only, the power limit; I changed mine from 272W to 375W (you might not want to go that high);
5) reboot.

Then go into your AMD drivers and unlock everything under tuning; change your max clock to something decent (mine tops out at 2750mhz). I also maxxed out my memory clock at 2150mhz, and set "fast timings" on the RAM, and also maxxed the power limit at 15%. You'll have to find where your GPU is comfortable of course, but mine seems to top out at 2750 / 2150, but with 375W + 15%, it can suck a good amount of juice, so PSU might be a factor depending on what you run. I don't bother changing voltage in any form, since I'm not gunning for scoreboards anymore, just gaming. So far my GPU seems quite content tickling 2700mhz in game with stock volts with that 2750 cap, but that's with super cool temps (low 40s).
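A quick sketch of how those numbers combine, assuming the driver's power slider percentage applies on top of whatever limit MPT wrote to the SPPT (which matches how it's described here):

```python
# Effective board power limit: the driver slider percentage scales the
# limit that MPT wrote, so MPT limit and slider multiply together.
def effective_limit_w(mpt_limit_w: float, slider_pct: float) -> float:
    return mpt_limit_w * (100 + slider_pct) / 100

print(effective_limit_w(272, 15))  # stock 272W + 15% slider -> 312.8
print(effective_limit_w(375, 15))  # 375W MPT limit + 15% -> 431.25
```

So a 375W MPT limit with the slider maxed lands over 430W of board power, which is why the PSU caveat above matters.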


----------



## SoloCamo

Bart said:


> I don't have a guide per se, but I bet out of everyone in this entire thread, I'm probably using MPT more "simply" than anyone else. Keeping in mind that my 6900XT is underwater with good paste / Fujipoly thermal pads, and sits in a freezing cold basement, here's my "guide":
> 
> 1) install GPUZ and MPT;
> 2) use GPUZ to dump your GPU BIOS out to a ROM file (eg 6900xt_stock.rom);
> 3) start MPT and load ROM file from step 2;
> 4) change ONE thing only, the power limit; I changed mine from 272W to 375W (you might not want to go that high);
> 5) reboot.
> 
> Then go into your AMD drivers and unlock everything under tuning; change your max clock to something decent (mine tops out at 2750mhz). I also maxxed out my memory clock at 2150mhz, and set "fast timings" on the RAM, and also maxxed the power limit at 15%. You'll have to find where your GPU is comfortable of course, but mine seems to top out at 2750 / 2150, but with 375W + 15%, it can suck a good amount of juice, so PSU might be a factor depending on what you run. I don't bother changing voltage in any form, since I'm not gunning for scoreboards anymore, just gaming. So far my GPU seems quite content tickling 2700mhz in game with stock volts with that 2750 cap, but that's with super cool temps (low 40s).


Thanks for the info. Does MPT allow you to lower the power limit even further than the -10 the Radeon software does? At this point this card has more than enough headroom in the games I play, even at 4K, so I'm actually tuning it for efficiency while still holding close to stock clocks. For example, in Forza Horizon 5 I've got it pulling around 200-220W while set at 1090mV. The clock holds around 2350-2400 and temps are in the 70s on the hotspot, on a reference cooler at 1300 or so rpm. With my memory at 2100 w/ fast timings I consistently get better performance in all the games I play, and am pulling at the absolute least 26 watts less than stock (often 40+ less).


----------



## Bart

SoloCamo said:


> Thanks for the info. Does MPT allow you to lower the power limit even further than the -10 the Radeon software does? At this point this card has more than enough headroom in the games I play, even at 4K, so I'm actually tuning it for efficiency while still holding close to stock clocks. For example, in Forza Horizon 5 I've got it pulling around 200-220W while set at 1090mV. The clock holds around 2350-2400 and temps are in the 70s on the hotspot, on a reference cooler at 1300 or so rpm. With my memory at 2100 w/ fast timings I consistently get better performance in all the games I play, and am pulling at the absolute least 26 watts less than stock.


Newer versions might, but I honestly don't keep up with it. I found my sweet spot and didn't see the need for additional tuning. I haven't updated MPT in months.


----------



## cfranko

Is there a way to increase memory clock above 2150 on XTX?


----------



## ptt1982

New drivers briefly tested on a Red Devil 6900XT, non-Ultimate: best AMD drivers to date!

Highest overclocks by far: passed Timespy at 2690MHz. Scores are scaling properly. This surpasses the best driver they ever released by a whopping 30MHz, on top of the compatibility and overall performance tweaks in games. Compared to the driver before this one, clocks are up 60MHz.

Looking forward to the upscaling stuff as well; I loved the Trixx software on Nitro cards, and this is a better version of it.


----------



## jonRock1992

ptt1982 said:


> New drivers tested on briefly on Red Devil 6900XT, non-Ultimate: Best AMD drivers to date!
> 
> Highest overclocks by far passed in Timespy at 2690mhz. Scores are scaling properly. This is surpassing the best driver they ever released by a whopping 30mhz, with the added compatibility and overall performance tweaks on games. From the drivers before the new one, clocks are up 60mhz.
> 
> Looking forward to the upscaling stuff as well, loved the trixx software on Nitro cards and this is a better version of it.


That's great news! If this is true for my GPU, I'm going to be gaming at 2900MHz lol.


----------



## Raed1

lawson67 said:


> My VRAM was scaling all the way up to 2150MHz two weeks ago. Now, whatever driver I use and however many times I DDU and delete the SPPT, I can't get it to scale over 2120MHz any more; it loses points everywhere past that, which wasn't happening two weeks ago. The only logical conclusion I can draw from what I'm seeing is that it's permanently damaged, or has degraded to the point where it just can't scale over 2120MHz like it could two weeks ago.


Try increasing mem voltage a little with MPT.
Mostly 1375 is more than enough to get 2150 stable; 2150 at 1367 is stable for mine.


----------



## LtMatt

Raed1 said:


> Try increasing mem voltage a little with MPT.
> Mostly 1375 is more than enough to get 2150 stable; 2150 at 1367 is stable for mine.


Interesting, last I tried I never had much luck with memory voltage increase making much difference to overclock headroom on the memory. 

Can you share a screenshot of the MPT memory settings you changed?


----------



## CS9K

LtMatt said:


> Interesting, last I tried I never had much luck with memory voltage increase making much difference to overclock headroom on the memory.
> 
> Can you share a screenshot of the MPT memory settings you changed?


I've seen a few others that have tried increasing VMEM and have had limited success over the past year and a half.

From what I understand too, GDDR6 (and video memory in general) is quite sensitive to voltage changes, and more isn't necessarily better. It doesn't work quite like your standard DDR4 B-die, for sure. The extent of that voltage sensitivity isn't something I can speak to personally, so YMMV; just be careful, yall.


----------



## Scorpion667

Has anyone had success raising VMEM in MPT besides *Raed1*? Max I can swing is 2164, but from what I read these non-LC cards go into safe mode when pushing above the mid-2100s...


----------



## CS9K

Scorpion667 said:


> Has anyone had success raising VMEM in MPT besides *Raed1*? Max I can swing is 2164 but from what I read these non LC cards go in to safe mode when pushing above mid 2100s...


I did slightly raise the VMEM on my RX 6800 to see if speeds above 2124MHz would stabilize, to no success. 

Where GDDR6 _is_ like DDR4 memory, is that speeds to some extent, and only _some_ timings, scale with voltage, where other timings can only go _so_ low at a given speed, and those timings must be raised before speeds can increase further.


----------



## Neoki

I'm not sure if others have run into this yet, but after upgrading to 22.3.1, watching a YouTube video or Twitch stream while gaming is a guaranteed driver crash for me. I was previously on 22.2.1 for the Lost Ark support. No changes to the stable overclock (2750MHz core / stock memory) I've been running for months now.


----------



## LtMatt

Neoki said:


> I'm not sure if others have run into this yet, but after upgrading to 22.3.1, watching a YouTube video or Twitch stream while gaming is a guaranteed driver crash for me. I was previously on 22.2.1 for the Lost Ark support. No changes to the stable overclock (2750MHz core / stock memory) I've been running for months now.


Which browser, Chrome? Try disabling hardware acceleration. Saw someone else mention a similar issue elsewhere too with Chrome.


----------



## LtMatt

Scorpion667 said:


> Has anyone had success raising VMEM in MPT besides *Raed1*? Max I can swing is 2164 but from what I read these non LC cards go in to safe mode when pushing above mid 2100s...





CS9K said:


> I did slightly raise the VMEM on my RX 6800 to see if speeds above 2124MHz would stabilize, to no success.
> 
> Where GDDR6 _is_ like DDR4 memory, is that speeds to some extent, and only _some_ timings, scale with voltage, where other timings can only go _so_ low at a given speed, and those timings must be raised before speeds can increase further.


I did a bit of testing, and using 1375 (MVDD), replacing both 1350 values, did see me gain performance in Timespy graphics tests 1 and 2 (custom runs) up to around 2164MHz. After that I started to lose performance. However, when I then tested this with the Far Cry 6 benchmark at 4K max settings, my minimum FPS took a hit compared to stock voltage at 2126MHz, which is my usual peak for memory performance with my current 6900 XTXH sample. So YMMV, but perhaps it is mainly beneficial for synthetics. I also noticed a little screen artifacting once I hit around 2160MHz that was not there previously.


----------



## Sector-z

Nobody can answer me ? Everyone skipped my post 😕


----------



## Neoki

LtMatt said:


> Which browser, Chrome? Try disabling hardware acceleration. Saw someone else mention a similar issue elsewhere too with Chrome.


That fixes the hard crash but lack of hardware acceleration causes stuttering on videos now. I reverted back to 22.2.1 as it all works fine still with that. I'll write up a bug report, but I'm sure it'll go ignored for a while.

Thanks for the tip on hardware acceleration, was worth the try.


----------



## EastCoast

Sector-z said:


> Nobody can answer me ? Everyone skipped my post 😕


Sorry about that. However, I do not have an answer to your question. Hopefully someone here will.


----------



## LtMatt

Neoki said:


> That fixes the hard crash but lack of hardware acceleration causes stuttering on videos now. I reverted back to 22.2.1 as it all works fine still with that. I'll write up a bug report, but I'm sure it'll go ignored for a while.
> 
> Thanks for the tip on hardware acceleration, was worth the try.


Using Edge here with hardware acceleration with 22.3.1 and no issues if you don't mind using that browser.


----------



## jonRock1992

The latest driver breaks freesync in God Of War. That's the game I'm currently playing, so I instantly reverted. Not going to deal with that crap.


----------



## EastCoast

jonRock1992 said:


> The latest driver breaks freesync in God Of War. That's the game I'm currently playing, so I instantly reverted. Not going to deal with that crap.


The day they actually get off DX11 and use a more current, friendlier, and more efficient API, I might consider that game. When a Vulkan mod gets you better performance, that tells me the developers don't care about Radeon users.

I simply take issue with barely getting decent FPS in that game on my 6900 XT, knowing I could get better had they used either DX12 or Vulkan from the get-go.


----------



## CS9K

Hm, I seem to be at an impasse with my RX 6900 XT LC now that it's in its water block.

Time Spy test 2 is now the bane of my existence, as it is for many others here. I can set 2830MHz in the Adrenalin control panel and be stable, but any higher than that and TS:GT2 faceplants at one of a couple of places during the run. I notice the normal behavior with the card (reading 1125-1130mV in HWINFO), but that reading droops pretty heavily in the places in TS:GT2 that pull the most current through the core. The Vcore drops to 1110-1115 and Vsoc drops as low as 990mV during those periods. I've only set power and current limits for card, core, and SoC, and those limits are _well_ above what the GPU is pulling when it crashes out. Likewise, temperatures at those points are 55/75C core/hotspot, so temps are quite fine as well.

If this GPU weren't an XTXH, I'd repeat the same thing I've said here numerous times: "Once you fail out of TS:GT2, that's good game until you raise core voltage", but I don't recall seeing the Vdroop behavior _quite_ to this extent on my reference RX 6900 XT (XTX).

Am I missing something with the XTXH card, some droop or throttler setting that's still kicking in at higher current draw, or is 2830MHz it until I do Temp Dependent Vmin?



EastCoast said:


> I simply take issue with barely getting decent FPS in that game on my 6900 XT, knowing I could get better had they used either DX12 or Vulkan from the get-go.


Hi yes hello, and join us flight simmers in the same lament about DX11 and its not-ideal-for-2022 CPU threading. DX12 is promised to us for Flight Simulator 2020, and I am _ready_.


----------



## J7SC

CS9K said:


> (...)
> Hi yes hello and join us flight simmers in the same lament about DX11 and it's not-ideal-for-2022 CPU threading. DX12 is promised to us for Flight Sim 2020, and I am _ready_.


FYI, have been using DX12 FS2020 for many months now...albeit on NVidia...works perfectly


----------



## CS9K

J7SC said:


> FYI, have been using DX12 FS2020 for many months now...albeit on NVidia...works perfectly


DX12 is a massive stuttery mess for a lot of people, probably most. I'm happy that it works for you, but until ASOBO starts making improvements specific to DX12, DX11 is the way to go for most folks. ASOBO is not currently on record saying they have made any DX12-specific improvements... so far. It's coming, though, hopefully in Sim Update 10, per ASOBO.


----------



## Scorpion667

New Radeon Driver 22.3.1 is not great for Vanguard at 1080p. FPS is lower and it's never boosting to max clock, unlike the previous WHQL driver.

GPU: 6900XT Toxic EE
Min Clock: 2730
Max Clock: 2830
Res/gfx: 1080p all low
OS: Win 10 21H2
CPU: 9900KS @ 5.1
RAM: 4400c16 CR1

_-Not even remotely close to hitting PL or TL. Max draw I see in this game is 330w, max hotspot 84c.
-I tried to set Min Clock to 2800 on new driver to have a more even comparison as old driver actually boosts properly but FPS was still lower on new driver_

5 run average:
*Radeon 21.10.2 WHQL*
Average framerate : 414.42 FPS
Minimum framerate : 393.82 FPS
Maximum framerate : 439.02 FPS
1% low framerate : 320.72 FPS
0.1% low framerate : 296.2 FPS

*Radeon 22.3.1 WHQL*
Average framerate : 408.88 FPS
Minimum framerate : 385.6 FPS
Maximum framerate : 433.78 FPS
1% low framerate : 315.42 FPS
0.1% low framerate : 288.18 FPS

[edit] *all driver installs performed after DDU *[/edit]
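For anyone wanting to compare their own runs the same way, the per-metric deltas between those two drivers can be computed with a short script (numbers are copied from the 5-run averages above):

```python
# Percent change from 21.10.2 (old) to 22.3.1 (new), per metric.
# Values are copied from the 5-run averages posted above.
old = {"avg": 414.42, "min": 393.82, "max": 439.02, "1% low": 320.72, "0.1% low": 296.20}
new = {"avg": 408.88, "min": 385.60, "max": 433.78, "1% low": 315.42, "0.1% low": 288.18}

for metric in old:
    delta = (new[metric] - old[metric]) / old[metric] * 100
    print(f"{metric:>8}: {delta:+.2f}%")
```

Every metric comes out roughly 1-3% lower on 22.3.1, with the 0.1% lows taking the biggest hit (about -2.7%).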


----------



## 99belle99

Scorpion667 said:


> New Radeon Driver 22.3.1 is not great for Vanguard. FPS is lower and it's never boosting to max clock, unlike the previous WHQL driver.
> 
> GPU: 6900XT Toxic EE
> Min Clock: 2730
> Max Clock: 2830
> Res/gfx: 1080p all low
> OS: Win 10 21H2
> CPU: 9900KS @ 5.1
> RAM: 4400c16 CR1
> 
> _-Not even remotely close to hitting PL or TL. Max draw I see in this game is 330w, max hotspot 84c.
> -I tried to set Min Clock to 2800 on new driver to have a more even comparison as old driver actually boosts properly but FPS was still lower on new driver_
> 
> 5 run average:
> *Radeon 21.10.2 WHQL*
> Average framerate : 414.42 FPS
> Minimum framerate : 393.82 FPS
> Maximum framerate : 439.02 FPS
> 1% low framerate : 320.72 FPS
> 0.1% low framerate : 296.2 FPS
> 
> *Radeon 22.3.1 WHQL*
> Average framerate : 408.88 FPS
> Minimum framerate : 385.6 FPS
> Maximum framerate : 433.78 FPS
> 1% low framerate : 315.42 FPS
> 0.1% low framerate : 288.18 FPS
> 
> [edit] *all driver installs performed after DDU *[/edit]


That's all well and good, but would you bench at 1440p or 4K, which is what most are playing at now? It's all great running 400 FPS, but surely there's no monitor able to display those frames.


----------



## Scorpion667

99belle99 said:


> That's all well and good, but would you bench at 1440p or 4K, which is what most are playing at now? It's all great running 400 FPS, but surely there's no monitor able to display those frames.


Edited my post to reflect 1080p testing. I don't own 1440p/4K panels so I can't comment on that, but I'd be interested to see some numbers! I'm using an Acer 390Hz 1080p panel.

I'm curious if others notice the same boosting issue I did on other configs/games. On the old WHQL it would frequently boost to 2755-2806, whereas on the new driver it just sat at 2711-2714.


----------



## J7SC

CS9K said:


> DX12 is a massive stuttery mess for a lot of, I would probably say most, people. I'm happy that it works for you, but until ASOBO start making improvements specific to DX12, for some/most folks, DX11 is the way to go. ASOBO currently are not on the record saying that they have made any DX12-specific improvements... so far. It's coming, though, hopefully in Sim Update 10, per ASOBO.


Turning off Gsync seemed to fix things on 120hz OLED


----------



## CS9K

J7SC said:


> Turning off Gsync seemed to fix things on 120hz OLED


Huh, I'll give that a try.


----------



## coelacanth

Has anyone been able to get VRR (not Freesync Premium) working with an LG CX OLED and 6900 XT? VRR is part of Version 2.1 of the HDMI Specification but I haven't found any way to enable it in Windows 10 or the Radeon Adrenalin software despite AMD stating that VRR is supported ("Modern set-ups require modern solutions. By embracing industry standards like HDMI® 2.1 VRR, and USB C, AMD RDNA 2 enables seamless integration into your life and hardware setup." https://www.amd.com/en/technologies/rdna-2).

When I mash the green button on the remote to bring up the VRR Information menu it says "Fixed" instead of "VRR."

I have seen screenshots of Adrenalin with a VRR toggle, but I don't see it in my software (latest version 22.3.1) or any other version in the last year+. I have heard other people saying the option disappears from Adrenalin as soon as you plug in and turn on an LG OLED TV. I know that VRR works Nvidia GPUs and Xbox Series X.

I have my CX set to PC mode with Instant Game Response enabled.

Any help here would be great.

Thanks.


----------



## CS9K

coelacanth said:


> Has anyone been able to get VRR (not Freesync Premium) working with an LG CX OLED and 6900 XT? VRR is part of Version 2.1 of the HDMI Specification but I haven't found any way to enable it in Windows 10 or the Radeon Adrenalin software despite AMD stating that VRR is supported ("Modern set-ups require modern solutions. By embracing industry standards like HDMI® 2.1 VRR, and USB C, AMD RDNA 2 enables seamless integration into your life and hardware setup." https://www.amd.com/en/technologies/rdna-2).
> 
> When I mash the green button on the remote to bring up the VRR Information menu it says "Fixed" instead of "VRR."
> 
> I have seen screenshots of Adrenalin with a VRR toggle, but I don't see it in my software (latest version 22.3.1) or any other version in the last year+. I have heard other people saying the option disappears from Adrenalin as soon as you plug in and turn on an LG OLED TV. I know that VRR works Nvidia GPUs and Xbox Series X.
> 
> I have my CX set to PC mode with Instant Game Response enabled.
> 
> Any help here would be great.
> 
> Thanks.


Radeon 6000 GPUs do not support HDMI Forum VRR, only FreeSync. AMD has not commented on why this is so, unfortunately.

The b/f and I both have an LG CX, I with my RX 6900 XT LC, he with a 3080 Ti FE. AMD requires FreeSync to be enabled to get VRR on the LG OLEDs.


----------



## LtMatt

CS9K said:


> Radeon 6000 GPUs do not support HDMI Forum VRR, only FreeSync. AMD has not commented on why this is so, unfortunately.
> 
> The b/f and I both have an LG CX, I with my RX 6900 XT LC, he with a 3080 Ti FE. AMD requires FreeSync to be enabled to get VRR on the LG OLEDs.


This. Using an OLED CX48 here with AMD FreeSync. In AMD Software I have the option to toggle VRR on and off though.


----------



## coelacanth

LtMatt said:


> This. Using an OLED CX48 here with AMD FreeSync. In AMD Software I have the option to toggle VRR on and off though.


Thanks for the screenshot. I enabled Freesync Premium on the CX and the VRR toggle pops up.


----------



## LtMatt

coelacanth said:


> Thanks for the screenshot. I enabled Freesync Premium on the CX and the VRR toggle pops up.


Perfect, it works well for me.


----------



## colourcode

Did anyone here re-paste a NITRO+ AMD Radeon RX 6900 XT SE 16G GDDR6 (sapphiretech.com)?
Wondering if I can remove the main cooler without risking breaking any of the cooling pads. Looks like they might be separate heatsinks?


----------



## alceryes

colourcode said:


> Did anyone here re-paste a NITRO+ AMD Radeon RX 6900 XT SE 16G GDDR6 (sapphiretech.com)?
> Wondering if I can remove the main cooler without risking breaking any of the cooling pads. Looks like they might be separate heatsinks?


It's not advisable to reuse thermal pads - even if they remain intact.
Because pads deform to fill the space they're pressed up against, you may introduce gaps where the pad doesn't fully cover the core/vram. This is especially true for OEM pads that are cheaper quality and don't bounce back much, if at all.


----------



## colourcode

alceryes said:


> It's not advisable to reuse thermal pads - even if they remain intact.
> Because pads deform to fill the space they're pressed up against, you may introduce gaps where the pad doesn't fully cover the core/vram. This is especially true for OEM pads that are cheaper quality and don't bounce back much, if at all.


Yeah. I was wondering whether the main cooler is completely separate from all the pads. In the pictures it looks like the surrounding pads are covered by separate pieces of metal, but I have no clue.


----------



## J7SC

@CS9K ...any news from your MPT sources re. VRAM limit of XTX ? 
As mentioned before, I think I found the relevant segment in the registry, but it's next to impossible to test out, as the required reboot seems to reset it. 'RAM freedom' is all I really need to make it a little 6950XTX.

...per earlier posts, 1.218v for GPU below


----------



## LtMatt

That effective clock speed is really high @J7SC. Is there anything specific needed to keep it high and so close to the GPU clock speed other than the necessary amount of voltage for stability, suitable power limits and low temperatures?


----------



## J7SC

LtMatt said:


> That effective clock speed is really high @J7SC. Is there anything specific needed to keep it high and so close to the GPU clock speed other than the necessary amount of voltage for stability, suitable power limits and low temperatures?


...I think it's a combination of factors, consisting of the specific chip, cooling, and PL. Per below, PCB dark-theme temps on top for the Superposition run w/ clocks I showed earlier at 1.218v GPU; plus day-one bone-stock air-cooled results light-theme temps and clocks below that, albeit at 3800 rpm fans and 18 C ambient (check nominal & effective clocks there as well).

Custom w-cooling system was originally configured for 2x 2080 Ti and Threadripper (1200W +) before being put to use with the 3950X / 6900XT combo and has 1200mm x 62mm rad space and dual D5 pumps. In addition, lots of headroom on the PL settings so you get a fairly straight MHz line (ie. TimeSpy lower line below, don't mind the 3950X zig-zagging above).


----------



## CS9K

colourcode said:


> Did anyone here re-paste a NITRO+ AMD Radeon RX 6900 XT SE 16G GDDR6 (sapphiretech.com)?
> Wondering if I can remove the main cooler without risking breaking any of the cooling pads. Looks like they might be separate heatsinks?


Hello. I do believe that the main heatsink is separate from the VRAM+VRM heatsinks on this GPU model.

A good place to start searching for confirmation is Google; sites like TechPowerUp usually have complete teardowns of the GPUs that they review.


----------



## CS9K

J7SC said:


> @CS9K ...any news from your MPT sources re. VRAM limit of XTX ?


Unfortunately, you know as much as I do. The user "hellm" is that source, he posts mostly over on Igor's Lab, but he occasionally pokes his head in here, too. Just keep watching Igor's Lab for updates to MPT/RBE; that's likely where you'll find the first reports of safe memory OC should someone figure out how to bypass the lockout.


----------



## hellm

CS9K said:


> Unfortunately, you know as much as I do. The user "hellm" is that source, he posts mostly over on Igor's Lab, but he occasionally pokes his head in here, too. Just keep watching Igor's Lab for updates to MPT/RBE; that's likely where you'll find the first reports of safe memory OC should someone figure out how to bypass the lockout.


And I have my knowledge from the community: Igorslab, Hardwareluxx, PCGH, and here.
I still don't have a test setup with a Radeon, so I have to rely on the information I receive. I only provided the tool.
That will change this year; then I'll hopefully have time for some other projects still on my list.


----------



## gilor80

Good morning, what is the best and most stable driver for overclocking?
My gpu: 6900 xt red devil ultimate


----------



## Sufferage

gilor80 said:


> Good morning, what is the best and stable driver for overclocking?
> My gpu: 6900 xt red devil ultimate


Latest, 22.3.1 has been great so far with my Toxic 6900XT 🤘


----------



## alceryes

Sufferage said:


> Latest, 22.3.1 has been great so far with my Toxic 6900XT 🤘


I'm considering jumping from my 'goldie oldie' 21.8.2 driver to this but will let it gel with the public for another week or so.
Please come back and let us know if you experience any issues as you use it.


----------



## Sufferage

alceryes said:


> I'm considering jumping from my 'goldie oldie' 21.8.2 driver to this but will let it gel with the public for another week or so.
> Please come back and let us know if you experience any issues as you use it.


Will do  
So far it's been smooth sailing for me without any problems occurring, higher scores in benchmarks & great stability when overclocking.


----------



## cfranko

Is 67 edge / 90 hotspot at 300 watts too high for air cooling with liquid metal? How can I tell if there is an issue with mounting pressure?


----------



## bulletoftime

cfranko said:


> is 67 edge 90 hotspot at 300 watts too high for air cooling and liquid metal? How can I know if there is an issue with mounting pressure?


That is quite normal I think. I also have liquid metal on my card (Asus TUF 6900 XT). I see 68 C and 86 C (edge, hotspot) at 300 watts and 70 C and 90 C (edge, hotspot) at 345 watts on stock cooling.
I usually use the most mounting pressure that is reasonably achieved (without going ham on the screws).


----------



## bulletoftime

cfranko said:


> is 67 edge 90 hotspot at 300 watts too high for air cooling and liquid metal? How can I know if there is an issue with mounting pressure?


To add, liquid metal only improved my temps by about 3 C and 6 C for edge and hotspot respectively. A bit surprising...but I think the limiting factor for me is the stock air cooling on the card.


----------



## cfranko

bulletoftime said:


> To add, liquid metal only improved my temps by about 3 C and 6 C for edge and hotspot respectively. A bit surprising...but I think the limiting factor for me is the stock air cooling on the card.


When I had a waterblock on this card, liquid metal helped a lot. Then I had to switch back to air cooling, and I kept the liquid metal on the die because I was too scared to remove it; it was all sticky and stuff. On air cooling it doesn't seem to help as much as it did on watercooling.


----------



## CS9K

bulletoftime said:


> To add, liquid metal only improved my temps by about 3 C and 6 C for edge and hotspot respectively. A bit surprising...but I think the limiting factor for me is the stock air cooling on the card.


On air cooling, it's not _as_ much a matter of getting the heat from the core to the heatsink's cold plate...

What I have noticed with RDNA2's top-end GPUs is that, more than anything else, it's the ability of the heatsink's vapor chamber and/or heatpipes to wick that heat away from the core and then distribute it.

RDNA2's core surface area is quite small compared to Ampere's, and when that tiny core is pulling 250-300W just by itself, that is some _crazy_ heat density for the surface area.

I've noticed this when comparing my reference RX 6900 XT on air, on the RX 6900 XT LC water cooler (both with the thermal pad and with Gelid GC-Extreme paste), and both cards on water with the block in my sig. The RX 6900 XT LC's water loop does an okay job wicking heat away... right up until the water temp starts to rise. Once you lose the better delta-T of cold water, it's not as good at wicking the heat away, not like my open loop; temps on the loop are _fantastic_... 55/75 core/hotspot at 400-450W card draw.
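To put the heat-density point in rough numbers (die areas are approximate public figures, and 300 W is just an illustrative core-power load, not a measurement):

```python
# Rough heat-flux comparison between Navi 21 and GA102.
# Die areas (~520 and ~628 mm^2) are approximate public figures;
# 300 W core power is illustrative, not a measured value.
dies = {
    "Navi 21 (RX 6900 XT)": (520, 300),  # (die area mm^2, core watts)
    "GA102 (RTX 3090)":     (628, 300),
}
for name, (area_mm2, watts) in dies.items():
    print(f"{name}: {watts / area_mm2:.2f} W/mm^2")
```

At the same core power, the smaller Navi 21 die runs roughly 20% higher heat flux, which is why the vapor chamber's ability to spread that heat matters so much.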


----------



## coelacanth

cfranko said:


> is 67 edge 90 hotspot at 300 watts too high for air cooling and liquid metal? How can I know if there is an issue with mounting pressure?


I think it depends a lot on the fans. With stock fans on my XFX Merc the hotspot is mid to high 90s; if I up the fan speed just a bit, the hotspot is 10C lower than at stock. If I crank the fans it stays very cool.


----------



## mrazster

Those of you using this card and watercooling it, what block are you using?

XFX Speedster MERC 319 AMD Radeon™ RX 6900 XT Limited Black Gaming Graphics Card with 16GB GDDR6, AMD RDNA™ 2
RX-69XTACSD9


----------



## LtMatt

bulletoftime said:


> To add, liquid metal only improved my temps by about 3 C and 6 C for edge and hotspot respectively. A bit surprising...but I think the limiting factor for me is the stock air cooling on the card.


Are you using a reference card?


----------



## cfranko

LtMatt said:


> Are you using a reference card?


He said he is using a TUF 6900 XT in his previous message.


----------



## LtMatt

cfranko said:


> He said he is using a TUF 6900 XT in his previous message.


Thanks, missed that. I thought it sounded too good to be true for a reference card.


----------



## dagget3450

Enzarch said:


> Microsoft OneDrive link: 1drv.ms





CS9K said:


> You're the best! 💗
> 
> cc @hellm, the bios from a reference model RX 6900 XT LC


Forgive me if this was already asked, is this bios usable on a ref 6900xt that is now underwater?

I'd like to give it a spin if so.


----------



## CS9K

dagget3450 said:


> Forgive me if this was already asked, is this bios usable on a ref 6900xt that is now underwater?
> 
> I'd like to give it a spin if so.


Negative


----------



## Trevbev

Has the power consumption reading in wattman changed recently?
I've been getting much higher power readings than I should be.
I've just checked HWinfo with time spy running and the max TGP was 383W which is what it's supposed to be but the reading for max GPU ASIC power was 529W.
I think this is the same value that wattman uses too.
I've also noticed that sometimes the ASIC power is less than the TGP or PPT.
Should I be worried about this GPU ASIC power reading or just ignore it and pay attention to the TGP power?


----------



## dagget3450

CS9K said:


> Negative


bummer

Anyways, toying around in Timespy and doing some tweaking with MPT.









80th on the HOF for x2 Timespy. It's been a long time since I've been on the HOF, and I don't think I ever did it in Timespy, only Firestrike, if I recall. (Have to find screenies lol)

Putting this 5900X I got from a user here to work a little!


----------



## CS9K

Trevbev said:


> I've just checked HWinfo with time spy running and the max TGP was 383W which is what it's supposed to be but the reading for max GPU ASIC power was 529W.


I don't recall which was responsible, but it was either a particular driver version, or particular HWINFO64 version, that was giving erroneous readings for ASIC power. Update both.


----------



## ptt1982

Anyone here tried the latest drivers 22.3.2?
If someone has installed them and played Elden Ring, would really appreciate feedback on that game and its performance.

Just got my third covid jab yesterday and am a bit out of it, so I don't have the energy to start tuning, especially as the 22.3.1 drivers are so good.
Thanks again!


----------



## ArchStanton

dagget3450 said:


> x2 timespy


My epeen had retracted to somewhere behind my navel till I read that little snippet 🤣


----------



## dagget3450

ArchStanton said:


> My epeen had retracted to somewhere behind my navel till I read that little snippet 🤣


Lol yeah, this is a bit of an experiment. Because of a few recent threads on here about CF/mGPU, I tore down some machines to play around. Sadly, aside from some benchmarks, it's still the same old story of disappointment game-wise.

I made it to 75th, but I had a malfunction that had me about to cry. One of my 6900 XTs started black-screening on me and wouldn't even run Heaven at 640x360 without black-screening unless I brought clocks all the way down to about 700MHz. Turns out my GPU is fine; it was a rail on my Lepa G1600 that appears to be the culprit. I'll gladly take a bad PSU over a GPU in this hellish GPU price market.

So I guess I'll have to stop here for now and see if I can resolve my PSU dilemma... some days are just bad race days.


----------



## Th3Fly1ngCow

Edit to my original post: I'm dumb and didn't realize how hot my room was yesterday. Currently at 48 core / 60 junction since it's not scorching outside.


----------



## ArchStanton

Th3Fly1ngCow said:


> So guys it’s my first time on water


Details will help the "pros" in this thread diagnose your temperatures. Can you elaborate regarding "on water"? AIO? Custom loop? Pump? Block? Etc. The more detail the better IMO .


----------



## Th3Fly1ngCow

ArchStanton said:


> Details will help the "pros" in this thread diagnose your temperatures. Can you elaborate regarding "on water"? AIO? Custom loop? Pump? Block? Etc. The more detail the better IMO .


Bykski block in a custom loop with a 5600X, one 240mm radiator and two 120mm radiators, pump and fans set to 100%, pulling roughly 320 watts.


----------



## dagget3450

My 6900 XTs sit around the mid 40s, and the highest hotspots I've seen hit around the low/mid 50s. They're around 330W, I think. Tonight I'll probably swap PSUs and go for higher wattage; I'll see where temps end up.

Those temps sound high to me for a custom water loop.
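As a rough sanity check on that loop (using the common enthusiast rule of thumb of roughly 100 W of comfortable dissipation per 120 mm of radiator — a heuristic, not a spec):

```python
# Very rough loop-capacity estimate: ~100 W per 120 mm of radiator.
# This is a common enthusiast heuristic, not a measured specification.
def radiator_budget_watts(rad_sizes_mm, watts_per_120mm=100):
    return sum(size / 120 * watts_per_120mm for size in rad_sizes_mm)

loop = [240, 120, 120]  # one 240 mm + two 120 mm radiators
print(radiator_budget_watts(loop))  # 400.0
```

A ~320 W GPU fits inside that ~400 W budget, but without much headroom, which would be consistent with temps running warmer than on a larger loop.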


----------



## Scorpion667

I have TFX thermal paste arriving shortly and wish to do some A/B testing against my currently applied GC Extreme on 6900XT. Should I just use TimeSpy for comparison?


----------



## CS9K

Scorpion667 said:


> I have TFX thermal paste arriving shortly and wish to do some A/B testing against my currently applied GC Extreme on 6900XT. Should I just use TimeSpy for comparison?


I'm excited for the results! And before/after on other tests like Port Royal, and/or other programs like Superposition (4k Optimized) would be nice, if you have the time.


----------



## Trevbev

CS9K said:


> I don't recall which was responsible, but it was either a particular driver version, or particular HWINFO64 version, that was giving erroneous readings for ASIC power. Update both.


Thanks. Updating the driver worked.


----------



## LtMatt

esebey said:


> Sapphire Toxic Extreme (XTXH)
> 91.2%


How well does it overclock?


----------



## tootall123

GPU Model: "GIGABYTE AORUS RX 6900 XT Xtreme Waterforce XTXH"
Default Clock: 2669
ASIC Quality: 92.6%


----------



## LtMatt

tootall123 said:


> GPU Model: "GIGABYTE AORUS RX 6900 XT Xtreme Waterforce XTXH"
> Default Clock: 2669
> ASIC Quality: 92.6%


That's the highest I've seen, followed by esebey above.


----------



## CS9K

Aye, I am curious how far yall have been able to push your XTXH GPUs. Mine seems a bit of an oddity. Normally, the voltage required follows a nice, mostly-predictable exponential curve upward as clock speed goes up, but my RX 6900 XT LC in its water block seems to hit a wall at 2850MHz. It will run 2830MHz all day with no errors, and even runs 2775 stable if I set 1175mV as the max voltage via MPT (my current daily-driver tune). My GPU will almost do 2800MHz at 1175mV. Almost.

I say all of this because I notice that, even on water with hotspot temps in the low 70's under max load, the firmware's Vdroop gets VERY aggressive when core amperage gets above about 330A or so, and it's almost always during that aggressive Vdroop when it crashes out at 2850MHz and above. I know the VRM on the board is good for it, my PSU is good for it, I'm using hand-made 16ga cables on the card... and I don't mind hotspot getting up into the 80's under max benchmark load nor do I mind wattage going up near 500W, but 350A/460W is as far as it'll let itself go before super aggressive Vdroop kicks in. 

I didn't have this issue on my other Radeon GPU's, but I never pushed those anywhere near this hard,

Hm, is there a setting you other XTXH users change via MPT to make the GPU ease up on the Vdroop slightly to push core clocks higher? For the record, I have not and will not be using temp dependent Vmin on this card, and I do know and appreciate that Vdroop is a good thing, especially at super-high current draw (I got to learn all about that when wrapping my head around Intel 9th-11th gen overclocking🔥)
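For anyone sanity-checking the amperage/wattage figures I'm quoting, the relationship is just P = V × I on the core rail. A quick sketch with my own readings (treat the numbers as illustrative of my card only, not a spec):

```python
# Quick sanity check of my own telemetry (illustrative numbers only:
# ~1.175V effective, ~350A core current, ~460W board power on MY card).
def core_power_w(v_core: float, i_core_a: float) -> float:
    """Core-rail power from effective voltage and core current: P = V * I."""
    return v_core * i_core_a

p_core = core_power_w(1.175, 350)  # ~411W on the core rail alone
# Board power reads higher because it also covers VRAM, SOC, VRM losses, etc.
overhead = 460 - p_core
print(f"core ~{p_core:.0f}W, non-core overhead ~{overhead:.0f}W")
```

Which is also why TDC (amps) can be the binding limit even while the wattage slider still shows headroom.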


----------



## Blameless

Haven't been able to find any practical software method of limiting droop on my 6900XTXH and I'm pretty sure this is why my 6800XT clocks better in practice.


----------



## esebey

LtMatt said:


> How well does it overclock?


Quite well. I didn't push it to its limits just yet, but I got the #1 spot with a 3700X on the 3DMark tests at 1.25v.


----------



## LtMatt

esebey said:


> Quite well. I didn't push it to its limits just yet, but I got the #1 spot with a 3700X on the 3DMark tests at 1.25v.


What was the max overclock you achieved with stock voltage in terms of set and actual clock frequency?


----------



## esebey

LtMatt said:


> What was the max overclock you achieved with stock voltage in terms of set and actual clock frequency?


I don't remember the max clock with stock voltage, but I scored 21,764 in Time Spy. This is what I get at 1.25v: 2850MHz set, 2788MHz average.


----------



## LtMatt

CS9K said:


> Aye, I am curious how far yall have been able to push your XTXH GPU's. Mine seems a bit of an oddity. Normally, the voltage required follows a nice, mostly-ish-predictable exponential curve upward as clock speed goes up, but my RX 6900 XT LC in my water block, seems to hit a wall at 2850MHz. It will run 2830MHz all day with no errors, and even runs 2775 stable if I set 1175mV as the max voltage via MPT (my current daily-driver tune). My GPU will almost do 2800MHz at 1175mV. Almost.
> 
> I say all of this because I notice that, even on water with hotspot temps in the low 70's under max load, the firmware's Vdroop gets VERY aggressive when core amperage gets above about 330A or so, and it's almost always during that aggressive Vdroop when it crashes out at 2850MHz and above. I know the VRM on the board is good for it, my PSU is good for it, I'm using hand-made 16ga cables on the card... and I don't mind hotspot getting up into the 80's under max benchmark load nor do I mind wattage going up near 500W, but 350A/460W is as far as it'll let itself go before super aggressive Vdroop kicks in.
> 
> I didn't have this issue on my other Radeon GPU's, but I never pushed those anywhere near this hard,
> 
> Hm, is there a setting you other XTXH users change via MPT to make the GPU ease up on the Vdroop slightly to push core clocks higher? For the record, I have not and will not be using temp dependent Vmin on this card, and I do know and appreciate that Vdroop is a good thing, especially at super-high current draw (I got to learn all about that when wrapping my head around Intel 9th-11th gen overclocking🔥)


You have a good sample based on what you've said regardless of the Vdroop.


----------



## J7SC

CS9K said:


> Aye, I am curious how far yall have been able to push your XTXH GPU's. Mine seems a bit of an oddity. Normally, the voltage required follows a nice, mostly-ish-predictable exponential curve upward as clock speed goes up, but my RX 6900 XT LC in my water block, seems to hit a wall at 2850MHz. It will run 2830MHz all day with no errors, and even runs 2775 stable if I set 1175mV as the max voltage via MPT (my current daily-driver tune). My GPU will almost do 2800MHz at 1175mV. Almost.
> 
> I say all of this because I notice that, even on water with hotspot temps in the low 70's under max load, the firmware's Vdroop gets VERY aggressive when core amperage gets above about 330A or so, and it's almost always during that aggressive Vdroop when it crashes out at 2850MHz and above. I know the VRM on the board is good for it, my PSU is good for it, I'm using hand-made 16ga cables on the card... and I don't mind hotspot getting up into the 80's under max benchmark load nor do I mind wattage going up near 500W, but 350A/460W is as far as it'll let itself go before super aggressive Vdroop kicks in.
> 
> I didn't have this issue on my other Radeon GPU's, but I never pushed those anywhere near this hard,
> 
> Hm, is there a setting you other XTXH users change via MPT to make the GPU ease up on the Vdroop slightly to push core clocks higher? For the record, I have not and will not be using temp dependent Vmin on this card, and I do know and appreciate that Vdroop is a good thing, especially at super-high current draw (I got to learn all about that when wrapping my head around Intel 9th-11th gen overclocking🔥)


...basically, you're looking for LLC-level control, if I understand you correctly?

On another note, has anyone seen a description/guide for 'DPM Overrides' and 'Throttler' (located in MPT / Advanced)?


----------



## cfranko

I want to buy a Nitro+ SE because it looks good. I heard the cooling is terrible; is that true? If so, would liquid metal fix the temps?


----------



## VeganJoy

Just set up my watercooling loop with my 6900 XT OCF, so I figured I'd drop by and see what kind of settings to run on it. Bykski block with 420+280mm rads in a P500A, so tons of airflow. Maxing out the stock power limit of 380W, I'm hitting 60-65C core and 85-90C hotspot temps with coolant around 37-40C. Is that normal for a custom loop? Thanks!


----------



## LtMatt

cfranko said:


> I want to buy a Nitro+ SE because it looks good. I heard the cooling is terrible is that true? If so would liquid metal fix temps?


It's the stock RGB that makes it look so good IMO. I believe the cooler design is pretty much identical to the cheaper version bar the backplate. 

I don't think it's terrible and I'm sure LM would help. I think it flat out voids warranty with Sapphire though.


----------



## cfranko

LtMatt said:


> I don't think it's terrible and I'm sure LM would help. I think it flat out voids warranty with Sapphire though.


My friend has the regular Nitro+ (non-SE) with the stock paste, and it's sitting at 100C hotspot at 270 watts, which is terrible IMO. But I'll probably be getting it regardless since the RGB is incredible.


----------



## LtMatt

cfranko said:


> My friend has the regular Nitro+ (non-SE) with the stock paste, and it's sitting at 100C hotspot at 270 watts, which is terrible IMO. But I'll probably be getting it regardless since the RGB is incredible.


Must be bad contact as that's stock cooler performance on the hotspot.


----------



## dagget3450

So silly question: what's the max wattage and TDC for the reference 6900 XT? I've been out of it for a while. I want to push my GPUs but also be somewhat safe.
The highest I have tried so far is 340W and a TDC of 360.
So far I've not had any crashes, but I am slowly walking up the clocks and wattage.


----------



## ArchStanton

I'd like to finish reading the entirety of this thread before proceeding with MPT (page 115 currently), but I thought I would share my best results thus far without it. AORUS Radeon™ RX 6900 XT XTREME WATERFORCE WB 16G (repasted/puttied front and rear, no LM _yet_). This run was completed with 2700-2800MHz core, fast memory timings, 2150MHz memory, 1.2v, and +15% PL. I left the window open last night to bring the temps down as low as I could.


----------



## VeganJoy

ArchStanton said:


> I'd like to finish reading the entirety of this thread before proceeding with MPT (page 115 currently), but I thought I would share my best results thus far without it. AORUS Radeon™ RX 6900 XT XTREME WATERFORCE WB 16G (repasted/puttied front and rear, no LM _yet_). This run was completed with 2700-2800MHz core, fast memory timings, 2150MHz memory, 1.2v, and +15% PL. I left the window open last night to bring the temps down as low as I could.


Damn, I need to figure out how to OC this Ryzen chip; my CPU score is severely lacking lol. My GPU is completely power limited, and the best I've seen so far is 23500 with a bit of an undervolt. It's set for 2750MHz but actual seems to be 2500-2600.


----------



## ZealotKi11er

Still cannot run my card with anything over 350w. I am hitting 60c/95c pretty quickly.


----------



## J7SC

ArchStanton said:


> I'd like to finish reading the entirety of this thread before proceeding with MPT (page 115 currently), but I thought I would share my best results thus far without it. AORUS Radeon™ RX 6900 XT XTREME WATERFORCE WB 16G (repasted/puttied front and rear, no LM _yet_). This run was completed with 2700-2800MHz core, fast memory timings, 2150MHz memory, 1.2v, and +15% PL. I left the window open last night to bring the temps down as low as I could.
> 
> View attachment 2553456


Nice! FYI, the HWInfo sheet says 2,470 MHz for VRAM, not 2150? Anyway, once you start hitting MPT, your real fun begins 



VeganJoy said:


> damn i need to figure out how to oc this ryzen chip, cpu score is severely lacking for me lol. my gpu is completely power limited and best ive seen so far is 23500 with a bit of an undervolt. its set for 2750mhz but actual seems to be 2500-2600
> View attachment 2553463


It's a great start, but your CPU score is indeed a bit low for a 5950X; my XTX 6900XT (regular XTX) is paired w/ a 3950X which has a typical CPU score of 15.7k in TS...my 5950X (paired w/ a 3090 Strix) gets 17K+ in CPU score. That said, your GPU score is great already and will pick up more w/ faster CPU settings in TS, not to mention custom tuning your GPU with MPT et al if you have the cooling for it...


----------



## ArchStanton

J7SC said:


> HWInfo sheet says 2,470 MHz for VRAM,


Yes, and I haven't yet had time to check around and see if others experience the same memory-speed misreporting with HWiNFO64. Most folks just seem to post bench scores without background info from HW or even RS. In my newbishness, I even wondered if this was another Radeon slider that didn't do exactly what its labeling would imply 🤷‍♂️.

I'm currently reading all the info @ UL Benchmarks Time Spy. After getting further in this thread, I have the impression I can easily raise my CPU score with a TS specific BIOS profile (static OC or DOC switching on the C8DH ). I also plan to check with SMT off.

Thank you for the encouragement .


----------



## J7SC

ArchStanton said:


> Yes, and I haven't yet had time to check around and see if others experience the same with HWiNFO64 misreporting memory speeds. Most folks just seem to post bench score without background info from HW or even RS. In my newbishness, I even wondered if this was another Radeon slider that didn't do exactly what its labeling would imply 🤷‍♂️.
> 
> I'm currently reading all the info @ UL Benchmarks Time Spy. After getting further in this thread, I have the impression I can easily raise my CPU score with a TS specific BIOS profile (static OC or DOC switching on the C8DH ). I also plan to check with SMT off.
> 
> Thank you for the encouragement .


...once deep in the bowels of MPT, I needed less encouragement - and more restraint


----------



## ArchStanton

J7SC said:


> ...once deep in the bowels of MPT, I needed less encouragement - and more restraint


I find Time Spy _is_ far more engaging than that little blue bar in CPU-Z, yes 🤣.



VeganJoy said:


> damn i need to figure out how to oc this ryzen chip


Per J7's comments, I put together a "quick and dirty" Time Spy specific BIOS profile. Static OC is free points in the CPU test (we don't have to balance the OC for AVX/AVX2 in regular Time Spy). I will certainly be making efforts on this front myself going forward.


----------



## CS9K

LtMatt said:


> You have a good sample based on what you've said regardless of the Vdroop.


Aye, I am grateful for the sample that I have. I was just picking the minds of others to see what tactics I may have missed that aren't temperature-dependent Vmin.


----------



## J7SC

ArchStanton said:


> I find Time Spy _is _far more engaging than that little blue bar in CPU-Z yes 🤣.
> 
> 
> Per J7's comments, I put together a "quick and dirty" Time Spy specific BIOS profile. Static OC is free points in the CPU test (we don't have to balance the OC for AVX/AVX2 in regular Time Spy). I will certainly be making efforts on this front myself going forward.


My CPU-Z has a red bar, but CPU-Z has nothing to do with it anyhow, being a different metric. When it comes to GPU benches, my favourite is Port Royal, though I still have a weak spot for Catzilla - it is just plain weird to watch, even repeatedly.

Re. my comment on MPT and restraint over encouragement, I was referring to some hardwareluxx pages on MPT where they raise the question of 'GPU 1.3v? 1.4v? 600 W?'


----------



## ArchStanton

J7SC said:


> Catzilla


It predates my gallivant into overclocking beyond turning XMP on, but I saw it in use in some random J2C video on YT.



J7SC said:


> I was referring to some hardwareluxx pages on MPT whereby they raise the question of 'GPU 1.3v ? 1.4v ? 600 W ?'


I suspect we'll all be on our next platforms before I feel the need to learn German 😂.


----------



## LtMatt

ZealotKi11er said:


> Still cannot run my card with anything over 350w. I am hitting 60c/95c pretty quickly.


Is that on the reference AIO XTXH?


----------



## dagget3450

What's better when binning these, higher mem clocks or higher GPU core clocks? I seem to have one with better memory clocks and one with better core clocks.


----------



## J7SC

dagget3450 said:


> What's better if binning these, higher mem clocks or higher GPU core clocks? I seem to have one with better memory clocks and one with better core clocks?


...There's the obvious question of price differentials, especially as the 6950 XT is close to releasing. Next, I assume both your samples are the same genre, i.e. XTXH (the regular XTX is a different beast). There's also the question of how you are going to cool them (i.e. stock air, custom water) and what your primary use case is. 

Finally, it depends on just how much the GPU clocks are higher/lower, and by how much the VRAM clocks diverge. If the GPU clocks are within 30 MHz or so, my personal preference would be the card with the (much?) better VRAM, simply because tools such as MPT allow for easier modding of the GPU clocks.


----------



## dagget3450

J7SC said:


> ....There's the obvious question of price differentials, especially as 6950XT are close to releasing. Next, I assume both your samples are the same genre, ie. XTXH (regular XTH is a different beast). There's also the question of how you are going to cool it (ie. stock air, custom water) and what your primary use case is.
> 
> Finally, it depends on just by how much the GPU clocks are higher / lower, and by how much the VRAM clocks diverge. If the GPU clocks are within 30 MHz or so, my personal preference would be the card with the (much?) better VRAM, simply because tools such as MPT allow for easier modding of the GPU clocks.


They are reference cards and I have had them since launch, basically. Right now they are under water, and will remain so, but they'll go into different systems as soon as I am done overclocking and benchmarking. Only gaming with them.

I will test each one by itself soon. I can say they are much more stable under water than on the stock cooler, thank goodness for that.

I am having trouble deciphering clock speeds/averages because I'm running mGPU/linked, which only reads the primary GPU in things like 3DMark from what I can tell.

I used the "Radeon" auto overclock in the UI to see what it came up with:
GPU 1: 2604 core / 2090 mem
GPU 2: 2548 core / 2150 mem

That gave me a baseline of sorts.


----------



## J7SC

dagget3450 said:


> They are reference and i have had them since launch basically. Right now they are underwater, and will remain so, but in different systems soon as i am done overclocking and benchmarking. only gaming with them
> 
> i will test each one by itself soon, i can say they are much more stable underwater than on the stock cooler, thank goodness for that.
> 
> I am having trouble deciphering clockspeeds/avgs due to using Mgpu/linked which is only reading primary gpu in things like 3dmark from what i can tell.
> 
> i used "radeon" overclock in UI to see what it came up with.
> gpu 1 2604 core - mem 2090
> gpu2 2548 core - mem 2150
> 
> that gave me a base line kind of


...if you haven't already done so, I would run each card at the same OC settings (say VRAM max/fast timings, GPU 2450 min - 2750 max, PL at +15%, stock BIOS, same ambient and background apps) with HWInfo open in the background...probably a couple of different 4K tests to really give the VRAM a workout (rinse and repeat re. random variance). This assumes, of course, that you can disable one or the other water-cooled card without too much trouble. HWInfo on effective clocks & watts should be helpful, along with fps etc.
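...and if it helps on the logging side, here's a rough sketch of boiling each HWInfo CSV log down to averages so the two cards compare apples-to-apples. NB: the column names below are placeholders, not guaranteed to match; rename them to whatever your HWInfo export actually calls effective clock and ASIC power:

```python
# Hedged sketch: average effective clock & power from a HWInfo-style CSV log.
# Column names are placeholders -- rename to match your actual export headers.
import csv
from statistics import mean

def summarize_run(path, clock_col="GPU Clock (Effective) [MHz]",
                  power_col="GPU ASIC Power [W]"):
    clocks, watts = [], []
    with open(path, newline="", encoding="utf-8", errors="ignore") as f:
        for row in csv.DictReader(f):
            try:
                clocks.append(float(row[clock_col]))
                watts.append(float(row[power_col]))
            except (KeyError, ValueError, TypeError):
                continue  # skip repeated headers / blank rows in the log
    return mean(clocks), mean(watts)

# Compare both cards logged under the same settings:
# for card in ("gpu1_run.csv", "gpu2_run.csv"):
#     mhz, w = summarize_run(card)
#     print(card, f"avg {mhz:.0f}MHz @ {w:.0f}W")
```

Run each card's log through it and the averages (plus your fps numbers) should make the better bin obvious.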


----------



## LtMatt

dagget3450 said:


> They are reference and i have had them since launch basically. Right now they are underwater, and will remain so, but in different systems soon as i am done overclocking and benchmarking. only gaming with them
> 
> i will test each one by itself soon, i can say they are much more stable underwater than on the stock cooler, thank goodness for that.
> 
> I am having trouble deciphering clockspeeds/avgs due to using Mgpu/linked which is only reading primary gpu in things like 3dmark from what i can tell.
> 
> i used "radeon" overclock in UI to see what it came up with.
> gpu 1 2604 core - mem 2090
> gpu2 2548 core - mem 2150
> 
> that gave me a base line kind of


I've never seen a 6900 XT for which the Auto Mem Overclock feature didn't suggest 2150MHz. 

It would be interesting to know at what frequency you start losing Time Spy graphics score with that sample.


----------



## tootall123

tootall123 said:


> GPU Model: "GIGABYTE AORUS RX 6900 XT Xtreme Waterforce XTXH"
> Default Clock: 2669
> ASIC Quality: 92.6%


Where would be the best place to try and sell this? I barely get to use my PC these days unfortunately.


----------



## dagget3450

J7SC said:


> ...if you haven't already done so, I would run each card at the same OC settings (say VRAM max / fast timings, and GPU 2450 min - 2750 max, PL at +15%, stock bios, same ambient and background apps) w/ HWInfo open in the background...probably a couple of different 4K test to really give the VRAM a workout (rinse and repeat re. random variance). This assumes of course that you can disable one or the other w-cooled card w/o too much trouble. HWInfo on effective clocks & watts should be helpful, along w/ fps etc.


I can attest that when they were on air, they didn't clock well at all. However, I was on a different platform then, and they were on the reference air coolers.

On this 5900X they are clocking way above before. I've had them both around 2700ish max clocks, with averages closer to 2500/2600ish, and I was able to take the #1 spot for my exact hardware in Time Spy Extreme. I like to OC/bench, but I am getting to the point where I want to test the 5900X in very specific gaming settings.

I am looking for a pump for my 7940X, and I'd like to compare it at 4.9 vs the 5900X in a specific gaming load.

Anyway, I want to bin the best one for my main game box, and probably give the other to my wife's PC.


----------



## ArchStanton

dagget3450 said:


> probably give the other to my wife's PC


----------



## CS9K

CS9K said:


> Aye, I am curious how far yall have been able to push your XTXH GPU's. Mine seems a bit of an oddity. Normally, the voltage required follows a nice, mostly-ish-predictable exponential curve upward as clock speed goes up, but my RX 6900 XT LC in my water block, seems to hit a wall at 2850MHz. It will run 2830MHz all day with no errors, and even runs 2775 stable if I set 1175mV as the max voltage via MPT (my current daily-driver tune). My GPU will almost do 2800MHz at 1175mV. Almost.
> 
> I say all of this because I notice that, even on water with hotspot temps in the low 70's under max load, the firmware's Vdroop gets VERY aggressive when core amperage gets above about 330A or so, and it's almost always during that aggressive Vdroop when it crashes out at 2850MHz and above. I know the VRM on the board is good for it, my PSU is good for it, I'm using hand-made 16ga cables on the card... and I don't mind hotspot getting up into the 80's under max benchmark load nor do I mind wattage going up near 500W, but 350A/460W is as far as it'll let itself go before super aggressive Vdroop kicks in.
> 
> I didn't have this issue on my other Radeon GPU's, but I never pushed those anywhere near this hard,
> 
> Hm, is there a setting you other XTXH users change via MPT to make the GPU ease up on the Vdroop slightly to push core clocks higher? For the record, I have not and will not be using temp dependent Vmin on this card, and I do know and appreciate that Vdroop is a good thing, especially at super-high current draw (I got to learn all about that when wrapping my head around Intel 9th-11th gen overclocking🔥)


So, I'm now approaching overclocking with the following statement in mind as an absolute: "2830MHz is the highest this card will ever go at full (stock) voltage."

I incrementally stepped up the max voltage via MPT and ended yesterday evening with 1181mV (I typed 1180mV, but it changed to 1181mV), with a core clock setting in the Adrenalin Control Panel of 2800MHz. Stable in several full Time Spy runs, and passed the Time Spy stress test with 99.7%.

I'll take it! This XTXH, despite not clocking up into the 2900MHz range, is a delightfully excellent sample at the clock speeds that it _does_ run at. 🧡


----------



## bigred00

Hi all! New to the forum and have a question. I just recently picked up a 6900 XT Gaming Z (XTXH version) and noticed my temps have been higher than I'd like. I am looking to put this in a custom loop, and noticed Bykski is the only company with a compatible block. Do y'all have any experience with them; are they a good brand? And what would be needed to get this card waterblocked for the best performance, paste/thermal pad wise?


----------



## rodac

I am a little bit late to the party; sorry, I usually do not obsessively monitor these threads, but I should have kept up.

Towards early March, I saw a post from someone who was frustrated about the clock speed of their *Sapphire Toxic Extreme 6900 XT EE* (the fastest Sapphire as of now).
Their clock did not reach 2700+, and then there was a discussion of whether it was worth the money and whether they should RMA it. LtMatt then commented in the thread and got a few of us to check our ASIC %.
My GPU, a Sapphire Toxic Extreme 6900 XT EE, has the rather low *ASIC value of only 85.6%*.

I would like to report my own observations. I do not claim to understand or know things, but this is what I noticed.

When I bought that same GPU back in June 2021, of course I wanted to get it to run fast, given the kind of money I had spent, nearly £2K.

I am pleased to report that I could get the card to reach up to 2730MHz, but these were just peaks, and often the hotspot reached 99C as well.
Most of the time the card ran at around 2500 and rarely reached those highs. 

The best 3DMark scores I got out of the box, without any manual tuning and with only Sapphire TriXX Boost enabled (the funny skull with a gas mask), were approximately 20700 in Time Spy (1440) and 10600 in Time Spy Extreme.

Then, given that I am not a hardcore tuner at all, just very keen on GPUs and high tech, I stopped measuring things and gamed on.

Recently, I got into a 3DMark fever again, and after a few AMD Adrenalin driver updates (4 updates, I believe), I was puzzled to see that *the GPU no longer reaches those highs and peaks of 2730MHz any more.*

So, I want to let you know that the very same Sapphire Toxic Extreme 6900 XT EE that would run <<up to>> 2700MHz before has stopped performing as fast; I ran 3DMark and got lower scores of around 20500 points (Time Spy 1440).

What do you think happened? Could it be the Adrenalin drivers? Incidentally, I also noticed that the hotspot now only reaches around 90, so up to 10 degrees less, better for longevity but with lower clock speeds.

Can anyone explain that?


----------



## LtMatt

rodac said:


> I am a little bit late to the party, sorry I usually do not obessessivly monitor those threads, but I should have kept up
> 
> Towards early March, I saw a post from someone who was frustated about the clock speed of their *Sapphire Toxic Extreme 6900 XT EE* (the fastest Sapphire as of now).
> His clock did not reach 2700 + and then there was this discussion of why is worth for the money and they / he should RMA it. Lmatt then commented in the thread and got a few of us check our ASIC %
> My GPU, a Sapphire Toxic Extreme 6900 XT EE has the rather low *ASIC value of 85.6% only*.
> 
> I would like to report my own observations, I do not make any claims that I understand or know things but that is what I noticed.
> 
> When I bought that same GPU back in 2021 June, of course I wanted to get it to run fast, given the kind of money I had spent, nearly £ 2K.
> 
> I am pleased to report that I could get the card to reach up to 2730 Mhz but these were just peaks, often the Hot Spot also reached 99 C as well.
> Most of the time the card ran at around 2500 and rarely reached those highs.
> 
> The best 3D Mark scores I got out of the box without any manual tuning and only the Sapphire TRIXX Boost (the funny skull with a gas mask enabled) was a score of Time Spy (1440) 20700 and Time Spy Extreme of 10600 approx.
> 
> Then, given that I am not a hard core tuner at all, just very keen on GPU and high tech, I stopped measuring things and gamed on.
> 
> Recently, I got into a 3DMark fever again, a after few AMD Adrenaline driver upgrade (4 updates I believe), I was puzzled to see that *the GPU no longer reaches those highs and peaks of 2730 Mhz any more.*
> 
> So, I want to let you know that that very same Sapphire Toxic Extreme 6900 XT EE GPU that would run <<up to >> 2700 Mhz before, stopped performing as fast, I ran 3D Mark and got lower scores of around 20500 points (Time Spy 1440)
> 
> What do you think happened ? How about the Adrenaline drivers ? Incidentally, I also noticed that the hot spot now only reaches around 90, so that up to 10 degrees less, better for longevity but lower clock speeds.
> 
> Anyone can explain that ?


Highly likely the paste has dried; it happened on two of my Toxics. Changing the paste should fix the issue, though you'll likely void the warranty. You can buy the warranty stickers Sapphire uses on AliExpress; I have a few and could maybe send you a couple, as I live in the UK, or you can order them from AliExpress yourself. If you do change the thermal paste, it might be better to use a standard grey paste and not liquid metal; that way, if you ever had to RMA the GPU, they may not notice, as the warranty stickers will be intact. If you use liquid metal they would surely notice, but you will get the best possible temps, better than with normal paste. YMMV.


----------



## ArchStanton

bigred00 said:


> What would be needed to get this card waterblock for the best performance paste/thermal pad wise?


The consensus on pastes for the GPU die seems to be "the thicker the better" unless you re-paste every month or two. So Arctic MX-5, Gelid GC-Extreme, or Thermal Grizzly Kryonaut Extreme (not the original Kryonaut unless you plan to reapply often) come to mind for me. I will let others advise you on pads, though I myself have become a major fan of thermal putty in place of pads (TG-PP-10). If you decide to give putty a try, keeping your fingers cold but dry makes it far easier to work with. There are multiple places here on the forum where others have offered step-by-step how-tos on putty application.

Edit: Liquid metal will likely outperform any paste on the GPU die, but there is a risk/reward decision to be made with it.


----------



## dagget3450

Hey, side question, since temps seem to be a big topic here. 

Where is a good link/data/guide on temps and GPU throttling for the 6900 XT?
Also, what's an acceptable range for hotspot vs core temp deltas? (Under water I notice up to a 10C difference depending on load/wattage.)
I'd like to compare using "normal thermal paste" vs "liquid metal" as an example.


----------



## ArchStanton

dagget3450 said:


> what's an acceptable range for hotspot temps vs core temp deltas?


I'm still playing "catch-up" in this thread (I have read pages 1-145 so far), but I have the impression most folks are happy with a delta of <=20C. Back in the bowels of this thread it is not uncommon for individuals to report an overall reduction in temperatures of 5-15 degrees and a reduction in deltas of 10-20 degrees after an application of LM, though many of the people sharing those results had temperature "issues" out of the box (bad paste/pads/mount/etc.).
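If you'd rather check a logged run than watch sensors live, the math is just hotspot minus edge. A trivial sketch using the ~20C rule of thumb from this thread (a community guideline, not an official AMD number):

```python
# Core-to-hotspot delta check; the <=20C threshold is the rough consensus
# in this thread, not an official AMD figure.
def delta_verdict(edge_c, hotspot_c, max_delta=20.0):
    delta = hotspot_c - edge_c
    status = "ok" if delta <= max_delta else "check mount/paste/pads"
    return f"delta {delta:.0f}C: {status}"

print(delta_verdict(55, 70))  # within the rule of thumb
print(delta_verdict(60, 92))  # the kind of gap people repaste over
```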


----------



## dagget3450

ArchStanton said:


> I'm still playing "catch-up" in this thread (have read pages 1-145 so far), but I have the impression most folks are happy with a delta of <=20. Back in the bowels of this thread it is not uncommon for individuals to report an overall reduction in temperatures of 5-15 degrees and a reduction in deltas of 10-20 degrees after an application of LM. Though many of the people sharing those results had temperature "issues" out of the box (bad paste/pads/mount/etc.)


Yeah, it would be nice to consolidate the data into a thread summary pin, or even its own thread, for those needing it. Anyway, it's a massive thread to read through.


----------



## rodac

LtMatt said:


> Highly likely the paste has dried, it happened on two of my Toxics. Changing the paste should fix the issue, you likely void warranty. You can buy the warranty stickers Sapphire uses on Ali Express, I have a few. I could maybe send you a couple as I live in the UK if needed. Or you can order them from Ali Express yourself. If you do change thermal paste, might be better to use a standard grey paste and not liquid metal. Maybe if you ever had to RMA the GPU they may not notice as the warranty stickers will be intact. If you use liquid metal they would surely notice, but you will get the best possible temps and better than using normal paste. YMMV.


Thanks for your light speed reply on this topic, I know that you own the same card and you may well have an idea. I must admit that the paste did not cross my mind, not even bit.
But it kind of makes sense indeed, granted that I do not view myself as a gamer given that have a full time job and overtime to do to be able to pay for those luxuries but there is no denying that I must have accrued no less than 500 hours of use at full power (easy to check based on Steam's time counter) in less that a year, hmm 1.8 hours a day in average.
Well that proves me wrong, I am a gamer 

And we are talking about GPU intensive load only for those 500 hours (I use another machine for light work loads, basic browsing and vmware work)

Interesting to note that the efficiency of a GPU can decrease within a year of moderate use. That is probably not too hard a job to open up that specific card but then what if this is the AIO efficiency that has decreased ? It is decently mounted I believe, right at the top, the best spot for the pump.

So the 2 of you recommend that liquid metal paste. About 6 months ago, I managed to change and replace a screen of an iphone, so given the kind of clock work it is, I should be able to manage that job, on the other hand when I think about the price of this thing.... Ideally it would help to see a tutorial but given that very few own this card, this is not going to be easy to find a tutorial online.

The new nvidia 3090 ti just got released today for about the same price that I paid for the toxic EE, there was still stock when I checked, not like 2020-21 for sure.

Thanks for your kind offer to let me have the stickers, very nice of you. If we think about it, how long is the warranty? Typically a year, so I have only about 2 months left, so not a deal breaker for me.
I am more concerned about s***wing up the job.

Definitely tempted to do that paste job, given that you have seen this before. Where can I buy this paste ?

Cheers


----------



## CS9K

CS9K said:


> So, now that I'm approaching overclocking with the following statement as an absolute, in mind: "2830MHz being the highest that it will ever go at full (stock) voltage"
> 
> I incrementally stepped up the max voltage via MPT and ended yesterday evening with 1181mV (I typed 1180mV, but it changed to 1181mV), with a core clock setting in the Adrenalin Control Panel of 2800MHz. Stable in several full Time Spy runs, and passed the Time Spy stress test with 99.7%.
> 
> I'll take it! This XTXH, despite not clocking up into the 2900MHz range, is a delightfully excellent sample at the clock speeds that it _does_ run at. 🧡


After many hours of babysitting Superposition 4K Optimized, I've settled in on a daily driver tune, finally.

To all: Remember, every GPU is different, so my numbers won't look like "yours" if any of yall go through stability testing as I did.


1181mV (MPT)
2800MHz
2370MHz+Fast
2075MHz fclk (MPT)
450W/350A board/ASIC limits (MPT, max of 500W/385A @ +15%)

I'm quite happy with my RX 6900 XT LC with my EK water block. The memory speeds have helped tremendously at native 4K, which is what I game at. I've never owned a "binned" flagship GPU like this, and it makes me giddy 🧡

For those curious, the 4K Optimized score landed _around_ the mid-18,000s in Unigine Superposition. I'll post screencaps when I'm back on public drivers.


----------



## J7SC

CS9K said:


> After many hours of babysitting Superposition 4K Optimized, I've settled in on a daily driver tune, finally.
> 
> To all: Remember, every GPU is different, so my numbers won't look like "yours" if any of yall go through stability testing as I did.
> 
> 
> 1181mV (MPT)
> 2800MHz
> 2370MHz+Fast
> 2075MHz fclk (MPT)
> 450W/350A board/ASIC limits (MPT, max of 500W/385A @ +15%)
> 
> I'm quite happy with my RX 6900 XT LC with my EK water block. The memory speeds have helped tremendously at native 4K, which is what I game at. I've never owned a "binned" flagship GPU like this, and it makes me giddy 🧡
> 
> For those curious, the 4K Optimized score landed _around_ the mid-18,000s in Unigine Superposition. I'll post screencaps when I'm back on public drivers.


That sounds like a great custom 6950XT to me already  ...anyway, great work ! Now, for the curious...you may recall my Superposition 4K scores that are in the neighborhood, but if it is not too much trouble, can you run Superposition with a VRAM limit of 2150 / fast timings as well as 'unrestricted' VRAM (and everything else the same), please, on your XT LC? As you know, the 2150 VRAM limit is my fav thing to complain about (repeatedly) re. my 3950X/XTX combo, and I would love to know what is across that forbidden wall ! How much is left at the gate, so to speak...


----------



## CS9K

J7SC said:


> That sounds like a great custom 6950XT to me already  ...anyway, great work ! Now, for the curious...you may recall my Superposition 4K scores that are in the neighborhood, but if it is not too much trouble, can you run Superposition with a VRAM limit of 2150 / fast timings as well as 'unrestricted' VRAM (and everything else the same), please, on your XT LC? As you know, the 2150 VRAM limit is my fav thing to complain about (repeatedly) re. my 3950X/XTX combo, and I would love to know what is across that forbidden wall ! How much is left at the gate, so to speak...


While the RX 6950 XT should have the 18 Gbps memory, I would be _very_ surprised if they all had the XTXH die and the upgraded VRM MOSFETs that the LC does. That is to say, I'm not mad about my Unicorn GPU 🦄

And yeah, I can eventually get you those numbers. For a point of reference, at 2700MHz, 2150+Fast, 2100 fclk, my RX 6900 XT XTX on water would do 17,800-ish in 4k Optimized on the same PC.

As for the improvements at 4K: The faster VRAM did increase max performance slightly, but the largest improvement is in situations where 4K is juuuust a bit too much for the cache, and the GPU has to lean more-heavily on the memory. The "chug" in those situations is MUCH less pronounced. The extra 100MHz core clock does most of the work for the improvement in max framerates, I feel.


----------



## J7SC

CS9K said:


> While the RX 6950 XT should have the 18 Gbps memory, I would be _very_ surprised if they all had the XTXH die and the upgraded VRM MOSFETs that the LC does. That is to say, I'm not mad about my Unicorn GPU 🦄
> 
> And yeah, I can eventually get you those numbers. For a point of reference, at 2700MHz, 2150+Fast, 2100 fclk, my RX 6900 XT XTX on water would do 17,800-ish in 4k Optimized on the same PC.
> 
> As for the improvements at 4K: The faster VRAM did increase max performance slightly, but the largest improvement is in situations where 4K is juuuust a bit too much for the cache, and the GPU has to lean more-heavily on the memory. The "chug" in those situations is MUCH less pronounced. The extra 100MHz core clock does most of the work for the improvement in max framerates, I feel.


...I meant your card sounds like a great 6950 XTXH LC; it's definitely a keeper. Btw, I haven't touched fclk on mine yet, but after running several tests with VRAM increments of 10 MHz, 2150 consistently scored highest. Then I kicked in 'fast timings' and scores improved again, so I don't think I could gain anything beyond getting rid of that annoying 2150 limit. It is also my primary work card for the home office, so I don't want to go too crazy (I have the 3090 for that). Anyway, that's why I asked you to run 2150 fast timings vs 2370 fast timings on the same system with otherwise identical settings, when it is convenient. It would show me what the VRAM limit _actually_ costs, so to speak. Maybe it's all just in my head 🥴


----------



## CS9K

J7SC said:


> ...I meant your card sounds like a great 6950 XTXH LC; it's definitely a keeper. Btw, I haven't touched fclk on mine yet, but after running several tests with VRAM increments of 10 MHz, 2150 consistently scored highest. Then I kicked in 'fast timings' and scores improved again, so I don't think I could gain anything beyond getting rid of that annoying 2150 limit. It is also my primary work card for the home office, so I don't want to go too crazy (I have the 3090 for that). Anyway, that's why I asked you to run 2150 fast timings vs 2370 fast timings on the same system with otherwise identical settings, when it is convenient. It would show me what the VRAM limit _actually_ costs, so to speak. Maybe it's all just in my head 🥴


I gotchyou, fam. Just, not at the moment; DM sent.


----------



## LtMatt

Here is my Superposition score using my Toxic EE at max stock voltage, which is game stable:

2750/2850 @1.2v
22.3.1
2124Mhz + FT
2200 FCLK
399W/350A PL/TDC
Windows 11 (screenshot says Win 10 for some reason)


----------



## LtMatt

I'm not a huge fan of Superposition, as I always find that it shows an average FPS/score increase with higher memory clocks up to around 2150MHz, whereas Time Spy and games show worse performance at 2150MHz. When I run 2124MHz in Superposition I get a slightly lower avg FPS/score, but my minimum FPS is 10+ higher. This behaviour has been the same for me across two separate Toxic EEs.



rodac said:


> Thanks for your light-speed reply on this topic. I know that you own the same card, so you may well have an idea. I must admit that the paste did not cross my mind, not even a bit.
> But it kind of makes sense. Granted, I do not view myself as a gamer, given that I have a full-time job (and overtime) to pay for these luxuries, but there is no denying that I must have accrued no less than 500 hours of use at full power (easy to check via Steam's time counter) in less than a year, hmm, 1.8 hours a day on average.
> Well that proves me wrong, I am a gamer
> 
> And we are talking about GPU intensive load only for those 500 hours (I use another machine for light work loads, basic browsing and vmware work)
> 
> Interesting to note that the efficiency of a GPU can decrease within a year of moderate use. That is probably not too hard a job to open up that specific card but then what if this is the AIO efficiency that has decreased ? It is decently mounted I believe, right at the top, the best spot for the pump.
> 
> So the 2 of you recommend that liquid metal paste. About 6 months ago, I managed to change and replace a screen of an iphone, so given the kind of clock work it is, I should be able to manage that job, on the other hand when I think about the price of this thing.... Ideally it would help to see a tutorial but given that very few own this card, this is not going to be easy to find a tutorial online.
> 
> The new nvidia 3090 ti just got released today for about the same price that I paid for the toxic EE, there was still stock when I checked, not like 2020-21 for sure.
> 
> Thanks for your kind offer to let me have the stickers, very nice of you, if we think about it, how long is the warranty, typically a year, so I have only 2 months and so left, so not a deal breaker for me.
> I am more concerned if I s***w up the job.
> 
> Definitely tempted to do that paste job, given that you have seen this before. Where can I buy this paste ?
> 
> Cheers


Taking it apart is easy; just make sure you use the right screwdriver head, as the screws that secure the shroud are quite easy to strip if you are not careful.

Here's a rough guide. 

1. Remove the two larger screws on the PCI bracket.
2. Remove five screws from the top/bottom of the shroud. 
3. Remove the retention bracket. No need to remove the backplate thankfully. 
4. Clean the GPU die and copper block on the pump.
5. Apply your paste of choice to the GPU die.
6. This is the trickiest part: aligning the pump and the die. To do this you'll need to have the back of the PCB facing you, with the pump resting on the desk. The four screw holes from the pump will poke through the back of the PCB. Try to balance the PCB so that all four screw holes poke through evenly; unless you balance it, two of them will be through much more than the others. This is important for getting the most even mount possible. Once you have them all through as evenly as possible, attach the retention bracket, keeping things even. 
7. Tighten in a diagonal pattern: two turns initially to get each corner of the bracket secured, then one turn on each, keeping the diagonal pattern. If you find that one corner of the retention bracket screws down further than the others, start again. 
8. The above is not too hard, but can be tricky until you get the hang of it. Getting a perfectly even mount will help temps significantly. 
9. Reattach the PCI bracket and then shroud screws. 
10. Pray everything powers back up. 😅

I'm sure other Toxic users can relate to the above if they have changed thermal paste.


----------



## CS9K

LtMatt said:


> Here is my superposition score using my Toxic EE at max stock voltage, which is game stable
> 
> 2750/2850 @1.2v
> 22.3.1
> 2124Mhz + FT
> 2200 FCLK
> 399W/350A PL/TDC
> Windows 11 (screenshot says Win 10 for some reason)
> View attachment 2553748


Could I trouble you to give Superposition a go with your max clock speed set to 2800MHz?


----------



## LtMatt

CS9K said:


> Could I trouble you to give Superposition a go with your max clock speed set to 2800MHz?


Sure, I'll run it.

Here is the score with 2700/2800. I now think my score at 2750/2850 is perhaps not as good as it should be.

I'll re-run and see if it improves.

Re-ran again at 2750/2850. It seems scores improve a little if you run it again afterwards.


----------



## CS9K

LtMatt said:


> Sure, I'll run
> 
> Here is the score with 2700/2800. I now think my score at 2750/2850 is perhaps not as good as it should be.
> View attachment 2553760
> 
> 
> Will re-run again and see if it improves.


Hm, I do wonder if I may be doing something wrong or inefficiently... your score at 2800MHz is more than 100 points higher than mine. Perhaps the 5950X is making up the lost ground, though I would think that with the faster memory on my GPU I would at least be on par with your score.

Hmm. I did run the numbers every 10 MHz from 2310 to 2400 with default _and_ fast timings to make sure I wasn't hitting ECC, and core clocks/effective clocks aren't indicating clock stretching. I'm also on the latest chipset drivers in Windows 11.


----------



## LtMatt

CS9K said:


> Hm, I do wonder if I may be doing something wrong/inefficiently... your score at 2800MHz is more than 100 points higher than mine. Perhaps the 5950X is making up for the lost ground, though I would think with the faster memory on my GPU I would at least be on-par with your score.
> 
> Hmm. I did run the numbers every 10 mhz from 2310 to 2400 with default _and_ fast timings to make sure I wasn't hitting ECC, and core clocks/effective clocks aren't indicating clock-stretching. I'm also on the latest chipset drivers in Windows 11, too.


I doubt it; 100 points is nothing, really. As long as you keep seeing improvements with clock changes.

Also, I get a 50MHz difference between clock frequency and effective clock. It happens in Time Spy too, yet in games, and even in Firestrike Ultra, I get no clock stretching. See the screenshot below from Superposition running at 2750/2850MHz.

Even at stock core clocks/memory clocks/voltage/power limits I see 50MHz clock stretching in Superposition and Time Spy, so it's not related to the overclock. Might have to try FCLK back at the default 1944 to see if that helps, as maybe that is causing it. - EDIT: 1940MHz FCLK made no difference to the 50MHz clock stretching. 

I need to repaste this sample with liquid metal as I am currently using standard thermal paste. Maybe that will help a little.


----------



## CS9K

LtMatt said:


> I doubt it, 100 points is nothing really. As long as you keep seeing improvements with clock changes.
> 
> Also I get a 50Mhz difference between clock frequency and effective clock. Happens in Timespy too. Yet in games and even Firestrike Ultra, I get no clock stretching. See the screenshot below from Superposition running at 2750/2850Mhz.
> View attachment 2553764
> 
> 
> Even at stock core clocks/memory clocks/voltage/power limits I see 50Mhz clock stretching in Superposition and Timespy, so it's not related to the overclock. *Might have to try FCLK back at default 1944 see if that helps as maybe that is causing it.*


Bingo. Do a few runs at 1940MHz fclk, then raise it 50 MHz, test again, and raise another 50. You'll see it bonk its head on SoC voltage pretty darn quick. Once I had my memory and core clock overclocks dialed in, 2075MHz was about as high as I could go without seeing the SoC voltage pegged at its maximum. Performance actually dropped off slightly if I set fclk higher, so 2075MHz it is.


----------



## LtMatt

CS9K said:


> Bingo. Do a few runs at 1940MHz fclk, then raise it 50 mhz, test again, raise 50. *You'll see it bonk its head on SoC voltage pretty darn quick*. Once I had memory and core clock overclocks dialed in, 2075MHz was about as high as I could go without seeing the SoC voltage pegged at its maximum. Performance did drop off slightly if I set fclk higher, so 2075MHz it is.


I did try 1940MHz and it made no difference to the clock stretching; everything is at pure stock now and it is still happening. At least performance seems okay going on your results.

What do you mean by the part I bolded above? I've never monitored SOC voltage before normally just leave it at default.


----------



## CS9K

LtMatt said:


> I did try 1940Mhz and it made no difference to the clock stretching, so everything at pure stock now and still happening. At least performance seems okay going on your results.
> 
> What do you mean by the part I bolded above?


It's late, I'm tired, and I shouldn't be on the internet; I didn't grab the full context of the part about fclk. My bad, carry on.


----------



## LtMatt

CS9K said:


> It's late, I'm tired, and I shouldn't be on the internet; I didn't grab the full context of the part about fclk. My bad, carry on.


Haha, no worries. I see what you mean though; I checked the SOC voltage with FCLK at stock and it was roughly 1.066v. 

Increasing FCLK to 2200MHz, the SOC voltage went up to 1.127v. Not sure what to make of this yet, so if you have any insight please share.


----------



## CS9K

LtMatt said:


> Haha, no worries. I see what you mean though; I checked the SOC voltage with FCLK at stock and it was roughly 1.066v.
> 
> Increasing FCLK to 2200MHz, the SOC voltage went up to 1.127v. Not sure what to make of this yet, so if you have any insight please share.


So, what I was getting at with the fclk voltage: in my experience, once SoC voltage is parked at 1.127V (or whatever 'your' card maxes out at, for others reading this), GPU performance stops scaling as fclk is raised further. 

Trivia time: "fclk" is actually the cache clock, though it's annoying (and interesting) that the voltage feeding it is tied to "SoC Voltage" on the GPU.


----------



## J7SC

LtMatt said:


> Haha, no worries. I see what you mean though; I checked the SOC voltage with FCLK at stock and it was roughly 1.066v.
> 
> Increasing FCLK to 2200MHz, the SOC voltage went up to 1.127v. Not sure what to make of this yet, so if you have any insight please share.


Interesting stuff !


----------



## LtMatt

CS9K said:


> So what I was getting at with the fclk's voltage: In my experience, when SoC voltage is parked at 1.127V (or whatever 'your' card maxes out at, for others reading this), GPU performance increases stop scaling as fclk is raised further, once SoC voltage hits that ceiling.
> 
> Trivia time: "fclk" is actually the cache, though it's annoying (and interesting) that the voltage to feed it is tied to "SoC Voltage" on the GPU.


Mine seems to start hitting max SOC voltage at around 2060MHz; at 2055MHz it's around 1.110v. I need to see if the performance increase stops at around 2060MHz vs 2200MHz.

I can't get to the bottom of the clock stretching issue, though. The fact that it happens at stock makes me think it's out of my control.

No clock stretching when running the Stress Test within AMD Software, or when running games that push power draw up to near 400W like Days Gone etc. 
See screenshot below, blue highlight is effective clock.


----------



## J7SC

LtMatt said:


> Mine seems to start hitting max SOC voltage at around 2060MHz; at 2055MHz it's around 1.110v. I need to see if the performance increase stops at around 2060MHz vs 2200MHz.
> 
> I can't get to the bottom of the clock stretching issue, though. The fact that it happens at stock makes me think it's out of my control.
> 
> No clock stretching when running the Stress Test within AMD Software, or when running games that push power draw up to near 400W like Days Gone etc.
> See screenshot below, blue highlight is effective clock.
> View attachment 2553775


When you folks mod FCLK to something like 2130, do you also increase the SOC frequency in MPT?


----------



## LtMatt

J7SC said:


> When you folks mod FCLK to something like 2130, do you also increase the SOC frequency in MPT?


No, I leave it at stock. 

I just tested Timespy and FCLK at 2200 showed a performance increase in graphics score vs 2055Mhz. For me at least I think I'll continue using 2200 FCLK since SOC temps are low anyway.


----------



## J7SC

LtMatt said:


> No, I leave it at stock.
> 
> I just tested Timespy and FCLK at 2200 showed a performance increase in graphics score vs 2055Mhz. For me at least I think I'll continue using 2200 FCLK since SOC temps are low anyway.


Tx. My stock SoC voltage in Superposition is typically around 1.111v, with SoC temps around 40 C. While I have tried a few other MPT specialties, I never touched FCLK because I'm maxed out at 2150 (effective 2140+) VRAM anyway. Once I finish a biggish work project, I'll play around with FCLK (aka cache) just to see...


----------



## LtMatt

J7SC said:


> Tx. My stock SoC voltage in Superposition is typically around 1.111v, with SoC temps around 40 C. While I have tried a few other MPT specialties, I never touched FCLK because I'm maxed out at 2150 (effective 2140+) VRAM anyway. Once I finish a biggish work project, I'll play around with FCLK (aka cache) just to see...


Do it. I've seen no negatives to having it set to 2100MHz-2200MHz across a few samples. My 6800 XT (which is a bit of a lemon all round) does not like going above stock, though. If it's set too high you'll get a random PC restart; set it drastically too high and Windows won't load. 25MHz increments should be fine for finding the max limit.
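If it helps, the stepping search is just this in pseudo-code; `run_stability_test` is a hypothetical stand-in for whatever pass/fail check you use (e.g. a Superposition run that either completes or crashes):

```python
# Sketch of the 25MHz FCLK stepping search described above.
# run_stability_test is a hypothetical stand-in for your actual
# stability check (benchmark pass = True, crash/restart = False).

def find_max_fclk(run_stability_test, start=1940, limit=2300, step=25):
    """Return the highest FCLK (MHz) on the step grid that still passes."""
    best = start
    fclk = start + step
    while fclk <= limit:
        if not run_stability_test(fclk):
            break  # first unstable step: the previous one was the max
        best = fclk
        fclk += step
    return best

# Fake tester that "crashes" above 2200MHz:
print(find_max_fclk(lambda f: f <= 2200))  # -> 2190 (last stable grid step)
```

In practice each failed step can mean a hard restart, so save your work before every run.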


----------



## jonRock1992

LtMatt said:


> No, I leave it at stock.
> 
> I just tested Timespy and FCLK at 2200 showed a performance increase in graphics score vs 2055Mhz. For me at least I think I'll continue using 2200 FCLK since SOC temps are low anyway.


I use 2300MHz FCLK, but my voltage is raised quite a bit. I haven't had any issues with it. I might do a superposition bench later.


----------



## LtMatt

jonRock1992 said:


> I use 2300MHz FCLK, but my voltage is raised quite a bit. I haven't had any issues with it. I might do a superposition bench later.


Balls to the wall Jon Rock, they call him! Please do, I would like to see your results.

Yes, I tried 2225-2250MHz and eventually had stability issues, so I would need more core (or perhaps SOC?) voltage to go further, but the small gains are not worth it IMO.


----------



## ZealotKi11er

LtMatt said:


> Is that on the reference AIO XTXH?


Yes, it's the Reference XTXH LC using a Bykski water block.


----------



## J7SC

LtMatt said:


> Do it. I've seen no negatives to having it set to 2100MHz-2200MHz across a few samples. My 6800 XT (which is a bit of a lemon all round) does not like going above stock, though. If it's set too high you'll get a random PC restart; set it drastically too high and Windows won't load. 25MHz increments should be fine for finding the max limit.


...yeah, I'm looking to find the time to really play around with FCLK, and I'll try your suggestion of 25MHz increments. I also left enough tubing so that I could move the 6900 XT XTX from the 3950X 'work' build on the upper left (spoiler) to the 5950X 'play/gamer' build in the lower right that usually runs the 3090; the related QD change-over would only take a minute or so. Then again, I'm not a fan of running different GPUs in the same Windows install even with DDU; there are always remnants in the registry which can balloon as time marches on. Maybe I should just build a free-standing test bench; I still have parts for it (frame, PSUs, etc.) from my HWBot days... then just use dedicated bench drives 




----------



## rodac

LtMatt said:


> Taking it apart is easy, just make sure you use the right screwdriver head as the screws that secure the shroud are quite easy to strip if you are not careful.
> Here's a rough guide.
> 1. Remove the two larger screws on the PCI bracket.
> 2. Remove five screws from the top/bottom of the shroud.
> 3. Remove the retention bracket. No need to remove the backplate thankfully.
> 4. Clean the GPU die and copper block on the pump.
> 5. Apply your paste of choice to the GPU die.
> 6. This is the trickiest part: aligning the pump and the die. To do this you'll need to have the back of the PCB facing you, with the pump resting on the desk. The four screw holes from the pump will poke through the back of the PCB. Try to balance the PCB so that all four screw holes poke through evenly; unless you balance it, two of them will be through much more than the others. This is important for getting the most even mount possible. Once you have them all through as evenly as possible, attach the retention bracket, keeping things even.
> 7. Tighten in a diagonal pattern: two turns initially to get each corner of the bracket secured, then one turn on each, keeping the diagonal pattern. If you find that one corner of the retention bracket screws down further than the others, start again.
> 8. The above is not too hard, but can be tricky until you get the hang of it. Getting a perfectly even mount will help temps significantly.
> 9. Reattach the PCI bracket and then shroud screws.
> 10. Pray everything powers back up. 😅
> I'm sure other Toxic users can relate to the above if they have changed thermal paste.


Thanks a lot @LtMatt, that is an amazingly clear list of steps you have produced.

Going by the timestamp I saw, it looks like you wrote this in the middle of the night; I am wondering how you get away with that, unless the timestamp is that of a US server and is incorrect.
I would not even get that treatment from paid support, thanks.
That is really very useful; now I just need to do this, take photos, and add them to your STEPS.
So I have checked and no, I was wrong: the warranty is not one year but two years, so doing this would indeed void my warranty, and I will need to rethink the stickers after all.

I am still trying hard to isolate another root cause; I re-ran (literally) all of the 3DMark tests I could load up, but the highest clock speed I could achieve was exactly 2692MHz.
So that is admittedly not too far off the 2730MHz that I would rarely reach anyway.

So within only 10 months the paste has dried. Maybe that is completely expected behaviour for a hardcore tuner, but for me it is a bit of a shock.
Specs will indeed probably change with the smallest adjustments.

I tried the AMD Adrenalin Software tuning and noticed that, when using the default out-of-the-box preset modes (quiet, balanced, rage), rage performs worse than the one-click Toxic Boost option in the Sapphire TriXX software.
If I go into Manual tuning mode in the AMD Adrenalin Software, it breaks everything: the screen goes blue and I need to reboot. It looked quite tempting though, as I was able to specify the max frequency, if it had worked, that is.

Coming back to your steps, I have (expectedly) a long list of questions ;-)

*1) Thermal paste*
Amazon UK -> "_One enjoy Liquid Metal Thermal Paste, 79 W/mK High Performance, Silver King Heatsink Paste, CPU for All Coolers, 1 Grams with Cleanser and Spreader_" OK?
I have the required alcohol to clean the surface of the chip, which I used when I built my machine back in June (isopropanol + wipes).
Should I spread the paste with the credit-card technique, or just put down a little bean-sized blob?

*2) Re-assembling the Pump*
Noted your point about the screws needing to be tightened evenly to optimise the spread of the compound. OK.
Noted your recommendation to turn the PCB upside down (I believe) so that the Asetek pump is resting on the desk and the screws come through the PCB, to apply even pressure more easily.
_<<back of the PCB facing you with the pump resting on the desk>>_
Not sure I can see those 4 screws at the back; there is a star shape and a square that exposes the back of the chip from what I can see (the W- and D-marked spots are strange).

Back plate of the Radeon AMD Sapphire 6900 XT Toxic Extreme (AIO cooled)

Question: it is then kind of no different from what I did when I installed my AIO on my CPU; if there is only one chip, the memory is cooled by the built-in fans.
How does the surface of that GPU chip compare with that of the Ryzen 9 5750X CPU?
It looks like the surface on which I would apply the paste is larger on the GPU than on the processor I have, so I need a bit more paste.

Navi 21 die size = 520 mm²
AMD Ryzen 9 5900X: under 300 mm²
*3) Retention Bracket*
I am simply not sure what that retention bracket is; I would have thought it is re-assembled last.

Thanks
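Putting rough numbers on the "bit more paste" point, using the figures quoted above (the Ryzen number is "under 300 mm²" per the post, so the real ratio is, if anything, larger):

```python
# Back-of-the-envelope area comparison from the figures above.
navi21_mm2 = 520   # Navi 21 die size
ryzen_mm2 = 300    # "under 300 mm2" per the post, treated as an upper bound

ratio = navi21_mm2 / ryzen_mm2
print(f"GPU die is roughly {ratio:.1f}x the area to cover")  # roughly 1.7x
```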


----------



## ArchStanton

rodac said:


> Should I spread the paste with the credit card technique or else put that little bean size ?


Short answer: NO!!!

Better answer(s):

How to safely apply liquid metal to things. - YouTube 

Skylake X i9 Delid & Liquid Metal Application Tutorial - YouTube 

Tested for a Year: How Often Should You Change Liquid Metal? - YouTube 

How to Remove Liquid Metal from a CPU & IHS - YouTube


----------



## rodac

ArchStanton said:


> Short answer: NO!!!
> Better answer(s):
> How to safely apply liquid metal to things. - YouTube
> Skylake X i9 Delid & Liquid Metal Application Tutorial - YouTube
> Tested for a Year: How Often Should You Change Liquid Metal? - YouTube
> How to Remove Liquid Metal from a CPU & IHS - YouTube


That is great, thanks for sharing


----------



## ArchStanton

rodac said:


> That is great, thanks for sharing


You are most welcome. Happy cotton swabbing 😎.


----------



## gilor80

Sufferage said:


> Latest, 22.3.1 has been great so far with my Toxic 6900XT 🤘


thank u!


----------



## LtMatt

ArchStanton said:


> You are most welcome. Happy cotton swabbing 😎.


And how tedious it is. 



rodac said:


> Thanks a lot @LtMatt , that is an amazing clear list of steps you have produced,
> 
> and if I am to go by the time stamp I saw, it looks like you wrote this in the middle of the night, I am wondering how you can get away with this unless the timestamp is that of a US server and is incorrect.
> I would not even get that treatment if that was paid support, thanks.
> That is really very useful, now I just need to do this and take photos and add them to your STEPS.
> So I have checked and no, I was wrong, the warranty is not one year but two years, so doing this would indeed void my warranty, I will need to rethink the stickers after all.
> 
> I am still trying hard to isolate another root cause, re-ran again all of the 3D mark test I could load up (literally) but the highest clock speed I could achieve was exactly 2692 Mhz.
> So that is admittedly not too far off the 2730 Mhz that I would anyway rarely reach.
> 
> So within only 10 months, the paste has dried, but maybe that is completely expected behaviour for a hard core tuner but for me that is a bit of a shock.
> Specs will indeed probably change will the smallest adjustments.
> 
> I tried the AMD Adrenalin Software tuning and noticed that, when using the default out-of-the-box preset modes (quiet, balanced, rage), rage performs worse than the one-click Toxic Boost option in the Sapphire TriXX software.
> If I go into Manual tuning mode with the Adrenaline AMD Software, then it breaks everything, the screen goes blue and I need to reboot. It looked quite tempting though as I was able to specify the max frequency, if that had worked, that is.
> 
> Coming back to your steps, I have (expectedly ) a long list of questions ;-)
> 
> *1) Thermal paste*
> Amazon UK -> "_One enjoy Liquid Metal Thermal Paste, 79 W/mK High Performance, Silver King Heatsink Paste, CPU for All Coolers, 1 Grams with Cleanser and Spreader_" OK ?
> I have the required alcohol to clean the surface of the chip, that I used when I built my machine back in June (Isopropanol + wipes)
> Should I spread the paste with the credit card technique or else put that little bean size ?
> 
> *2) Re-assembling the Pump*
> Noted your point about keeping the screws that needs to be evenly tightened to optimize spread of the compound. OK
> Noted your recommendation to turn the PCB upside down (I believe) so that the Asetek pump is resting on the desk and the screws come through the PCB, to apply even pressure more easily.
> _<<back of the PCB facing you with the pump resting on the desk>>_
> Not sure if I can see those 4 screws at the back, there is a star shape and a square that exposes the back of the chip from what I can see (strange the W and D marked spots)
> 
> Back plate of the Radeon AMD Sapphire 6900 XT Toxic Extreme (AIO cooled)
> View attachment 2553885
> 
> 
> View attachment 2553886
> 
> 
> 
> Question: it is then kind of no different from what I did when I installed my AIO on my CPU; if there is only one chip, the memory is cooled with the built-in fans.
> How does the surface of that GPU chip compare with that of the Ryzen 9 5900X CPU?
> It looks like the surface on which I would apply the paste is larger on the GPU compared with the processor I have, so I need a bit more paste.
> 
> Navi 21 die size = 520 mm²
> AMD Ryzen 9 5900X: under 300 mm²
> *3) Retention Bracket*
> Simply not sure what that retention bracket is, would have thought this is re-assembled last
> 
> Thanks
> 
> 
> View attachment 2553887


1) As ArchStanton suggested. What I will say is I recommend that you have the GPU running for 15 minutes or so at full load and put the liquid metal tube on top of the radiator. I find it spreads easier when it is warm. Also a warning, sometimes it can be a bit of a pain to spread and takes patience and gentle strokes, you'll get the hang of it but it can be a bit fiddly until you get used to it.

2) I've just taken apart my Toxic this morning to repaste it with liquid metal as I was using standard thermal paste previously. I'll share steps, screenshots etc below.

3) It'll be clear in the screenshots below. Brace yourself folks, wall of text and screenshots incoming...

*Sapphire Toxic 6900 XT AIO Disassembly, Re-paste and Reassembly guide* (sponsored by theregoesmywarranty.com)

As Juan Carlos likes to say in Far Cry 6, always use the right tool for the right job. Make sure you have a screwdriver set as you'll need a couple of different types of screw heads.

I also suggest you take note of GPU temperatures (Edge and Junction/Hotspot) running a game or benchmark so you have something to compare to. Try to keep ambient temperature similar so as to not affect the results. What we are looking for is a lower delta between edge and junction and a lower junction temperature overall.

Step1:
Remove the two screws from the PCI bracket.









Step2:
Remove the five screws from the shroud. Three on top, two on the bottom that are not shown below.









Step3:
Gently pull the shroud off. Be careful not to pull out the RGB, pump and fan wires which are all tucked in near the bottom right of the shroud.









Step4:
Turn the GPU over and remove the retention bracket. Two full turns per screw, following the X pattern in the order shown.









Step5:
Turn the GPU over again and gently lift the pump off the GPU die as shown below. Clean the thermal paste using a lint free cloth or similar and some isopropyl 99% alcohol or similar. Repeat for the copper baseplate on the pump.









Step6:
Clean GPU die and copper baseplate. I personally don't bother to clean the outsides of the GPU die as I'll be using liquid metal and I figure that gives it a little protection should a bit of liquid metal spill onto the nearby components. Some people clean it all up and put thermal tape over the components, it's up to you what you do here. If you are careful and don't apply too much liquid metal, my method works fine.









Step7:
Apply a half pea size drop of liquid metal. If you put on more than this, you can suck it back up using the liquid metal applicator shown in the second screenshot.


















Step8:
Start spreading the liquid metal. You can use either the brush or the q-tip; I prefer the q-tip as it gives more control, as with the brush it is possible to flick some off the GPU die. Use slow, long strokes to spread it. If you followed my earlier tip and heated up the tube by resting it on the GPU radiator under load, it should spread fairly easily.









Step9:
Once complete, it should look something like this. I personally don't recommend putting a layer on the copper baseplate, a thin coating should give the overall best results.









Step10:
Gently re-attach the pump to the GPU die.









Step11:
Turn the GPU over. Now you will see the problem I mentioned before with the Toxic: the pump's screw posts do not come through the PCB evenly unless you use a hand or finger to balance it all out. See the example below, and then how I applied pressure with my thumb to ensure it is even.


















Step12:
Secure the retention bracket in an X pattern following 1 > 2 > 3 > 4. Initially you want to do two full turns on each of the screws. Once each screw has had two full turns to ensure the retention bracket is secured, drop to one full turn at a time, keeping the X pattern going. This is the most difficult part; if not done correctly, you may get bad temperatures and contact issues. Just be patient, take your time and do not overtighten the screws. As soon as you feel resistance, stop. If one screw seems to require a lot more tightening than the others, stop and start again as something is wrong.









Step13:
Re-attach the shroud using the five screws. Apply pressure to the shroud area just before you put each screw in. It is easy to strip the screw head and/or screw hole if not done properly.









Step14:
Re-attach the PCI bracket screws.









Step15:
We are finished. Time to get on your knees and pray to the holy lord that the PC boots up. 😅









It took me three attempts to get a perfect mount and I've changed thermal paste on these Toxics a dozen times already over a few samples.

Initially I was using SYY 157 thermal paste. It's very thick and does a decent job as a placeholder. 
*Here is the GPU edge/junction temperature running Days Gone at 400W load, 2750/2850MHz, using SYY 157:*








Ambient Temp: 19c
Edge Temp: 56c
Junction Temp: 80c
Delta: 24c
Clock Frequency: 2747MHz (throttling a little as the GPU is hitting the 399W power limit)

Not bad, this was a lot better than the stock paste but still not ideal.

*Here is the GPU edge/junction temperature running Days Gone at 400W load, 2750/2850MHz, using Silver King Liquid Metal:







*
Ambient Temp: 20.5c
Edge Temp: 59c
Junction Temp: 70c
Delta: 11c
Clock Frequency: 2794MHz (no throttling as the GPU is not hitting the 399W power limit)

Excellent results. Edge has crept up a little, but as I believe I improved my overall contact on the mount, my junction temp has dropped 10c. Pretty happy with that.
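If you want to sanity-check your own before/after numbers, here is a minimal sketch of the delta arithmetic above (plain Python; the temperature values are just copied from my two runs, the script does not measure anything itself):

```python
# Sketch of the edge/junction delta comparison above.
# Values are from the two Days Gone runs; nothing is measured by this script.

def delta(edge_c: float, junction_c: float) -> float:
    """Difference between hotspot (junction) and edge temperature."""
    return junction_c - edge_c

syy157 = {"edge": 56, "junction": 80}        # SYY 157 paste run
liquid_metal = {"edge": 59, "junction": 70}  # Silver King liquid metal run

d_paste = delta(syy157["edge"], syy157["junction"])           # 24c
d_lm = delta(liquid_metal["edge"], liquid_metal["junction"])  # 11c

print(f"SYY 157 delta: {d_paste}c")
print(f"Liquid metal delta: {d_lm}c")
print(f"Junction improvement: {syy157['junction'] - liquid_metal['junction']}c")
```

A shrinking delta with a similar edge temperature is the sign you are after: it means heat is actually making it out of the die and into the cold plate.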

Hope this helps anyone thinking about changing paste on the Toxic. Just remember that this will almost certainly void your warranty and you could potentially damage the GPU so only do it if you are comfortable and agree to the risks.


----------



## deadfelllow

LtMatt said:


> And how tedious it is.
> 
> 
> 1) As ArchStanton suggested. What I will say is I recommend that you have the GPU running for 15 minutes or so at full load and put the liquid metal tube on top of the radiator. I find it spreads easier when it is warm. Also a warning, sometimes it can be a bit of a pain to spread and takes patience and gentle strokes, you'll get the hang of it but it can be a bit fiddly until you get used to it.
> 
> 2) I've just taken apart my Toxic this morning to repaste it with liquid metal as I was using standard thermal paste previously. I'll share steps, screenshots etc below.
> 
> 3) It'll be clear in the screenshots below. Brace yourself folks, wall of text and screenshots incoming...
> 
> *Sapphire Toxic 6900 XT AIO Disassembly, Re-paste and Reassembly guide* (sponsored by theregoesmywarranty.com)
> 
> As Juan Carlos likes to say in Far Cry 6, always use the right tool for the right job. Make sure you have a screwdriver set as you'll need a couple of different types of screw heads.
> 
> I also suggest you take note of GPU temperatures (Edge and Junction/Hotspot) running a game or benchmark so you have something to compare to. Try to keep ambient temperature similar so as to not affect the results. What we are looking for is a lower delta between edge and junction and a lower junction temperature overall.
> 
> _[Step-by-step guide, screenshots, and temperature results snipped; see the full post above.]_


Great post. This will help Sapphire 6900 XT EE victims like me and many others. Appreciate your work and help, Matt.


----------



## LtMatt

deadfelllow said:


> Great Post. This will help sapphire 6900xt ee victims like me and many others. Appreciate your work and help Matt.


Hopefully it helps other Toxic owners who decide to change the thermal paste.


----------



## deadfelllow

Hey guys,

I need your help. Lately I disassembled my Sapphire 6900 XT EE for repasting, and afterwards I noticed that my temps were worse than with the stock paste. I repasted many times to fix it, but that did not help. Last but not least, I noticed that the retention bracket screw heads are stripped (the mounting pressure was all over the place; when I manually pressed the PCB and AIO together with my hand, the temps were good without the screws). So I need new ones. Does anybody know that screw type?

Please help.


----------



## LtMatt

deadfelllow said:


> Hey guys,
> 
> I need your help. So, lately i dissambled my sapphire 6900xt ee for repasting. And after i repasting i noticed that my temps were worse than stock paste. And I repasted many times to fix that but it did not help. Last but not least I noticed that retention bracket screwheads are dead ( the mounting pressure was all over the place. I manually pushed the aio and pcb and temps were good w.o the screws ) . So i need that screws. Anybody knows that screw type?
> 
> Please help.
> 
> View attachment 2553952


Hopefully it is the screws and not the screw holes that have become worn, because if it is the holes you might need a new AIO.


----------



## deadfelllow

LtMatt said:


> Hopefully it is the screws and not the screw holes that may have become worn because if that is the case you might need a new AIO.


Nah. It's the screws, 99.99%. I just need new ones.


----------



## dagget3450

deadfelllow said:


> Nah. It's screws %99.99. I just need a new ones.


Those brackets look a lot like the ones usually found on the back of AMD GPUs. You might be able to try one and see if the screws are a match. Not sure about you, but I have a ton of old GPUs sitting around.

Good luck.
Edit: to clarify, I meant the screws only, not the bracket.


----------



## LtMatt

dagget3450 said:


> Those brackets look alot like the ones always on backside of AMD GPUs. You might be able to try one and see if the screws are a match. Not sure about you but I have a ton of old GPUs sitting around.
> 
> Good luck.
> Edit: to clarify I meant screws only not the bracket.


I said similar in a DM to deadfelllow. I have a MBA 6800 XT here and the bracket/screws look nigh on identical.


----------



## deadfelllow

LtMatt said:


> I said similar in a DM to deadfelllow. I have a MBA 6800 XT here and the bracket/screws look nigh on identical.





dagget3450 said:


> Those brackets look alot like the ones always on backside of AMD GPUs. You might be able to try one and see if the screws are a match. Not sure about you but I have a ton of old GPUs sitting around.
> 
> Good luck.
> Edit: to clarify I meant screws only not the bracket.


I don't have any spare GPUs. I just need the screw type, that's all.


----------



## LtMatt

deadfelllow said:


> I dont have any spare gpu's. I just need the screw types thats all


If you get desperate, an eBay spares/faulty GPU...


----------



## deadfelllow

LtMatt said:


> If you get desperate, Ebay spares/faulty gpu...


Fun fact: I'm living in Turkey. $1 equals 15 Turkish lira, which is very high. eBay is kinda expensive for me. Let's say a spare GTX 560 is $60, which is 900 Turkish lira (the minimum wage is 4,250 Turkish lira, which is about $280).


----------



## ZealotKi11er

What is the best block for reference 6900xt in terms of cooling, thermal pads, fitment?


----------



## dagget3450

Man, so I haven't had much time to test, but I did a quick mGPU run on Strange Brigade, one of the few titles I own that supports mGPU.

So at 11520x2160 (5Kx2 Eyefinity on a Crossover 44K in 2-way PBP mode) the FPS goes from unplayable to playable.

Single GPU @ 34 FPS avg.









2x GPU @ 65 FPS avg.
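As a quick back-of-the-envelope check, those two averages work out to roughly 96% scaling efficiency. A minimal sketch of that arithmetic (values taken straight from the runs above, nothing else measured):

```python
# Rough mGPU scaling estimate from the Strange Brigade averages above.
# Perfect scaling from one to two GPUs would double the average FPS.

single_fps = 34.0
dual_fps = 65.0

speedup = dual_fps / single_fps   # ~1.91x
efficiency = speedup / 2 * 100    # percent of a perfect 2x

print(f"Speedup: {speedup:.2f}x")
print(f"Scaling efficiency: {efficiency:.0f}%")
```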









Really wish I could do this in DX11 for some games I'd love to play like this.

How it looks on my monitor (phone cam, not the best):












deadfelllow said:


> Fun fact. I'm living in Turkey. 1$ equals to 15 Turkish liras which is very high. Ebay is kinda expensive for me. Lets say spare gtx560 is 60$ which is 900 turkish liras.( minimum wage is 4250 Turkish liras which is 280 $ D)


Bummer. Perhaps look locally, maybe a scrap/recycling facility like an e-scrap place?


----------



## LtMatt

dagget3450 said:


> Man, so i haven't had much time to test but i did a quick Mgpu run on Strange Brigade, one of the few titles that mgpu that i own.
> 
> so at 11520x2160 (5kx2 eyefinity on a crossover 44k in 2 way PBP mode) the fps goes from unplayable to playable.
> 
> sinlge gpu @ 34 fps avg.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 2x gpu @ 65 avg
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Really wish i could do this on dx11 for some games i'd love to play like this.
> 
> how it looks on my mointor: (phone cam not the best)
> View attachment 2553996
> 
> 
> 
> 
> 
> bummer, perhaps look locally, maybe a scrap/recycling facility like e-scrap?


Not bad, scaling is decent too. Is it smooth in mGPU?


----------



## CS9K

ZealotKi11er said:


> What is the best block for reference 6900xt in terms of cooling, thermal pads, fitment?


I'm happy with my EK Acetal + Nickel block. I had one bolt come loose on the front of the card, but no leaks, and when I disassembled it after 1yr of running, the nickel coating on the inside looked fine (just normal light tarnishing). Temps have been fantastic and fitment is tight...

So tight that I have only found one brand of 3rd party thermal pads that are soft enough to work as replacements for the EK pads that come with it: These arctic pads from Amazon https://www.amazon.com/dp/B086CFF6Y1

Based on the Shore 00 firmness measurements I could find, pretty much every higher-performing pad was MUCH harder than the EK block can handle; the EK block needs soft pads or else core-to-block contact will not be solid. The good news is, Performance PC's sells the EK pads if you need to replace them.

\_edit\_ Here's a picture of the whole mess of parts during my maintenance after one year of running. I used distilled water + EK Clear Cryofuel concentrate. Everything looks pretty dang good I reckon.

CPU block is a TechN AM4 block.


----------



## dagget3450

LtMatt said:


> Not bad, scaling decent too. Is it smooth in MGPU?


Honestly, for me it felt very smooth, but I am a bit forgiving at such high resolutions. Also, I should have read the message in the screenshots; I'll post the log details.

I got the resolution wrong earlier, it was actually 11520x3240.
Looking at the log notes, the average frametime was drastically reduced for 2 GPUs, practically cut in half, which would make it far more playable on 2 GPUs than on 1, since I'm guessing there might be some input lag on 1 GPU.


















In DX12 it's getting a bit higher FPS than Vulkan on 1 GPU, but I cannot get mGPU to run when using the benchmark.


----------



## rodac

LtMatt said:


> Hopefully it helps other Toxic owners who decide to change the thermal paste.


I had already used superlatives yesterday for your previous post and its level of detail, but here I am left short of words. That is unbelievable.
I just cannot believe what I see; you went as far as repeating the whole process yourself and posting the photos as well.
Yes, that is sure going to help me and many others who own the same GPU, simply because the same issue will likely exist with every single one of them.

In fact, I started researching again a lot tonight, unaware that you had already managed to find the time to do this whole article with steps and photos.
And I found some others (3 individuals) who posted on YouTube complaining about the Toxic GPU thermals, like you said.

I found a tutorial video on YouTube from a Limited Edition owner who also kindly covers the basics of opening up the shroud; he then applied standard paste with his fingers (that bit was a little unexpected).
Then I read more about that liquid metal paste and it appears to be highly conductive (well, it makes sense, it is metal), and I was going to steer away from this compound as I am not as experienced.
But now, seeing a concrete example with the exact same model and that specific liquid metal compound, it looks feasible.
I read that there are some issues with certain metals, aluminium it appears, or even nickel, not sure exactly; I expect that you will have looked into this of course.

Awesome post! Thumbs up.

PS: step 6, about the paste around the die, is really useful, great.


----------



## Raptors31

Hi guys, I got a 6900 XT ASUS Strix LC (the normal one). Any tips to help me boost that score a bit?


----------



## ZealotKi11er

CS9K said:


> I'm happy with my EK Acetal + Nickel block. I had one bolt come loose on the front of the card, but no leaks, and when I disassembled it after 1yr of running, the nickel coating on the inside looked fine (just normal light tarnishing). Temps have been fantastic and fitment is tight...
> 
> So tight that I have only found one brand of 3rd party thermal pads that are soft enough to work as replacements for the EK pads that come with it: These arctic pads from Amazon https://www.amazon.com/dp/B086CFF6Y1
> 
> Based on Shore00 firmness measurements that I could find, pretty much every higher-performing pad that I found was MUCH harder than the EK block can handle; the EK block needs soft pads or else core-to-block contact will not be solid. The good news is, Performance PC's sells the EK pads if you need to replace them.
> 
> \_edit\_ Here's a picture of the whole mess of parts during my maintenance after one year of running. I used distilled water + EK Clear Cryofuel concentrate. Everything looks pretty dang good I reckon.
> 
> CPU block is a TechN AM4 block.
> 
> View attachment 2554002


Opened mine to clean. Pinched the gasket and it leaked. Tried to put it back, but it's leaking all over. Bought a Heatkiller V. Let's see how it does.


----------



## dagget3450

ZealotKi11er said:


> Opened mine to clean. Pinched the gasket and it leaked. Tried to put it back but its leaking all over. Bought a Heatkiller V. Lets see how it does.


Bummer. I was happy with the EK copper full-cover block for reference cards; I am not picky and also don't care for RGB.

My temps seem okay, but the ambient has gone up due to warmer weather, so I am seeing high 50s/low 60s on the hotspot. I might revert back to stock and see what they are. I also have some temp variance on one block vs another, so not sure about that yet.

I am extremely sad because these were the cheapest at the time. Now they are way cheaper. Dangit.. lol








EK-Quantum Vector RX 6800/6900 - Copper + Plexi: a 2nd generation Vector GPU water block from the EK® Quantum Line, made for graphics cards based on the AMD® RDNA2™ architecture. It fits most reference PCB designs of the Radeon RX 6800, RX 6800 XT, and RX 6900 GPUs. (www.ekwb.com)


----------



## ZealotKi11er

dagget3450 said:


> bummer, i was happy with the ek copper full cover block for ref. i am not picky and also dont care for RGB.
> 
> My temps seems okay, but since the ambient has gone up due to warmer weather. I am seeing high 50s/low 60s on hotspot. i might revert back to stock and see what they are. I also have some temp variance on one block vs another so not sure about that yet.
> 
> i am extremely sad because these were the cheapest at the time. Now they are way cheaper. Dangit.. lol
> 
> 
> 
> 
> 
> 
> 
> 
> EK-Quantum Vector RX 6800/6900 - Copper + Plexi (www.ekwb.com)


With my current block I was hitting 95c hotspot at 360w.


----------



## dagget3450

ZealotKi11er said:


> With my current block I was hitting 95c hotspot at 360w.


Yikes, I take it that's not normal?


----------



## ZealotKi11er

dagget3450 said:


> Yikes, I take it that's not normal?


I would not think it's normal.


----------



## jonRock1992

LtMatt said:


> Here is my superposition score using my Toxic EE at max stock voltage, which is game stable
> 
> 2750/2850 @1.2v
> 22.3.1
> 2124Mhz + FT
> 2200 FCLK
> 399W/350A PL/TDC
> Windows 11 (screenshot says Win 10 for some reason)
> View attachment 2553748


Hey I finally got around to running that Superposition bench on 4k Optimized. This was with my 24/7 stable overclock.
2770 MHz / 2870 MHz core clock
2150 MHz fast-timings mem clock
2300 MHz GPU FCLK
450W/450A PL/TDC
Driver 23.3.2


----------



## LtMatt

jonRock1992 said:


> Hey I finally got around to running that Superposition bench on 4k Optimized. This was with my 24/7 stable overclock.
> 2770 MHz / 2870 MHz core clock
> 2150 MHz fast-timings mem clock
> 2300 MHz GPU FCLK
> 450W/450A PL/TDC
> Driver 23.3.2
> 
> View attachment 2554100


Very nice Jon! What voltage do you need for those game stable clocks in MPT? 

I do notice my min FPS are 10 FPS higher though.


----------



## jonRock1992

LtMatt said:


> Very nice Jon! What voltage do you need for those game stable clocks in MPT?
> 
> I do notice my min FPS are 10 FPS higher though.


I use 1262mV. I don't think the actual voltage is that high most of the time, though; my GPU seems to be very droopy. Been running these settings for a long time now. Passed over 40 loops of Time Spy GT2 with these settings.


----------



## LtMatt

jonRock1992 said:


> I use 1262mV. I don't think the actual voltage is that high most of the time though. My GPU seems to be very droopy. Been running these settings for a long time now. Passed over 40 loops of timespy gt2 with these settings.


Nice, the benefits of true water cooling allowing that voltage and keeping things nice and cool.


----------



## J7SC

jonRock1992 said:


> I use 1262mV. I don't think the actual voltage is that high most of the time though. My GPU seems to be very droopy. Been running these settings for a long time now. Passed over 40 loops of timespy gt2 with these settings.


Very nice results! I'm slowly getting there with my droopy and v-leaky, low-ASIC chip (I like them that way). For now, I'm limiting it to 1.218v, though HWiNFO always reports a lower peak. Without extensive water cooling, I wouldn't even try that.


----------



## jonRock1992

J7SC said:


> Vey nice results ! I'm slowly getting there with my droopy and v-leaky, low-Asic chip (I like them that way). For now, I'm limiting it to 1.218v, though HWInfo always reports a lower peak. Without extensive water-cooling, I wouldn't even try that.


My hotspot is still in the 60's even at that voltage while gaming lol. Kinda crazy. I'm just using a single slim 360mm rad that's dedicated to the GPU. My CPU uses an AIO.


----------



## J7SC

jonRock1992 said:


> My hotspot is still in the 60's even at that voltage while gaming lol. Kinda crazy. I'm just using a single slim 360mm rad that's dedicated to the GPU. My CPU uses an AIO.


Yeah, hotspot is usually the only sensor on my water-cooled 6900 XT that goes beyond the 40C range (62C or so) in these runs... more a function of peak watts/amps than anything else.


----------



## LtMatt

J7SC said:


> droopy and v-leaky (I like them that way).


Is that also how you like your women?


----------



## J7SC

LtMatt said:


> Is that also how you like your women?


No, not really...


----------



## LtMatt

J7SC said:


> No, not really...


All (poor) jokes aside, I think a little bump over stock voltage should be okay, just my personal opinion. Even with 1.212v set, the actual voltage at full load only reaches around 1.150v or thereabouts.


----------



## J7SC

LtMatt said:


> All (poor) jokes aside, I think a little bump over stock voltage should be okay, just my personal opinion. Even 1.212v at full load only goes up to around 1.150v or thereabouts.


...no worries, I took it as a joke. Re: GPU voltage and potential inaccuracies with the otherwise excellent HWiNFO, I do have voltmeters with probes and such, but I fortified the card not only with a water block (plus thermal putty) on the front, but on the back as well, along with an additional giant heatsink. Put differently, access for voltage measurement is not so easy now without redoing everything. And I keep telling myself that this is the work-machine GPU, so I really shouldn't go overboard... though I don't always listen.


----------



## ZealotKi11er

Are you guys getting ready for 500-700w GPUs?


----------



## dagget3450

ZealotKi11er said:


> Are you guys getting ready for 500-700w GPUs?


Seems counterproductive to the tradition of efficiency.


Oh also, I was a bit happy with this:









3DMark.com search result (www.3dmark.com)


----------



## cfranko

dagget3450 said:


> seems counter productive to tradition of efficiency.
> 
> 
> Oh also, i was a bit happy with this:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 3DMark.com search result (www.3dmark.com)


How many games support multi-GPU? I think it's pretty pointless.


----------



## ArchStanton

cfranko said:


> How many games support multi gpu? I think its pretty pointless.


I feel you, but if he enjoys it in a financially responsible way what's the harm? I mean, fake tits are pointless too, but **** I still look at them sometimes 🤷‍♂️ 😝.


----------



## dagget3450

cfranko said:


> How many games support multi gpu? I think its pretty pointless.


Yes, from a gaming perspective mGPU is very niche and/or pointless due to support. I have not promoted mGPU gaming; in fact, as mentioned in my previous posts, I pulled GPUs from 2 systems to do this testing/benching per the help/interests of another thread on here.

If you're bashing a 3DMark bench score due to mGPU and citing gaming as a strawman argument, you may want to reconsider; when it comes to benching, no one cares about anything other than the number/result achieved. The world records for, say, memory frequency or CPU frequency aren't meant to be usable for anything other than a number/result to top a chart.


----------



## J7SC

FYI, I still run mGPU (2x 2080 Ti) on one of my systems in CFR mode... works quite well (when it works).

...elsewhere, I seem to have found some good settings re: no clock stretching with my little low-ASIC XTX. This wasn't a banzai run (re: GPU voltage, clocks, peak PL), but it wasn't exactly light idling either...


----------



## CS9K

J7SC said:


> FYI, I still run mGPU (2x 2080 TI) on one of my systems in CFR mode...works quite well (when it works)
> 
> ...elsewhere, I seem to have found some good settings re. no clock stretching with my little low-Asic XTX. This wasn't a banzai run (re. GPUv, clocks, peak PL), but it wasn't exactly light idling either...


Is this with or without Temp Dependent Vmin?


----------



## J7SC

CS9K said:


> Is this with or without Temp Dependent Vmin?


...this is a 'partial' TempDepV...1.218v set w/ MPT settings allowing for down-clocking & -volting.


----------



## alceryes

ZealotKi11er said:


> Are you guys getting ready for 500-700w GPUs?


I think this will catch a lot of people off guard.
Mom-n-pop computer stores and even online enthusiast builders will need to spend twice as much on their PSUs (they usually skimp on the PSU) and they'll probably still be hit with returns due to OCP trips and PSUs that just aren't up to the task, regardless of them having a high enough wattage rating.

I'm fine with my slight overclock and undervolt on my 6900 XT for now. In a couple years when I can't play the games I want to play, at the setting I want to play at, I'll think about adding a block to it to potentially get another ~10% performance boost.


----------



## J7SC

alceryes said:


> I think this will catch a lot of people off guard.
> Mom-n-pop computer stores and even online enthusiast builders will need to spend twice as much on their PSUs (they usually skimp on the PSU) and they'll probably still be hit with returns due to OCP trips and PSUs that just aren't up to the task, regardless of them having a high enough wattage rating.
> 
> I'm fine with my slight overclock and undervolt on my 6900 XT for now. In a couple years when I can't play the games I want to play, at the setting I want to play at, I'll think about adding a block to it to potentially get another ~10% performance boost.


While I enjoy the odd bench, I think your comment _"In a couple years when I can't play the games I want to play, at the setting I want to play at..."_ is spot-on. In addition to that, my XTX is also the daily work-horse and I tend to keep GPUs for a long time...for now, my top two GPUs are both capable of feeding the OLED C1 at a decent clip, and that is all that is required for now, apart from reliability.


----------



## dagget3450

alceryes said:


> I think this will catch a lot of people off guard.
> Mom-n-pop computer stores and even online enthusiast builders will need to spend twice as much on their PSUs (they usually skimp on the PSU) and they'll probably still be hit with returns due to OCP trips and PSUs that just aren't up to the task, regardless of them having a high enough wattage rating.
> 
> I'm fine with my slight overclock and undervolt on my 6900 XT for now. In a couple years when I can't play the games I want to play, at the setting I want to play at, I'll think about adding a block to it to potentially get another ~10% performance boost.


I used to think and do this type of strategy, but I've found that getting a waterblock later is really difficult nowadays. A good example is my Vega FE, which is basically a reference Vega 56/64; it has needed water since I got it, and I can't find any waterblocks for it now....

Honestly, IMO it's better to get a block soon after the card, when prices are reasonable, so later down the road you're not paying out the yang.

Also, since I put my 6900 XT on water it's so much more stable when overclocked. At the least, I am happy with just the performance difference once on water.


----------



## rodac

LtMatt said:


> ....... 2) I've just taken apart my Toxic this morning to repaste it with liquid metal as I was using standard thermal paste previously. I'll share steps, screenshots etc below.
> ...... Excellent results. Edge has creeped up a little, but as I believe I improved my overall contact on the mount, my junction temp has dropped 10c. Pretty happy with that.
> Hope this helps anyone thinking about changing paste on the Toxic. Just remember that this will almost certainly void your warranty and you could potentially damage the GPU so only do it if you are comfortable and agree to the risks.


@LtMatt 
I am planning to do this re-paste somewhere around Easter time; I want to take my time to minimize potential issues and avoid the broken-screws issue reported by someone else.
I will make sure that I use the right tools (the precision screwdriver set provided in the Toxic's box) based on your Far Cry quote ;-) I have not played any of those games, need to give them a go soon.

I spent many hours searching for an alternative paste other than liquid metal. There are warnings and disclaimers all over the place about the risks of liquid metal; I have never repasted a GPU and this one is under a year old, so I am not willing to take that risk with such an expensive GPU.

Based on your 'how to' guide, it looks like I am very much in luck with the Sapphire Toxic EE, since there is no need to remove the heatsink, so this is going to be much easier than with a more conventional GPU with 3 fans.

In fact, I will probably end up only buying GPUs with AIOs in the future. Quite a few people are against AIOs, typically because of longevity anxiety, but you can argue that by the time the AIO packs up, the product will be obsolete anyway.

So I looked at a number of thermal paste reviews and found out that der8auer is in fact the man behind the Thermal Grizzly paste brand that you see everywhere coming up as the best choice in many reviews. There are quite a number of negative reviews on Amazon though; the thing which stands out is an effect called 'pump-out', which the Grizzly Kryonaut would be prone to.

<< Pump-out: during thermal cycles the cooler and the CPU/GPU die expand at different rates, so the thermal paste gets pumped out from between them. This leads to worse cooling performance. >>

It appears that the paste you first chose (SYY 157) is very thick. According to one of the Amazon reviewers, the thicker the paste, the less likely this pump-out issue is to occur, especially with GPUs. I do not know if this statement is true. The drawback, of course, is that a thick paste is very difficult to apply, a recurring cause for complaints.

So eventually I went for Thermalright TF8 today, bought from Amazon (carbon-based, 2 grams), but I could just as well have purchased the same SYY 157 paste and would probably end up with similar results.
I cannot wait to see the results, will keep you posted. Thanks


----------



## alceryes

J7SC said:


> While I enjoy the odd bench, I think your comment _"In a couple years when I can't play the games I want to play, at the setting I want to play at..."_ is spot-on. In addition to that, my XTX is also the daily work-horse and I tend to keep GPUs for a long time...for now, my top two GPUs are both capable of feeding the OLED C1 at a decent clip, and that is all that is required for now, apart from reliability.


I used to keep GPUs for a while but, over the last three years, I've gone through four.
A few years ago I had a Vega 64 with a Morpheus cooler and Noctua fans that served me well, but then I took advantage of an Amazon Renewed mistake. Early on in the GPU price skyrocket, Amazon Renewed was slow in adjusting their Renewed prices to reflect the hike. I bought a like-new Titan Xp and sold my Vega 64 at a premium, so it was almost a wash. It was a no-brainer since it came with a 90-day no-questions-asked return window.
Then, early last year, I found a refurbished (from Dell) Alienware with a Dell version RTX 3080 for a reasonable price. Unfortunately, that card had hardware issues so I returned it. Other than my time lost, I got a full refund.
Lastly, I found a great ebay deal for my current RX 6900 XT, sealed, direct from AMD. I only bit because the seller literally lives a mile away and actually included the original receipt (for warranty purposes). I sold my Titan Xp for $800 so my net loss was actually under MSRP for the 6900 XT.

Even though the prices have been disheartening, I've actually taken advantage of the situation to get a great card.


----------



## alceryes

dagget3450 said:


> I used to think and do this type of strategy but I found getting a waterblocks later is really difficult now days. A good example is my Vega FE which is basically a ref vega56/64 has needed water since I got it. I can't find any waterblocks for it now....
> 
> Honestly it's better IMO to get a block soon after the card when the prices are reasonable. So later down the road your not paying out the yang.
> 
> Also since I put my 6900xt on water it's so much more stable when overclocked. At the least I am happy with just the performance difference once on water.


I agree to a certain extent.
I got my Vega 64 late in the game and used it on its stock blower (vacuum-cleaner noise levels) for a while before I just got fed up with the noise. Luckily, I was able to get the Morpheus cooler, but I realized that if I had purchased it 6-8 months prior, I would've saved $20-30. There weren't many stores that still had it when I bought, and the price was much higher.


----------



## KingPies

Hello everyone. I’m new here. Long time lurker, first time poster. Be kind.
I have a question or two and figure this is the right place.
I have a 6900 XT Toxic EE like a few others in here. She is approximately 6 months old and I am experiencing noticeable degradation (circa 20C) in hotspot temp/boost clocks/synthetic scores since install. For similar ambient conditions, the hotspot has gone from the mid 70s to the mid 90s running the stock P BIOS. Modifying the fan curve still nets mid 90s on the stock P BIOS. Looking at the detailed data (HWiNFO output), the hotspot quickly goes to the mid 90s and stays there throughout TS/TSE tests (before returning to very cool when not under load).
I have tried some tuning with the assistance of a (more) knowledgeable friend who is on his second Toxic EE after a dead pump, and even with an undervolt I am seeing high temps and low scores/boosting. Using Toxic Boost I don't really see close to the stated 2730, even for short bursts.
I suspect it could be the paste dying, but I'm also considering an RMA (repaste = bye bye warranty). Am I on track, or is an RMA a waste of time?

screeny from yesterday








Screeny from october


----------



## rodac

KingPies said:


> Hello everyone. I’m new here. Long time lurker, first time poster. Be kind.
> I have a question or two and figure this is the right place.
> I have a 6900xt toxic ee like a few others in here. She is approximately 6 months old and I am experiencing some reasonable degradation (circa 20c) in hot spot temp/boost clocks/synthetic scores since install. For similar ambient conditions, I am seeing a hotspot from mid 70’s to now mid 90’s running the stock p bios. Modifying the fan curve still nets mid 90’s at stock p bios. Looking at the detailed data (hwinfo output) hotspot quickly goes to mid 90’s and is maintained throughout ts/tse tests (before returning to very cool when not under load).
> I have tried some tuning with the assistance of a (more) knowledgeable friend who is on his second toxic ee after a dead pump and even with under volt I am seeing high temps, and low scores/boosting. Using toxic boost I don’t really see close to the stated 2730, even for short bursts.
> I suspect it could be paste dying but are considering rma (repaste = bye bye warranty) Am I on track or is rma a waste of time?
> 
> screeny from yesterday
> Screeny from october


Similar with my Toxic EE, but mine reached peaks at 2730 only very intermittently at first; an average of 2.7K was never likely to be possible anyway. Now, 10 months later, I only get 2692 max. The peak hotspot once showed 100, and at full power it fluctuates between 85 and 97 degrees.
I cannot comment on whether you should, or even can, RMA. It's been a very good GPU so far. Your Time Spy, I dare say even in an overclockers' forum, is in line with the out-of-the-box specs even if the magic figure of 2.7K does not come up. Personally I will see if I can follow @LtMatt 's tutorial to re-apply paste, as it is likely that better thermals will yield a bit more performance. From what I could see in my Time Spy, the clock speed is not the only indicator of performance.


----------



## KingPies

rodac said:


> similar with my toxic EE, but mine reached peaks at 2730 very intermittently at first, not likely to be possible to get an average 2.7k anyway. Now 10 months later, only get 2692 max , the peak hot spot once showed 100 but at full power fluctuates between 85 and 97 degrees.
> cannot comment if you should or even can RMA or not. A very good gpu so far. Your time spy , I dare to say in an overclockers forum is in line with the out of the box specs even if the magic figure of 2.7k does not come up. Personally I will see if I can follow @LtMatt ’s tutorial to re apply paste as it is likely that better thermals will yield a bit more performance. The clock speed from what I could see in my time spy is not the only indicator of performance.


Thanks for the reply. I did have it up to 24.3K in TS back in the day (min/max/PL only) but seem to have dropped to a max of ~23.3K since trying again this past week using driver/Wattman only. It almost seems to need an undervolt now with max +10% PL, whereas previously it was max volt/PL. Strange unit (and yes, TS... just using it as a tool).


----------



## LtMatt

KingPies said:


> Hello everyone. I’m new here. Long time lurker, first time poster. Be kind.
> I have a question or two and figure this is the right place.
> I have a 6900xt toxic ee like a few others in here. She is approximately 6 months old and I am experiencing some reasonable degradation (circa 20c) in hot spot temp/boost clocks/synthetic scores since install. For similar ambient conditions, I am seeing a hotspot from mid 70’s to now mid 90’s running the stock p bios. Modifying the fan curve still nets mid 90’s at stock p bios. Looking at the detailed data (hwinfo output) hotspot quickly goes to mid 90’s and is maintained throughout ts/tse tests (before returning to very cool when not under load).
> I have tried some tuning with the assistance of a (more) knowledgeable friend who is on his second toxic ee after a dead pump and even with under volt I am seeing high temps, and low scores/boosting. Using toxic boost I don’t really see close to the stated 2730, even for short bursts.
> I suspect it could be paste dying but are considering rma (repaste = bye bye warranty) Am I on track or is rma a waste of time?
> 
> screeny from yesterday
> 
> Screeny from october


Sounds like the paste has dried, happened to my Toxics too. Warranty voided, but temperatures drastically improved with a re-paste. 

More information here if you'd like to try it.


----------



## KingPies

LtMatt said:


> Sounds like the paste has dried, happened to my Toxics too. Warranty voided, but temperatures drastically improved with a re-paste.
> 
> More information here if you'd like to try it.


Thanks matt. Seen the post and will probably give it a go.


----------



## LtMatt

KingPies said:


> Thanks matt. Seen the post and will probably give it a go.


Good luck, let us know how it goes with before and after pics!


----------



## KingPies

LtMatt said:


> Good luck, let us know how it goes with before and after pics!


Will do. Just deciding on paste. Any recs? (Not liquid metal)


----------



## LtMatt

KingPies said:


> Will do. Just deciding on paste. Any recs? (Not liquid metal)


Best chance of not voiding warranty is to buy the warranty stickers from Ali Express so you can replace the W and D with brand new ones, and to then use a thick grey paste similar to the stock paste.
Gelid GC Extreme or SYY 177 meet that criteria. There might be alternatives that others here can recommend that are similar to those mentioned.


----------



## KingPies

LtMatt said:


> Best chance of not voiding warranty is to buy the warranty stickers from Ali Express so you can replace the W and D with brand new ones, and to then use a thick grey paste similar to the stock paste.
> Gelid GC Extreme or SYY 177 meet that criteria. There might be alternatives that others here can recommend that are similar to those mentioned.


SYY 157?

I ordered some SYY 157 and some Kryonaut.


----------



## D1g1talEntr0py

KingPies said:


> Will do. Just deciding on paste. Any recs? (Not liquid metal)


Gelid Extreme


----------



## alceryes

LtMatt said:


> Best chance of not voiding warranty is to buy the warranty stickers from Ali Express so you can replace the W and D with brand new ones, and to then use a thick grey paste similar to the stock paste.
> Gelid GC Extreme or SYY 177 meet that criteria. There might be alternatives that others here can recommend that are similar to those mentioned.


Just an FYI: if you live in the USA and bought the card there, manufacturers can NOT use those stickers as evidence to deny a warranty claim.
Commerce laws in other countries vary, of course.


----------



## LtMatt

alceryes said:


> Just an FYI. If you live and bought the card in the USA, manufacturers can NOT use those stickers as evidence to deny a warranty claim.
> Commerce laws in other countries vary, of course.


Yes and that's one thing to love about the USA. Sadly I do not think that applies in the UK, and perhaps Europe too.


----------



## J7SC

KingPies said:


> Will do. Just deciding on paste. Any recs? (Not liquid metal)





D1g1talEntr0py said:


> Gelid Extreme


...also very happy with GelidEx on the die (+ fyi, thermal putty on the VRAM / custom w-block)


----------



## CS9K

J7SC said:


> ...also very happy with GelidEx on the die (+ fyi, thermal putty on the VRAM / custom w-block)


Thirded on the Gelid GC-Extreme @KingPies; it has been _fantastic_ at resisting pump-out and is my go-to for bare-die applications like on today's spicy a.f. GPU cores.

I even made a video about how to spread it, since GC-Extreme behaves like a non-Newtonian fluid (the more force you use to spread it, the more it resists that force; slow and steady is the way).

Video at the forum link below:









[Official] AMD Radeon RX 6900 XT Owner's Club


Thank you. I will see tomorrow. The pads from EK are 1.00mm on all Liquid Devil and Vector RD blocks. Your block mount has to be the issue. Thinner pads could mean that other components like the memory and VRM won't make good contact anymore. When mounting your block, make sure the card is flat...




www.overclock.net


----------



## LtMatt

CS9K said:


> Thirded on the Gelid GC-Extreme @KingPies; it has been _fantastic_ at resisting pump-out and is my go-to for bare-die applications like on today's spicy a.f. GPU cores.
> 
> I even made a video about how to spread it, since GC-Extreme behaves like a non-Newtonian fluid (the more force you use to spread it, the more it resists that force; slow and steady is the way).
> 
> Video at the forum link below:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] AMD Radeon RX 6900 XT Owner's Club
> 
> 
> Thank you. I will see tomorrow. The pads from EK are 1.00mm on all Liquid Devil and Vector RD blocks. Your block mount has to be the issue.Thinner pads could mean that other components like the memory and VRM won’t make good contact anymore. When mounting your block, make sure the card is flat...
> 
> 
> 
> 
> www.overclock.net


Something quite satisfying about watching thermal paste being spread across a naked die. 🤭


----------



## J7SC

LtMatt said:


> Something quite satisfying about spreading and watching thermal paste being spread across a naked die. 🤭


...even with IHS on CPUs rather than naked die(s), there's a lot of fun to be had...I've done the GelidEX spread on more than one 'giant' Threadripper IHS; very satisfying indeed 

EDIT...somebody get me my beyond-2150 MHz VRAM fun


----------



## KingPies

Sounds like a plan. Thanks all


----------



## ArchStanton

LtMatt said:


> Something quite satisfying about spreading and watching thermal paste being spread across a naked die.


Lay back on the couch please. Now tell me, in your own words, how would you describe your relationship with your mother?


----------



## LtMatt

I've figured out what was causing my 30-40 MHz lower effective clock speed.








As soon as I unchecked that using MPT, no more effective clock dropping. Even in Timespy.

I was playing Halo Infinite earlier and noticed the effective clock dropping, which is odd, as I've never noticed it in various games before. It was similar to what I saw in Time Spy, but not in Firestrike. This is despite temps being 25C+ below throttle temps and power draw 50W below my power limits.

I unchecked this option, restarted and ran Halo Infinite and Timespy and effective clock is now close to reported clock speed. 4K max settings.









Screenshot is washed out as I had HDR on.
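If anyone wants to check their own logs for the same thing, here's a rough Python sketch of how you could compare reported vs effective clock from a HWiNFO CSV export. The column names here are assumptions on my part, so match them against your actual log header before running it:

```python
# Hypothetical HWiNFO column names -- check your own log header.
REPORTED_COL = "GPU Clock [MHz]"
EFFECTIVE_COL = "GPU Effective Clock [MHz]"

def clock_stretch_report(rows, threshold_mhz=30.0):
    """Return (worst_delta, stretched_fraction) for reported vs effective clock.

    rows: iterable of dicts (e.g. csv.DictReader over the log export).
    A sample counts as 'stretched' when the effective clock trails the
    reported clock by more than threshold_mhz.
    """
    deltas = []
    for row in rows:
        try:
            reported = float(row[REPORTED_COL])
            effective = float(row[EFFECTIVE_COL])
        except (KeyError, ValueError):
            continue  # skip malformed samples
        deltas.append(reported - effective)
    if not deltas:
        return 0.0, 0.0
    stretched = sum(1 for d in deltas if d > threshold_mhz)
    return max(deltas), stretched / len(deltas)

# Demo with two synthetic samples instead of a real log file:
samples = [
    {REPORTED_COL: "2500", EFFECTIVE_COL: "2495"},
    {REPORTED_COL: "2500", EFFECTIVE_COL: "2450"},
]
worst, frac = clock_stretch_report(samples)
print(f"worst delta: {worst:.0f} MHz, stretched samples: {frac:.0%}")
# → worst delta: 50 MHz, stretched samples: 50%
```

A sustained gap of more than a handful of MHz under load is the clock stretching we're talking about; the odd one-sample spike is just sensor noise.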


----------



## CS9K

LtMatt said:


> I've figured out what was causing my 30-40Mhz lower effective clock speed.
> 
> As soon as I unchecked that using MPT, no more effective clock dropping. Even in Timespy.
> 
> I was playing Halo Infinite earlier and I noticed the effective clock dropping which is odd as I've never noticed it various games before. It was similar to what I noticed in Timespy, but not in Firestrike. This is despite temps being 25c+ from throttle temps and power draw 50W below my power limits.
> 
> I unchecked this option, restarted and ran Halo Infinite and Timespy and effective clock is now close to reported clock speed. 4K max settings.
> 
> 
> Screenshot is washed out as I had HDR on.


Yus, the super-aggressive "Deep Sleep" function has caused all manner of issues in the past. I had a particular beef with Unity DX11 games in Exclusive Fullscreen: with DS enabled, the GPU would never clock up to what it needed to sustain 1440p120 and 4K120. Disable Deep Sleep? Things sailed along smooth as butter.

That was two years ago >.<


----------



## LtMatt

CS9K said:


> Yus, the super-aggressive "Deep Sleep" function has caused all manner of issues in the past. I had particular beef with Unity DX11 games in Exclusive Fullscreen: with DS enabled, the GPU would never clock-up to what it needed to do to sustain 1440p120 and 4k120. Disable deep sleep? Things sailed along smooth as butter.
> 
> That was two years ago >.<


Yes, got to love the power-saving features that can sometimes impact performance, or at the very least clock speed.

What I will say, though, is that this is the first game where I've noticed a problem with the clock speed, and I couldn't really see any performance deficit with it (DS) enabled.

However, and I'm sure you all agree, I still prefer to see my effective clock within a few MHz of the reported clock speed.


----------



## J7SC

LtMatt said:


> I've figured out what was causing my 30-40Mhz lower effective clock speed.
> 
> As soon as I unchecked that using MPT, no more effective clock dropping. Even in Timespy.
> 
> I was playing Halo Infinite earlier and I noticed the effective clock dropping which is odd as I've never noticed it various games before. It was similar to what I noticed in Timespy, but not in Firestrike. This is despite temps being 25c+ from throttle temps and power draw 50W below my power limits.
> 
> I unchecked this option, restarted and ran Halo Infinite and Timespy and effective clock is now close to reported clock speed. 4K max settings.
> 
> Screenshot is washed out as I had HDR on.





LtMatt said:


> Yes got to love the power saving features that can sometimes impact performance, or at the very least clock speed.
> 
> What I will say though is this is the first game I've noticed a problem with the clock speed, and I couldn't really notice any performance deficit with it (DS) enabled.
> 
> However, and I'm sure you all agree, I still prefer to see my effective clock within Mhz of the reported clock speed.


...this is actually what hardwareluxx said on their site (purple frame below). Oddly enough, my 6900's effective and reported clocks often match or are extremely close even without those features disabled, including prior to MPT use when the card was new. It's a bit of a strange card, but I like it...


----------



## D1g1talEntr0py

LtMatt said:


> Yes got to love the power saving features that can sometimes impact performance, or at the very least clock speed.
> 
> What I will say though is this is the first game I've noticed a problem with the clock speed, and I couldn't really notice any performance deficit with it (DS) enabled.
> 
> However, and I'm sure you all agree, I still prefer to see my effective clock within Mhz of the reported clock speed.


I tried this out and while my effective clock was in step with my reported clock speed, I got hitching in The Division 2 that I had never experienced before. Frame rates were the same, but some of my frame times were jumping up and down. It was strange. 
This is just my personal experience and maybe specific to my card or something else in my system. But I turned DS_GFXCLK back on and the problem went away.


----------



## cfranko

I've been mining/gaming with my 6900 XT since I bought it (1 year ago); I left it mining whenever I was not gaming. When I first bought the card it mined stably at a 2134 memory clock; today the maximum stable is 2126, so I think my memory has degraded. Memory temp is around 84C while mining. Any thoughts on this?


----------



## LtMatt

D1g1talEntr0py said:


> I tried this out and while my effective clock was in step with my reported clock speed, I got hitching in The Division 2 that I had never experienced before. Frame rates were the same, but some of my frame times were jumping up and down. It was strange.
> This is just my personal experience and maybe specific to my card or something else in my system. But I turned DS_GFXCLK back on and the problem went away.


Interesting, thanks for sharing and something for folks to keep an eye out for who disable this feature.


----------



## LtMatt

D1g1talEntr0py said:


> I tried this out and while my effective clock was in step with my reported clock speed, I got hitching in The Division 2 that I had never experienced before. Frame rates were the same, but some of my frame times were jumping up and down. It was strange.
> This is just my personal experience and maybe specific to my card or something else in my system. But I turned DS_GFXCLK back on and the problem went away.


Need to do some more testing on this. Saw some stuttering earlier in Warzone and now wondering if this disabled feature may have been the cause.

I wonder if perhaps something else might need to be disabled also in MPT so as to not cause this issue... 🤔


----------



## Godhand007

alceryes said:


> Just an FYI. If you live and bought the card in the USA, manufacturers can NOT use those stickers as evidence to deny a warranty claim.
> Commerce laws in other countries vary, of course.


That might be true, but it doesn't mean they won't make your life hell before they honor their warranty.


----------



## D1g1talEntr0py

LtMatt said:


> Need to do some more testing on this. Saw some stuttering earlier in Warzone and now wondering if this disabled feature may have been the cause.
> 
> I wonder if perhaps something else might need to be disabled also in MPT so as to not cause this issue... 🤔


Possibly. Maybe all of the DS_ features need to be disabled.


----------



## rodac

I keep monitoring temperatures using TechPowerUp and Excel to review the stats, ahead of my planned re-pasting with MX-4 (to play it very safe ;-).
On my Sapphire Toxic Extreme, I've now noticed that the GPU does actually reach 2700 MHz (unlike what I said earlier, but hunting for figures by eye in a text file is not really the cleverest approach).
So it does sometimes reach 27xx, but only in a single sudden 1-second burst before or right after a benchmarking session in Cyberpunk 2077 with ray tracing, which seems to push frequencies higher than Time Spy does.
Then I checked the difference between the GPU temp and the hotspot temp and noted a difference of up to 39C between them. How is this even possible? There must be something wrong for such a difference to exist, though it only lasts a very short time and then the delta drops back to 30C, so maybe it is OK....
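Rather than eyeballing the text file, a quick Python sketch can pull the edge-to-hotspot delta straight out of the sensor log. The column names below are a guess on my part (GPU-Z and HWiNFO label these slightly differently), so adjust them to whatever your log actually uses:

```python
# Hypothetical sensor-log column names -- check your own log header first.
EDGE_COL = "GPU Temperature [C]"
HOTSPOT_COL = "Hot Spot [C]"

def temp_delta_stats(rows):
    """Return (max_delta, mean_delta) of hotspot-minus-edge across a sensor log.

    rows: iterable of dicts (e.g. csv.DictReader over the log export).
    """
    deltas = []
    for row in rows:
        try:
            edge = float(row[EDGE_COL])
            hotspot = float(row[HOTSPOT_COL])
        except (KeyError, ValueError):
            continue  # skip malformed samples
        deltas.append(hotspot - edge)
    if not deltas:
        return 0.0, 0.0
    return max(deltas), sum(deltas) / len(deltas)

# Demo with two synthetic samples instead of a real log file:
samples = [
    {EDGE_COL: "60", HOTSPOT_COL: "90"},
    {EDGE_COL: "62", HOTSPOT_COL: "101"},
]
print(temp_delta_stats(samples))  # → (39.0, 34.5)
```

The mean delta is the number worth watching over time: a brief 39C spike is one thing, but a mean that creeps up month over month points at drying paste or a degrading mount.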


----------



## ZealotKi11er

Just got my new block. Can't install it until I clean my loop.
Got this one: HEATKILLER V for RX 6800/6900XT - ACETAL, 134,95 €

Just from the installation:
The mounting system and screws are far better than anything I have used before.
The thermal pads are also perfectly pre-cut and look high quality.
They also include Thermal Grizzly thermal paste.

Can't wait to see what the temps are like.


----------



## LtMatt

rodac said:


> I keep monitoring temperatures usingTechPowerUp and Excel to review stats; ahead of my planned re-pasting with MX4 (to go safe very safe ;-)
> On my Sapphire Toxic Extreme, I now noted that the GPU does actually reaches up 2700 Mhz (unlike what I said earlier but finding figures with a bare eye in a text file is not really the cleverest approach)
> So it sometimes does reach 27xx but does so in one single sudden 1 second burst before or right after a benchmarking session in Cyberpunk 2077 with RayTracing that seems to push frequencies higher than Time Spy.
> Then I checked the difference between the GPU temp and Hotspot temp and I noted difference of 'up to' 39 C degrees in between. How is this even possible, there must be something wrong for such a difference to exist, though for a very short time and then diff goes down to 30 C, maybe it is OK....
> 


It's what can happen when the paste dries out and/or contact is not very good. It's time to take the plunge and re-paste; things can only get better, so long as you don't break anything.


----------



## dagget3450

Is the 6950 XT going to be added to this thread? Seems like the same card, from the stats I saw.


----------



## jonRock1992

dagget3450 said:


> Is the 6950xt going to be added to this thread? Seems like same card from stats I saw.


I'm not sure, but I really hope that 6950 XT vBIOSes can be flashed to XTX-H GPUs. I need a vBIOS configured for the 18 Gbps memory chips that doesn't have a USB-C output. My GPU lacks the USB-C output, and that causes my system not to POST if I flash the 6900 XT LC vBIOS.


----------



## OCmember

How's the Sapphire 6900 XT Toxic air cooled? Currently looking at it or a Sapphire 6900 XT Nitro


----------



## ZealotKi11er

jonRock1992 said:


> I'm not sure, but I really hope that 6950XT vBIOS's can be flashed to XTX-H GPU's. I need a vbios configured for the 18 Gbps mem chips that doesn't have a USB-C output. My GPU lacks the USB-C output, and it causes my system not to post if I flash the 6900 XTLC vBIOS.


The 6950 XT will most likely have a USB-C output if it shares the same PCB as the 6900 XT.


----------



## Sufferage

OCmember said:


> How's the Sapphire 6900 XT Toxic air cooled? Currently looking at it or a Sapphire 6900 XT Nitro


I've got the Toxic Air Cooled; it's a fine card indeed. It clocks pretty well even with the stock cooling and looks just great, so yeah, I can absolutely recommend it

Timespy Extreme with 3800XT CPU


----------



## jonRock1992

ZealotKi11er said:


> 6950 XT will most likely have USB-C output if it shares the same PCB as 6900 XT.


I'm hoping there are AIB versions that don't have it. The only reason I'm in this predicament is Asus's DUMB USB overcurrent-protection feature on the Dark Hero. I'm just too lazy to sell the Dark Hero and replace it with a different mobo.


----------



## alceryes

Has anyone experienced a substantial Time Spy (or similar benchmarking program) *graphics* score improvement from upgrading your CPU only? (not CPU + motherboard + memory)
If yes, please list the before and after CPUs and graphics scores. I'm wondering how much my 9900K is holding back my 6900 XT.


----------



## rodac

LtMatt said:


> It's what can happen when the paste dries out and or contact is not very good. It's time to take the step and re-paste, things can only get better. So long as you don't break anything.


Thanks @LtMatt , yes, I am repasting this within a few days; I took a day off work just for that. I just need to clear my desk, monitor, headphone amps etc., and not rush the unmounting and opening. Going slow will minimize the risks, but yeah, I just wanted to share those stats, and we'll see how things improve with fresh paste. The next step will be to tune it up a little to see if I can take it a bit beyond what the Toxic option can do; I have looked into your previous posts and will be using that info. You have the same hardware, I believe, so in theory I should be able to get near the sort of performance you get on your current hardware, but it's a steep learning curve for me.


----------



## LtMatt

rodac said:


> Thanks @LtMatt , yes, I am repasting this within a few days, took a day off work just for that. just need to clear my desk, monitor, headphone amps etc... and not rush things to unmount, open etc... going slow will minimize risks, but yea just wanted to share those stats and will see how this improves with newer paste. The next step will be to see how I can tune this up a little bit to see if I can take it a bit beyond what the toxic option can do, I have looked into your previous posts and will be utilizing this info. You have the same hardware I believe, so in theory I should be able to get nearer the sort of perf you get on your current hardware, but a steep learning curve for me.


Feel free to send me a message if you need help or have additional questions. Would be good if you took some pictures along the way as you do it, if you can be bothered.


----------



## rodac

LtMatt said:


> Feel free to send me a message if you need help or have additional questions. Would be good if you took some pictures along the way as you do it, if you can be bothered.


That is great, sure, I will take some photos, good idea. If I get stuck later on with tuning options, I sure will ask ;-) within reason. For now, the only issue I have had is with the Thermalright TF8 paste I got: there is no way to check whether the product is authentic, and I believe that a significant percentage of fake products are sold by Amazon and cause problems. So I contacted Thermalright, and it looks like there is a potential issue. They wrote back stating that their recent products carry a code to verify authenticity, and mine does not have one, and the packaging images on their website do not match what my packaging looks like (worst case it is a fake, best case it is an old product that reached the end of its shelf life). So I purchased another tube, an Arctic MX-4 for less than 7 pounds, and that one did have the authenticity code, which I verified. It won't ruin me for sure, and I will not get into trouble.


----------



## ArchStanton

rodac said:


> a significant percentage of fake products are sold by Amazon


possibly you mean "are sold by third parties through Amazon", or do you mean sold by Amazon itself? I am curious to hear if this has become an issue with Amazon itself.


----------



## D-EJ915

ArchStanton said:


> possibly you mean "are sold by third parties through Amazon", or do you mean sold by Amazon itself? I am curious to hear if this has become an issue with Amazon itself.


amazon and "fulfilled by amazon" usually get put into the same inventory pool it seems so both


----------



## rodac

ArchStanton said:


> possibly you mean "are sold by third parties through Amazon", or do you mean sold by Amazon itself? I am curious to hear if this has become an issue with Amazon itself.


Sold by a third party on Amazon. I am not sure it is a fake, but given the price of the GPU, I have a no-risk policy.


----------



## ZealotKi11er

jonRock1992 said:


> I'm hoping that there are AIB versions that don't have it. The only reason I'm in this predicament is because of Asus's DUMB USB overcurrent protection feature for the Dark Hero. I'm just too lazy to sell the dark hero and replace it with a different mobo.


Completely forgot about that. Yeah that would be cool to flash but I have a feeling it might not be possible if they make more changes.


----------



## ArchStanton

@rodac thank you for the clarification. I purchase quite a few items, both personal and business related, through the Amazon "portal", and if Amazon itself had begun to peddle knockoffs... that would be bad.


----------



## ArchStanton

D-EJ915 said:


> amazon and "fulfilled by amazon" usually get put into the same inventory pool it seems so both


I have always interpreted "fulfilled" to mean "facilitated" in a logistical sense. Amazon itself certainly gets a chunk of the pie, but I don't consider them the baker in that case. 🤷‍♂️


----------



## Blameless

alceryes said:


> Has anyone experienced a substantial Time Spy (or similar benchmarking program) *graphics* score improvement from upgrading your CPU only? (not CPU + motherboard + memory)
> If yes, please list the before and after CPUs and graphics scores. I'm wondering how much my 9900k is holding back my 6900 XT.


Just a CPU upgrade should do very little to the graphics score, unless you have an extremely slow CPU, which you don't.


----------



## Iarwa1N

Hi guys, I just sold my 6800 XT Red Devil and am looking for a 6900 XTXH. I have a thick 360 rad custom loop on the 5900X. Should I go for the ASRock OCF for the best PCB and get a block to integrate into my custom loop, or should I get a Sapphire Toxic Extreme Edition with the AIO? It seems that the Sapphire cards have better-binned chips in them; is this wrong? I will push the card with MPT, and I don't know if the AIO on the Toxic would be enough, since I saw lots of posts about Toxic cards having high hot spots out of the factory. Another option is to get the Asus Strix Top with AIO, but I don't know if the 240 rad on that card would be enough?


Sent from my iPhone using Tapatalk


----------



## alceryes

Iarwa1N said:


> Hi guys, I just sold my 6800xt RedDevil and looking for a 6900 xtxh. I have a thick 360 rad custom loop on the 5900x. Should I go for Asrock OCF for best PCB and get a block and integrate that into my custom loop or should I get a Sapphira Toxic Extreme Edition with Aio. It seems to be that the Sapphire cards have a better binned chips in them, is this wrong? I will push the card with MPT and I don’t know the AIO on the toxic would be enough, since I saw lots of post with Toxic cards having high hot spots out of factory. Another option is to get the Asus Strix Top with AIO, but I don’t know the 240 rad on that card would be enough?
> 
> 
> Sent from my iPhone using Tapatalk


Ultimately, it's gonna be the roll of a die. Even within these higher-binned XTXH cores you'll have ones that can barely pass what a standard XTX can do (bad luck) and ones that will be stable at 2800MHz+ (good luck). Yours will likely fall somewhere in the middle of these two extremes.

If you already have a great custom loop, and adding a GPU block to it wouldn't be much work, I would go that route - but that's just me.


----------



## LtMatt

Iarwa1N said:


> Hi guys, I just sold my 6800xt RedDevil and looking for a 6900 xtxh. I have a thick 360 rad custom loop on the 5900x. Should I go for Asrock OCF for best PCB and get a block and integrate that into my custom loop or should I get a Sapphira Toxic Extreme Edition with Aio. It seems to be that the Sapphire cards have a better binned chips in them, is this wrong? I will push the card with MPT and I don’t know the AIO on the toxic would be enough, since I saw lots of post with Toxic cards having high hot spots out of factory. Another option is to get the Asus Strix Top with AIO, but I don’t know the 240 rad on that card would be enough?
> 
> 
> Sent from my iPhone using Tapatalk


Agree with the post from alceryes.

What I will say is that the Toxic Extreme Editions are binned (they guarantee a boost clock of around 2740 MHz), so I think that may give you the best chance of a good clocker. But ultimately it's still a lottery, and any of the XTXH GPUs could be as good or better.


----------



## deadfelllow

After repasting/remounting like 949134 times on my Sapphire EE XTXH I finally got good results. Insane cable management for more contact shown below. I zip-tied the cables to my case for better contact on the GPU die lmao


----------



## CS9K

deadfelllow said:


> After repasting/remounting like 949134 times on my sapphire ee xtxh i finally get good results. Insane cable management for more contact shown below. I zipped the cables to my case for better contact on gpu die lmao
> 
> 
> View attachment 2555322
> 
> View attachment 2555325
> 
> 
> 
> 
> 
> 
> 
> View attachment 2555323
> 
> View attachment 2555324


Good lord, those Sapphire EE's have been some of the best bins out there!


----------



## deadfelllow

CS9K said:


> Good lord, those Sapphire EE's have been some of the best bins out there!


But I can assure you, you don't want to repaste those Sapphire EEs xD. It's really annoying because of the contact. Tbh it took me about 30 tries to get good results.


----------



## LtMatt

deadfelllow said:


> But i can assure you dont want to repaste those Sapphire EE s xD. Its really annoying because of the contact. Tbh its my 30th time to find good results


Lol, truth. Exactly why i wrote that guide.


----------



## deadfelllow

LtMatt said:


> Lol, truth. Exactly why i wrote that guide.


Exactly. But actually, my savior was the Corsair 5000D's zip ties for holding the cables tight.


----------



## rodac

deadfelllow said:


> Exactly. But actually, my savior is corsaid 5000d zips for holding cables tight


Nice case indeed, that Corsair 5000D.
Re: your earlier post, what was the problem with those screws when you re-assembled your Toxic EE? Did you end up finding replacement screws, or did you end up applying more pressure to get the other end to stick out, which made it easier? I am asking because I am soon going to re-paste that very same GPU and I am trying to get ready for the challenge. It looks like you had to try this many times to get it right.
And which thermal paste did you apply?


----------



## PanZwu

jonRock1992 said:


> I'm hoping that there are AIB versions that don't have it. The only reason I'm in this predicament is because of Asus's DUMB USB overcurrent protection feature for the Dark Hero. I'm just too lazy to sell the dark hero and replace it with a different mobo.


same boat - tried to flash my RDU on my ASUS TUF board - same overcurrent protection.


----------



## deadfelllow

rodac said:


> Nice case indeed that Corsair 5000D.
> Ref, you previous post earlier , what was the problem with those screws when you re-assembled your Toxic EE, did you end up finding replacement screws OR you ended up applying more pressure to get the other end to stick out and that made it easier ? I am asking because I am soon going to re-paste that very same GPU and I am trying to get ready for the challenge. It looks like you had to try this many times to get it right.
> And which Thermal paste did you apply ?


Hello, the problem was the screws. You need to try like 435345 times to get good contact between the plate and the die. I'm using my old GTX 1060 screws on the 6900 XT and it works like magic. Don't repaste it until you have to.


----------



## jonRock1992

PanZwu said:


> same boat - tried to flash my RDU on my ASUS TUF board - same overcurrent protection.


It really sucks because our motherboards are limiting the potential performance of our GPUs. I wonder why ASUS motherboards do this? I have contacted @safedisk about the issue, but haven't heard back from him in a while.


----------



## rodac

deadfelllow said:


> Hello the problem was screws. You need to try like 435345 times to get good contact for your plate and die . Im using my old gtx 1060 screws on 6900xt and it works like magic. Dont repaste it until you have to.


Sure, it makes sense indeed. I just got through my repaste tonight, and I did sweat quite a bit; I made it, but at sloth pace. Posting the outcome shortly. Thanks


----------



## rodac

Successful *re-paste* job for my *AMD Radeon Sapphire 6900 XT Toxic Extreme*. based on @LtMatt 's tutorial (see above in thread)

Tonight, I repasted my GPU at 'sloth' pace within about 5 hours, and did my AMD Ryzen 5950X CPU while I was at it, using the same thermal paste, Thermalright TF8.
I chose the TF8 over the MX-4 in the end after I tested both on a piece of cardboard; I found the MX-4 to lack viscosity, although it is easier to apply. It appears that higher viscosity may help under high-temperature loads.

Here, I want to give *special credit to @LtMatt * for his amazing tutorial further up this thread that gave me the confidence to do this challenging job when you put in context the price of the GPU and its age (10 months) and warranty risk, that is quite scary indeed. This shows what collaboration can do. 
All went fine, I did not have to repeat any of the steps in @LtMatt 's tutorial.

*Before and After comparison*

| Metric | Before | After | Comment |
| --- | --- | --- | --- |
| Top hot spot | 96 C | 77 C | Amazingly good! |
| Hot spot minus GPU temp | 41 C | 24 C | Amazingly good! |
| Top GPU frequency | 2690 MHz | 2733 MHz | Back to normal |
| Top memory clock | 2592 MHz | 2886 MHz | Unexpected rise? |
| Fan speed | 67% | 52% | Much lower thermal load, very good |
| Top GPU temp | 57 C | 56 C | Unexpected, hardly any change? |
| GPU chip power draw | 357 W | 358 W | No change here, of course |

I also re-ran the 3DMark benchmarks and noted a significant improvement in the graphics scores.
The CPU AIO fans appear to be quieter, although the top CPU temperature reaches the same peak; it looks as if the AIO has less work to do to bring the temperature down, but I did not note any CPU performance score improvement in 3DMark.


*The tools for the job*
























*BEFORE*: Here is what the factory paste condition looked like


















*AFTER*: Here is the sticky ThermalRight TF8 paste.



















My machine after I rebuilt it, a rather big case that sits on my desk.










Other images










Glad that @LtMatt insisted on turning those screws slowly, a few turns at a time in a loop sequence; that kept the springs from jumping forward when I disassembled the retention bracket, and allowed the plate to seat well on the chip at reassembly time.


----------



## CS9K

Let's hear it for @LtMatt! Hear hear! 🍻


----------



## jonRock1992

rodac said:


> Successful *re-paste* job for my *AMD Radeon Sapphire 6900 XT Toxic Extreme*. based on @LtMatt 's tutorial (see above in thread)
> 
> Tonight, I repasted my GPU at 'sloth' pace and within about 5 hours, did my AMD Ryzen 5950X CPU while I was at it, I used the same thermal paste ThermalRight TF8.
> I chose the TF8 over MX4 in the end after I tested both on a piece of cardboard, I found the MX4 to lack viscosity although it is easier to apply. It appears that better viscosity may help at high temperatures loads.
> 
> Here, I want to give *special credit to @LtMatt * for his amazing tutorial further up this thread that gave me the confidence to do this challenging job when you put in context the price of the GPU and its age (10 months) and warranty risk, that is quite scary indeed. This shows what collaboration can do.
> All went fine, I did not have to repeat any of the steps in @LtMatt 's tutorial.
> 
> *Before and After comparison [before],[After], [Comment]*
> Top Hot Spot: 96 C, 77 C -> Amazingly good !
> Diff temp Hot Spot and GPU temp: 41 C, 24 C -> Amazingly good !
> Top GPU frequency Mhz: 2690, 2733 -> Back to Normal.
> Top Memory Clock Mhz: 2592, 2886 -> Unexpected rise ?
> Fan Speed: 67%, 52% -> Much less temperature load, very good
> Top GPU temp: 57 C, 56 C -> Unexpected, hardly any change ?
> GPU Chip Power Draw: 357 W, 358 W -> No change here of course
> 
> I also re-ran 3D Mark benchmarks and noted a significant improvements in the graphics scores.
> The CPU AIO fans appears to be quieter although the top CPU temperature reaches the same peak, it looks as if there is less work for the AIO to bring the temperature down but I did not note any performance score improvement for the CPU in 3D Mark.
> 
> 
> *The tools for the job*
> View attachment 2555464
> View attachment 2555469
> 
> View attachment 2555465
> 
> 
> *BEFORE*: Here is what the factory paste condition looked like
> View attachment 2555466
> 
> 
> View attachment 2555468
> 
> 
> *AFTER*: Here is the sticky ThermalRight TF8 paste.
> 
> View attachment 2555470
> 
> 
> View attachment 2555471
> 
> 
> My machine after I rebuilt it, a rather big case that sits on my desk.
> 
> View attachment 2555472
> 
> 
> Other images
> 
> View attachment 2555473
> 
> 
> Glad that @LtMatt insisted to turn those screws slowly a few turns at a time in a loop sequence, that avoided those the springs from jumping forward when I disassembled that retention bracket and allowed the plate to stick well on the chip at reassembly time.
> 
> View attachment 2555474


Nice job! Is your memory clock really that high, though? That seems impossible. I think these GPUs usually top out at around 2400MHz mem clock with the LC vBIOS.


----------



## LtMatt

rodac said:


> Successful *re-paste* job for my *AMD Radeon Sapphire 6900 XT Toxic Extreme*. based on @LtMatt 's tutorial (see above in thread)
> 
> Tonight, I repasted my GPU at 'sloth' pace and within about 5 hours, did my AMD Ryzen 5950X CPU while I was at it, I used the same thermal paste ThermalRight TF8.
> I chose the TF8 over MX4 in the end after I tested both on a piece of cardboard, I found the MX4 to lack viscosity although it is easier to apply. It appears that better viscosity may help at high temperatures loads.
> 
> Here, I want to give *special credit to @LtMatt * for his amazing tutorial further up this thread that gave me the confidence to do this challenging job when you put in context the price of the GPU and its age (10 months) and warranty risk, that is quite scary indeed. This shows what collaboration can do.
> All went fine, I did not have to repeat any of the steps in @LtMatt 's tutorial.
> 
> *Before and After comparison [before],[After], [Comment]*
> Top Hot Spot: 96 C, 77 C -> Amazingly good !
> Diff temp Hot Spot and GPU temp: 41 C, 24 C -> Amazingly good !
> Top GPU frequency Mhz: 2690, 2733 -> Back to Normal.
> Top Memory Clock Mhz: 2592, 2886 -> Unexpected rise ?
> Fan Speed: 67%, 52% -> Much less temperature load, very good
> Top GPU temp: 57 C, 56 C -> Unexpected, hardly any change ?
> GPU Chip Power Draw: 357 W, 358 W -> No change here of course
> 
> I also re-ran 3D Mark benchmarks and noted a significant improvements in the graphics scores.
> The CPU AIO fans appears to be quieter although the top CPU temperature reaches the same peak, it looks as if there is less work for the AIO to bring the temperature down but I did not note any performance score improvement for the CPU in 3D Mark.
> 
> 
> *The tools for the job*
> View attachment 2555464
> View attachment 2555469
> 
> View attachment 2555465
> 
> 
> *BEFORE*: Here is what the factory paste condition looked like
> View attachment 2555466
> 
> 
> View attachment 2555468
> 
> 
> *AFTER*: Here is the sticky ThermalRight TF8 paste.
> 
> View attachment 2555470
> 
> 
> View attachment 2555471
> 
> 
> My machine after I rebuilt it, a rather big case that sits on my desk.
> 
> View attachment 2555472
> 
> 
> Other images
> 
> View attachment 2555473
> 
> 
> Glad that @LtMatt insisted to turn those screws slowly a few turns at a time in a loop sequence, that avoided those the springs from jumping forward when I disassembled that retention bracket and allowed the plate to stick well on the chip at reassembly time.
> 
> 
> View attachment 2555474


Excellent work! Glad to hear it all went okay and the results look good.

That stock paste looks drier than one of Gandhi's flip flops. 

That toolkit that Sapphire included with the GPU is very handy. It's like they almost want you to use it, take apart the GPU to change paste and void your warranty. 🤔

Your next mission is to improve that cable management now. The work never stops.


----------



## rodac

LtMatt said:


> Excellent work! Glad to hear it all went okay and the results look good.
> 
> That stock paste looks drier than one of Ghandi's flip flops.
> 
> That toolkit that Sapphire included with the GPU is very handy. It's like they almost want you to use it, take apart the GPU to change paste and void your warranty. 🤔
> 
> Your next mission is to improve that cable management now. The work never stops.


Exactly what crossed my mind; it is like they anticipated that you would use that glossy toolkit to change the thermal paste. They know who their target market is.
Ah yes, the cables do not look good, it is true, but then I willingly took the risk to expose them ;-) The tempered glass panel hides this well, though it is way too heavy.

Well now, putting aside the cable work, I have this odd feeling that my chip is not a top performer, but since I now have more thermal headroom, I should be able to push it up a notch, though probably not as far as your copy.
So, I disabled the TriXX 8.6 'Toxic Boost' option (the skull-on-a-fan toggle)








and then enabled 'Rage' mode in the GPU auto-tuning in AMD Adrenalin 22.4.1 with







It sounds like it is pumping more electric juice, and it is certainly noisier, but the 3DMark score is worse than the performance I get with 'Toxic Boost'. This is obviously not the right approach, and when I went into manual tuning mode, I ended up with a blue screen.
What would be the right tool for the job?
Thanks


----------



## rodac

jonRock1992 said:


> Nice job! Is your memory clock really that high though? That seems impossible. I think these GPU's usually top out at around 2400MHz mem clock with the LC vBIOS.


Thanks @jonRock1992 , you are right, that must be an error. So we are talking about this:


* Memory Clock [MHz] *

TechPowerUp GPU-Z indeed shows the memory clock set at 2100 MHz with 'Toxic Boost' enabled, and a GPU clock set at 2730 MHz.








The log of my last run with TechPowerUp GPU-Z shows 2140; there is no trace of anything like 28xx.

And yes, 2886 was shown in the run I did yesterday at 22:00.
I am attaching the full log to show you; it may be a sensor error, I do not have a clue.


----------



## LtMatt

@rodac I'd just recommend using the Toxic Boost option which you disabled. Then set the memory to 2112 MHz with Fast Timings enabled. Away you go.


----------



## deadfelllow

rodac said:


> Thanks @jonRock1992 , you are right, that must be an error, so we are talking about this:
> 
> 
> * Memory Clock [MHz] *
> 
> TechPowerUp GPU-Z shows a clock set at 2100 Mhz indeed with 'Toxic Boost' enabled and and a set frequency of 2730 Mhz.
> View attachment 2555499
> 
> My last run with the TechPowerUp GPU-Z shows in the logs:
> 2140, there is not trace of something like 28xxx.
> 
> and yes 2886 is shown in the run I did yesterday at 22:00 h
> I am attaching the full log to show you, so it may be a sensor error, I do not have a clue
> View attachment 2555501


While under a stress test, can you pull the tubes upwards and see if there's any noticeable temp change?

Be gentle tho


----------



## PG705

Hello people, I've been following this forum for a while now and learning a lot about overclocking RDNA2 cards. I have a 6900 XT LC Strix (TOP version, so XTXH). Currently running an undervolt of 1100mV in MPT and running 2650 MHz max clocks. 

However, I was wondering if it's safe to increase VRAM voltage in MPT from 1.35v (1.356v in real life) to 1.4V (1.406v in real life). With this, I can run 2124 MHz fully stable, otherwise not really. With 1.35v I ran 2090 MHz. Of course it's a small bump in frequency and if 1.4v is not safe/degrades the VRAM, I will go back to 1.35v. But it does increase performance in some games, gains are gains 

I was just wondering what you think!


----------



## deadfelllow

PG705 said:


> Hello people, I've been following this forum for a while now and learning a lot about overclocking RDNA2 cards. I have a 6900 XT LC Strix (TOP version, so XTXH). Currently running an undervolt of 1100mV in MPT and running 2650 MHz max clocks.
> 
> However, I was wondering if it's safe to increase VRAM voltage in MPT from 1.35v (1.356v in real life) to 1.4V (1.406v in real life). With this, I can run 2124 MHz fully stable, otherwise not really. With 1.35v I ran 2090 MHz. Of course it's a small bump in frequency and if 1.4v is not safe/degrades the VRAM, I will go back to 1.35v. But it does increase performance in some games, gains are gains
> 
> I was just wondering what you think!


Stock LC bios uses 1.4 V for mem voltage, btw. (I changed the max core voltage through MPT, so don't mind that.)


----------



## PG705

deadfelllow said:


> Stock LC bios uses 1.4 V btw for mem voltage.(I changed max core voltage through mpt so dontt mind that)
> 
> View attachment 2555517


Thanks! That is the reference LC version from AMD right? The one with the 120mm radiator?

Does that card have the same physical VRAM chips, but running at 18 Gbps and thus 1.4 V?


----------



## deadfelllow

PG705 said:


> Thanks! That is the reference LC version from AMD right? The one with the 120mm radiator?
> 
> Does that card have the same physical VRAM chips but running at 18 Gbp/s and thus 1.4v?


I'm using the 6900 XT EE. Yes, that is the LC reference bios with the 120mm rad. I flashed that vBIOS to my Sapphire 6900 XT EE. I have no idea about the VRAM chip models.


----------



## alceryes

jonRock1992 said:


> Nice job! Is your memory clock really that high though? That seems impossible. I think these GPU's usually top out at around 2400MHz mem clock with the LC vBIOS.


I sometimes get an impossible VRAM speed reading - something in the 2600MHz range.
If you look at the release notes for some past AMD drivers, one of the known issues is improper reading of VRAM speeds. Seems like it's been an issue for over a year now although it's not listed in every driver's release notes.
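As a rough sanity check on such readings, here's a small sketch that converts a reported GDDR6 clock into an effective data rate and bandwidth, and flags readings well above a card's rated clock as likely sensor glitches. It assumes tools like GPU-Z report the GDDR6 command clock in MHz with an 8x per-pin data rate, and the 2250 MHz rated clock (an 18 Gbps part) is just an example threshold.

```python
# Hedged sketch: sanity-check a reported GDDR6 memory clock.
# Assumption: monitoring tools report the GDDR6 command clock in MHz,
# and the effective per-pin data rate is 8x that clock.

def gddr6_stats(reported_clock_mhz: float, bus_width_bits: int = 256):
    """Return (effective data rate in Gbps per pin, bandwidth in GB/s)."""
    data_rate_gbps = reported_clock_mhz * 8 / 1000
    bandwidth_gbs = data_rate_gbps * bus_width_bits / 8
    return data_rate_gbps, bandwidth_gbs

def looks_like_sensor_glitch(reported_clock_mhz: float,
                             rated_clock_mhz: float = 2250,
                             tolerance: float = 1.10) -> bool:
    """Flag readings well above the rated clock (e.g. the 2600+ MHz blips)."""
    return reported_clock_mhz > rated_clock_mhz * tolerance

# 2000 MHz reported -> 16 Gbps per pin, 512 GB/s on a 256-bit bus
rate, bw = gddr6_stats(2000)
print(rate, bw)                        # 16.0 512.0
print(looks_like_sensor_glitch(2886))  # True: far above an 18 Gbps part
```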


----------



## PG705

deadfelllow said:


> Im using 6900xt ee. Yes that is the LC reference bios with 120mm rad. I flashed vbios to my 6900xt sapphire ee. I have no idea about the vram chip models


Good to know, then I guess it's safe. Is there much extra performance with the LC bios? VRAM runs at higher speeds but looser timings, right?


----------



## deadfelllow

PG705 said:


> Good to know, then I guess it’s safe. Is there much extra performance with the LC bios? VRAM runs at higher speeds but looser timing right?


Correct. Higher speeds but looser timing. I have no idea which one is better tho.


----------



## CS9K

deadfelllow said:


> Correct. Higher speeds but looser timing. I have no idea which one is better tho.


Looser timings at a higher speed usually keep about the same latency. As long as you don't _increase_ memory latency, faster speeds will always be beneficial (so long as the memory is stable).
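To make the trade-off concrete, here's a back-of-envelope sketch: a timing measured in clock cycles converts to absolute latency as cycles divided by clock frequency, so if the cycle count scales up with the clock, latency in nanoseconds stays roughly flat while bandwidth rises. The numbers below are purely illustrative, not actual RX 6900 XT GDDR6 timings.

```python
# Illustrative latency math for "looser timings at higher clocks".
# Timing values and clocks here are made up for the example.

def latency_ns(timing_cycles: int, clock_mhz: float) -> float:
    """Absolute latency of a timing: cycles / frequency, in nanoseconds."""
    return timing_cycles / clock_mhz * 1000

default_bios = latency_ns(timing_cycles=28, clock_mhz=2000)  # 14.0 ns
lc_bios = latency_ns(timing_cycles=32, clock_mhz=2250)       # ~14.2 ns

# Nearly identical absolute latency, but the 2250 MHz clock still moves
# ~12.5% more data per unit time -- why looser-but-faster can be a net win.
print(round(default_bios, 2), round(lc_bios, 2))
```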


----------



## J7SC

...I would just like to have higher-than-2150 speeds _and_ tight timings  
but I probably mentioned that before, several times...


----------



## CS9K

J7SC said:


> ...I would just like to have higher-than-2150 speeds _and_ tight timings
> but I probably mentioned that before, several times...


I knowwwwwwwwwwwwwwwwwwwwwwww :< I wish I had the answers, and kind of wish RDNA2 were as easy to bust as RDNA1 was, but alas~

Oh, the things I would do to have a go at GDDR6 memory timings again like I did with my RX 5600 XT in 2020; I know SO much more now than I did then, and have water to keep the memory nice and cool.

One day... Soon™


----------



## cfranko

Cards that have the LC bios can't get more than 55 MH/s while mining, even with overclocked memory. Can this indicate that the looser timings are not actually beneficial?


----------



## CS9K

cfranko said:


> Cards that have the LC bios can't get more than 55 mh while mining even with overclocked memory, can this indicate looser timings are not actually beneficial?


What can cards on normal RX 6900 XT (XTX/XTXH-non-LC) bioses do? It may be the looser timings, it may be other things.


----------



## cfranko

CS9K said:


> What can cards on normal RX 6900 XT (XTX/XTXH-non-LC) bioses do? It may be the looser timings, it may be other things.


Normal 6900 XTs that have regular memory do around 64 MH/s.
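A rough back-of-envelope check of why that gap plausibly comes down to effective memory bandwidth: each Ethash hash performs roughly 64 random DAG accesses of 128 bytes (about 8 KiB of traffic per hash), so hashrate is capped by how fast the memory can serve small random reads. These are illustrative approximations, not measured data.

```python
# Hedged sketch: Ethash hashrate vs. memory bandwidth.
# Assumption: ~64 random DAG reads of 128 bytes per hash (approximate).

BYTES_PER_HASH = 64 * 128  # ~8 KiB of DAG traffic per hash

def bandwidth_needed_gbs(hashrate_mh: float) -> float:
    """Memory traffic in GB/s implied by a hashrate given in MH/s."""
    return hashrate_mh * 1e6 * BYTES_PER_HASH / 1e9

# 64 MH/s implies ~524 GB/s, close to the ~512 GB/s theoretical peak of
# 16 Gbps GDDR6 on a 256-bit bus -- i.e. the workload is bandwidth bound.
# 55 MH/s on the LC bios implies only ~451 GB/s, consistent with looser
# timings hurting effective bandwidth in this random-access workload.
print(round(bandwidth_needed_gbs(64), 1), round(bandwidth_needed_gbs(55), 1))
```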


----------



## J7SC

CS9K said:


> I knowwwwwwwwwwwwwwwwwwwwwwww :< I wish I had the answers, and kind of wish RDNA2 were as easy to bust as RDNA1 was, but alas~
> 
> Oh, the things I would do to have a go at GDDR6 memory timings again like I did with my RX 5600 XT in 2020; I know SO much more now than I did then, and have water to keep the memory nice and cool.
> 
> One day... Soon™


....perhaps AMD's Dr. Lisa Su reads this thread intently and will finally decide to end my long-standing suffering with the VRAM on my card, which could go much faster if only it weren't artificially limited...
Hope springs eternal


----------



## PG705

Thanks for the answers guys! Did you also tune your 6900 XT's SOC and Fclk frequencies? Stock SOC frequency is 1200 MHz, and Fclk (and Fclk Boost) runs at 1940 MHz. I can get my Fclk to 2200 MHz, haven't tested the SOC frequency yet. It increases performance a little bit, but not much.


----------



## CS9K

PG705 said:


> Thanks for the answers guys! Did you also tune your 6900 XT's SOC and Fclk frequencies? Stock SOC frequency is 1200 MHz, and Fclk (and Fclk Boost) runs at 1940 MHz. I can get my Fclk to 2200 MHz, haven't tested the SOC frequency yet. It increases performance a little bit, but not much.


"fclk" is the cache frequency, and you are correct that it doesn't do a whole lot for performance, though it does do _something_. 

Don't bother messing with the SOC clock, changing that value makes no difference.


----------



## deadfelllow

PG705 said:


> Thanks for the answers guys! Did you also tune your 6900 XT's SOC and Fclk frequencies? Stock SOC frequency is 1200 MHz, and Fclk (and Fclk Boost) runs at 1940 MHz. I can get my Fclk to 2200 MHz, haven't tested the SOC frequency yet. It increases performance a little bit, but not much.


The 6900 XT EE boots with 2200 fclk and fclk boost, but I haven't tested for performance gains yet.


----------



## PG705

CS9K said:


> "fclk" is the cache frequency, and you are correct that it doesn't do a whole lot for performance, though it does do _something_.
> 
> Don't bother messing with the SOC clock, changing that value makes no difference.


Thank you. I got around 80 points more (in TimeSpy) with the increased fclk speed if I recall correctly, so indeed not that much.


----------



## PG705

deadfelllow said:


> 6900xt ee boots with 2200 fclk and boost, but i havent tested yet for performance gains
> 
> View attachment 2555544


Curious what your performance gains are! Do you think it will improve Smart Access Memory performance?


----------



## rodac

deadfelllow said:


> While under stress test can you pull the tubes to upwards and see any noticable temp change?
> 
> Be gentle tho


Nope, I cannot see any temp difference when lifting the tubes. I am not sure; maybe it's related to how tightly the pump is screwed to the board, or maybe to the type of thermal paste you used and how much of it you applied?


----------



## rodac

LtMatt said:


> @rodac I'd just recommend using the Toxic Boost option which you disabled. Then set the memory to 2112Mhz with Fast Timings enabled. Away you go.


Thanks, this setting worked indeed: no blue screen like I got before. You are talking about the AMD Adrenalin tuning section, of course.
I tried this VRAM setting, which did not do much for me on its own, then I started playing with other settings and increasing the frequency percentage bar as well. I was able to increase the top and average frequency successfully, but in Time Spy (1440p) the best graphics score I could get was 22323; the best I ever got in the past was 22565.
I am not sure I will be able to get much more out of it, but eh, at least it runs cooler.
==========
Edited 11th April: this works!!!
I did further adjustments and ran some more benchmarks, and the improvement is clear-cut. Those settings on their own, combined with a few more, do "deliver the goods", so I will re-post with some actual details and links.


----------



## rodac

alceryes said:


> I sometimes get an impossible VRAM speed reading - something in the 2600MHz range.
> If you look at the release notes for some past AMD drivers, one of the known issues is improper reading of VRAM speeds. Seems like it's been an issue for over a year now although it's not listed in every driver's release notes.


OK, that explains it. I just got readings of over 3,000 MHz for the VRAM; clearly, that cannot be right.


----------



## OCmember

Just installed a Sapphire 6900XT Toxic Air cooled. Replaced a 3080 Ti. So far so good!


----------



## alceryes

OCmember said:


> Just installed a Sapphire 6900XT Toxic Air cooled. Replaced a 3080 Ti. So far so good!


3080 Ti and 6900 XT are pretty evenly matched. What made you switch?

When I finally decided to pull the trigger, it was the 16GB vs 10 or 12GB that led me to the 6900 XT. It should ensure the card grows old gracefully, instead of performance just dropping off a cliff when the top AAA game of 2024 needs over 12GB of VRAM for all the high-quality textures (at max settings).


----------



## Luggage

alceryes said:


> Has anyone experienced a substantial Time Spy (or similar benchmarking program) *graphics* score improvement from upgrading your CPU only? (not CPU + motherboard + memory)
> If yes, please list the before and after CPUs and graphics scores. I'm wondering how much my 9900k is holding back my 6900 XT.


No.








Result: www.3dmark.com





3800x to 5800x
GPU +0.9%
CPU +14%


----------



## OCmember

alceryes said:


> 3080 Ti and 6900 XT are pretty evenly matched. What made you switch?
> 
> When I finally decided to pull the trigger, it was the 16GB vs 10 or 12GB that led me to the 6900 XT. It should ensure the card grows old gracefully, instead of performance just dropping off a cliff when the top AAA game of 2024 needs over 12GB of VRAM for all the high-quality textures (at max settings).


Purely curiosity. Most benchmarks say they are relatively even, but after not even an hour of gameplay in CS:GO and UT4, I can already tell something is better with AMD and the 6900 XT Toxic. Maybe it's the Anti-Lag implementation, I don't know. I ran Low Latency Mode at Ultra with my Ti and it's nothing like this with AMD. I was using the NVIDIA 510.06 drivers; maybe something was wrong with them, or maybe I was doing something wrong with my 3080 Ti setup. I don't know, but the 6900 XT is miles ahead of the 3080 Ti for how I use it. I play at native 1080p, no ray tracing, no frills and thrills, just plain ole competitive low-latency high-fps settings.

What's troubling me now is what to do with the EVGA 3080 Ti FTW3 Ultra.


----------



## Asael665

Hi there.
Is there anyone here who can share a link to warranty void sticker labels from AliExpress or somewhere? Sapphire Radeon RX 6900 XT TOXIC Extreme Edition here.
After 6 months of use I am getting hot spot temps of 90C, and 95-97C when OC'd to 2600 MHz and above... (after reaching 97C the whole system restarts and the AMD driver resets the OC settings and reports that there was an error...)
So I think new paste like Kryonaut should do the trick (right?).
But I'm not sure I want to void my warranty :/


----------



## deadfelllow

Asael665 said:


> Hi there.
> Is there anyone here who can share a link to warranty void sticker labels from AliExpress or somewhere? Sapphire Radeon RX 6900 XT TOXIC Extreme Edition here.
> After 6 months of use I am getting hot spot temps of 90C, and 95-97C when OC'd to 2600 MHz and above... (after reaching 97C the whole system restarts and the AMD driver resets the OC settings and reports that there was an error...)
> So I think new paste like Kryonaut should do the trick (right?).
> But I'm not sure I want to void my warranty :/


@LtMatt can help you with stickers. Also, I would not recommend using Kryonaut on your bare die; Gelid GC Extreme is better.


----------



## LtMatt

I had a look and unfortunately it looks like they are out of stock at the moment. Here's the link for everyone in case they come back into stock. 180 sets of 6 mm diameter W and D tamper evidence label stickers, Item No. V52|Assorted Stickers| - AliExpress


----------



## Asael665

Hi @LtMatt, thank you!
I hope they come back in stock, and now I know what to look for 
Thank you guys!


----------



## deadfelllow

Hello guys,

On the LC BIOS, which memory speed is the sweet spot without a performance drop?

E.g. on the stock BIOS, 2120-2130 MHz is the sweet spot.


----------



## ArchStanton

deadfelllow said:


> I would not recommend to use kyronaut on your bare die. Gelid GC extreme is better.


Just asking. Would it be fair to say that "original" Kryonaut is not recommended for bare die unless the user already plans to re-paste every two months or so? Is the consensus that Kryonaut "Extreme" would be more suitable for long-term deployment? I have yet to use either; I just like filing these little tidbits away in my "rules of thumb" folder.


----------



## Asael665

So I just wanted to let you know that I repasted the 6900 XT (yes, I did use Thermal Grizzly, as I will be selling the card at the end of the year anyway).
All went well, no issues with the AIO pump mounting (thanks @LtMatt for your tips).
Temps went from 93-97C and shutdowns to 76-80C with the Trixx OC at about 2630-2680 MHz, and the fans are SO MUCH quieter!


----------



## LtMatt

Asael665 said:


> So I just wanted to let you know that I repasted the 6900 XT (yes, I did use Thermal Grizzly, as I will be selling the card at the end of the year anyway).
> All went well, no issues with the AIO pump mounting (thanks @LtMatt for your tips).
> Temps went from 93-97C and shutdowns to 76-80C with the Trixx OC at about 2630-2680 MHz, and the fans are SO MUCH quieter!


Nice, decent result!!


----------



## rodac

Asael665 said:


> So I just wanted to let you know that I repasted the 6900 XT (yes, I did use Thermal Grizzly, as I will be selling the card at the end of the year anyway).
> All went well, no issues with the AIO pump mounting (thanks @LtMatt for your tips).
> Temps went from 93-97C and shutdowns to 76-80C with the Trixx OC at about 2630-2680 MHz, and the fans are SO MUCH quieter!


Great to hear. I did the same job a few days ago following @LtMatt's guide and am very happy with the outcome.


----------



## rodac

ArchStanton said:


> Just asking. Would it be fair to say that "original" Kryonaut is not recommended for bare die unless the user already plans to re-paste every two months or so? Is the consensus such that Kryonaut "extreme" would be more suitable for long term deployment? I have yet to use either. I just like filing these little tid-bits away in my "rules of thumb" folder.


I agree with @deadfelllow. I have read from quite a few people, in Amazon reviews as well, that viscosity is important. Gelid is one of the thicker pastes; they are a little harder to apply.


----------



## deadfelllow

rodac said:


> I agree with @deadfelllow. I have read from quite a few people, in Amazon reviews as well, that viscosity is important. Gelid is one of the thicker pastes; they are a little harder to apply.


I have tried most of the pastes, I believe. Lately I used Thermalright TFX and it sucks for me. It is so solid you cannot even spread it  I applied it as a dot on the die and could not spread it at all. I put the Thermalright TFX tube in boiled water and waited for it to liquify a bit; after that I was able to use TFX XD. But the results are worse than Gelid GC. Also, Gelid is very easy to use.

GC is better than Kryonaut on GPUs, I believe. I have no idea about CPU applications.


----------



## kairi_zeroblade

deadfelllow said:


> I have tried most of the pastes, I believe. Lately I used Thermalright TFX and it sucks for me. It is so solid you cannot even spread it  I applied it as a dot on the die and could not spread it at all. I put the Thermalright TFX tube in boiled water and waited for it to liquify a bit; after that I was able to use TFX XD. But the results are worse than Gelid GC. Also, Gelid is very easy to use.
> 
> GC is better than Kryonaut on GPUs, I believe. I have no idea about CPU applications.


I have tried Kryonaut Extreme for both CPU and GPU applications; it lasts longer than the ordinary/OG Kryonaut, though it is expensive. I have TFX now and have had no issues using it; I use a hairdryer to heat it up before spreading, and the temperatures are almost identical to what Kryonaut Extreme exhibits.

Gelid has relabeled the Extreme, and based on some reviews it is not performing any better than the older one.


----------



## rodac

LtMatt said:


> @rodac I'd just recommend using the Toxic Boost option which you disabled. Then set the memory to 2112Mhz with Fast Timings enabled. Away you go.


OK, OK, so I had written earlier that it did not work, hmmmmm... but it does.
Here we go again, *credit to @LtMatt* based on the post above.

Today I had a day off work (sometimes I get to play after my 9-hour day) and at last... I could spend a couple of hours testing this out properly, and yes, it works!

Here are some actual results/evidence and the settings used. Quite basic, nothing is smoking, and without going into extreme "Igor territory" with the MorePowerTool tweaks that let you override the hardware limits; this is just using AMD Adrenalin out of the box. That is pretty decent indeed; yes, I dare say I am happy with it as it is... well, one could always get a bit more. How about a 3090 Ti? Just joking, though I am tempted, just to have a look 

I am just left wondering what it would have been like if I had been luckier in the silicon lottery.

Benchmarking produced the best results I have ever had with this GPU.

Highest frequency reached is now up to 2797 MHz, and the average was around 2566 MHz.

*AMD Adrenaline 22.4.1 *
(Tested April 11th 2022 with *Sapphire Toxic Extreme 6900*)

*1- Settings*

(screenshots of the Adrenalin tuning settings)

You probably do not need this max frequency setting, but my silicon is not getting up there anyway.

*2 - Benchmarking*


CyberPunk 2077 (the new Crysis?)
Resolution: 3840 x 2160 (4K)
FPS: 45.25
Quick Preset: Ultra (the quality template at the top of the graphics options)
AMD FidelityFX Super Resolution 1.0: OFF
VSync: OFF

Resolution: 3840 x 2160 (4K)
FPS: 74.16
Quick Preset: Ultra (the quality template at the top of the graphics options)
AMD FidelityFX Super Resolution 1.0: Ultra Quality (resolution enhancer, best quality)
VSync: OFF

That is about 10 frames less than what the 3090 Ti can now achieve natively (55), so maybe not worth purchasing a 3090 Ti. Plus, I am really wondering what kind of next-gen GPU AMD is going to release next 
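For what it's worth, the percentage uplift FSR gives here works out like this (a quick sketch using only the two FPS figures above):

```python
# FPS uplift from FSR 1.0 Ultra Quality at 4K in Cyberpunk 2077,
# using the benchmark numbers reported above.
fps_native = 45.25  # FSR off
fps_fsr = 74.16     # FSR Ultra Quality

uplift_pct = 100 * (fps_fsr - fps_native) / fps_native
print(f"FSR uplift: {uplift_pct:.1f}%")  # roughly a 64% uplift
```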


3D Mark
TimeSpy Extreme Graphics Score: *11169*

I scored 10 843 in Time Spy Extreme
AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 131072 MB, 64-bit Windows 10
www.3dmark.com

I scored 21 083 in Time Spy
AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 131072 MB, 64-bit Windows 10
www.3dmark.com


TimeSpy Graphics Score: *23116*


----------



## LtMatt

If you want more performance or a higher score, you'll need to up the power limit using MPT.


----------



## rodac

LtMatt said:


> If you want more performance or a higher score, you'll need to up the power limit using MPT.


Cool, thanks, that is what I suspected, but good to hear it from you. Was I right in thinking that MPT overrides the hardware limits? I do not want to break my GPU before I can buy that Radeon 4900 
Everywhere I read, the claim is that it will be 2.5 times faster. I struggle to believe that, but then Intel is on their heels and entering the market 


----------



## LtMatt

rodac said:


> Cool, thanks, that is what I suspected, but good to hear it from you. Was I right in thinking that MPT overrides the hardware limits? I do not want to break my GPU before I can buy that Radeon 4900
> Everywhere I read, the claim is that it will be 2.5 times faster. I struggle to believe that, but then Intel is on their heels and entering the market


Up to 400W should be fine with the Toxic PCB, in theory. I use a 399W daily limit for max OC gaming. Then I undervolt if I want to bring power down to 300W or below.


----------



## deadfelllow

rodac said:


> Cool, thanks, that is what I suspected, but good to hear this from you. Was I right thinking that MPT overrides the hardware limits, I do not want to break my GPU until I can buy that Radeon 4900
> Everywhere I read, the claims is that it would be 2.5 times faster, I struggle to believe that but then Intel is on their heels and entering the market


The power limit doesn't matter that much for degradation (if your temps are safe). If you're not going to increase the Vcore voltage (which is not needed for regular gaming), you're good to go with a 400W power limit.

I'm going to post two pics below so you can see how the fps differ (look at the total power of the GPU).

First img: 450W TDP max.
Second img: 390W TDP max, I guess.

Also, don't mind the 80 MHz frequency difference between the images; that won't account for +15 fps, for sure.


----------



## rodac

LtMatt said:


> Up to 400W should be fine with the Toxic PCB, in theory. I use a 399W daily limit for max OC gaming. Then I undervolt if I want to bring power down to 300W or below.


All fine then, that is what I intended to do, and I understand it should be safe, i.e. to feed in as many watts as possible by sliding the power bar all the way to the right in the UI (power limit %). But you are hinting that I need another tool to push the GPU to the max 399 W; I cannot do this with Adrenalin, right?
By the way, I forgot to provide the top watt consumption figure. In my latest run earlier today, the highest reading was:

GPU Chip Power Draw [W]: 385

So it looks like there is room for more power input indeed, all the way up to 400.

OK, so you are talking about this tool below, right?

MPT download link:

RED BIOS EDITOR and MorePowerTool for Polaris, Navi and Big Navi - MPT 1.3.18 | Page 3 | igor'sLAB
New curve options have been added. Of course we would be very happy about a feedback in the forum about the use of this new function! The More PowerTool (MPT) has been also revised once again for the…
www.igorslab.de


----------



## rodac

deadfelllow said:


> Power limit doesnt matter that much for degradation(if your temps are safe). If you re not gonna increase the Vcore voltage(which is not needed for regular gaming) you re good to go with 400W power limit.
> 
> Im going to post a 2 pics below and see fps differ. (Look at the total power of gpu)
> 
> First img 450w TDP max.
> Second img 390 TDP max i guess
> 
> Also dont mind that 80 frequency difference between the images. It wont affect +15 fps for sure


That comparison is good to have indeed, plus I like the night-time screenshot, nice reflections in 4K.

At 450 W you get 113 FPS.
At 390 W you get 97 FPS.

Well, that certainly makes a clear statement: more is better.

PS: which tool did you use to display those useful stats in the top-left corner? It does not look like out-of-the-box Adrenalin wizardry.

Thanks
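As a quick sanity check, the perf-per-watt of those two runs works out like this (a sketch using only the figures posted above):

```python
# Frames per watt for the two power limits compared above.
runs = {450: 113, 390: 97}  # power limit (W) -> FPS

for watts, fps in runs.items():
    print(f"{watts} W: {fps} fps -> {fps / watts:.3f} fps/W")
```

Interestingly, by these two data points the 450 W run is not only faster but also marginally more efficient per watt, though a difference that small is within run-to-run noise.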


----------



## deadfelllow

rodac said:


> That is good to have that comparison indeed, plus I like the screen print by night, nice reflections in 4K.
> 
> At 450 W you get 113 FPS.
> at 390 W, you get 97 FPS
> 
> Well, that certainly makes a clear statement, more is better.
> 
> ps: which tool did you use to display those useful stats in the top left corner, does not look like out of the box Adrenaline wizardy.
> 
> Thanks


I'm using Afterburner, and HWiNFO with RTSS.


----------



## PanZwu

The more PL, the more clocks you get; mine seems to settle at ~2750 MHz core clock at around 500W. I tried even more, but it wouldn't clock over 2800 MHz without crashing in Time Spy.
The max I got was a 25300 Time Spy GPU score; more isn't possible without the LC BIOS.


----------



## deadfelllow

PanZwu said:


> the more PL the more clocks you get - mine seems to settle at 2750ish mhz core clock at like 500W. tried even more but it wouldnt clock over 2800mhz without crashing in Timespy.
> max i got was 25300 Timespy GPU Score. more aint possible without LC Bios


Time Spy is a very demanding test, that's why. But in games, most of the XTXH cards can easily run at 2800 MHz (effective clock) with stock settings (332W PPT).


----------



## Nizzen

CS9K said:


> DX12 is a massive stuttery mess for a lot of, I would probably say most, people. I'm happy that it works for you, but until ASOBO start making improvements specific to DX12, for some/most folks, DX11 is the way to go. ASOBO currently are not on the record saying that they have made any DX12-specific improvements... so far. It's coming, though, hopefully in Sim Update 10, per ASOBO.


Could the stutter be because of the Ryzen CPU in FS2020?
Last time I tried FS2020, it was way smoother on my Intel 12900K platform vs my AMD 5950X. 3090 GPU on both.


----------



## LtMatt

LtMatt said:


> And how tedious it is.
> 
> 
> 1) As ArchStanton suggested. What I will say is I recommend that you have the GPU running for 15 minutes or so at full load and put the liquid metal tube on top of the radiator. I find it spreads easier when it is warm. Also a warning, sometimes it can be a bit of a pain to spread and takes patience and gentle strokes, you'll get the hang of it but it can be a bit fiddly until you get used to it.
> 
> 2) I've just taken apart my Toxic this morning to repaste it with liquid metal as I was using standard thermal paste previously. I'll share steps, screenshots etc below.
> 
> 3) It'll be clear in the screenshots below. Brace yourself folks, wall of text and screenshots incoming...
> 
> *Sapphire Toxic 6900 XT AIO Disassembly, Re-paste and Reassembly guide* (sponsored by theregoesmywarranty.com)
> 
> As Juan Carlos likes to say in Far Cry 6, always use the right tool for the right job. Make sure you have a screwdriver set as you'll need a couple of different types of screw heads.
> 
> I also suggest you take note of GPU temperatures (Edge and Junction/Hotspot) running a game or benchmark so you have something to compare to. Try to keep ambient temperature similar so as to not affect the results. What we are looking for is a lower delta between edge and junction and a lower junction temperature overall.
> 
> Step1:
> Remove the two screws from the PCI bracket.
> View attachment 2553922
> 
> 
> Step2:
> Remove the five screws from the shroud. Three on top, two on the bottom that are not shown below.
> View attachment 2553923
> 
> 
> Step3:
> Gently pull the shroud off. Be careful not to pull out the RGB, pump and fan wires which are all tucked in near the bottom right of the shroud.
> View attachment 2553943
> 
> 
> Step4:
> Turn the GPU over and remove the retention bracket. Two full turns following the X pattern in the order shown.
> View attachment 2553924
> 
> 
> Step5:
> Turn the GPU over again and gently lift the pump off the GPU die as shown below. Clean the thermal paste using a lint free cloth or similar and some isopropyl 99% alcohol or similar. Repeat for the copper baseplate on the pump.
> View attachment 2553925
> 
> 
> Step6:
> Clean GPU die and copper baseplate. I personally don't bother to clean the outsides of the GPU die as I'll be using liquid metal and I figure that gives it a little protection should a bit of liquid metal spill onto the nearby components. Some people clean it all up and put thermal tape over the components, it's up to you what you do here. If you are careful and don't apply too much liquid metal, my method works fine.
> View attachment 2553926
> 
> 
> Step7:
> Apply a half pea size drop of liquid metal. If you put on more than this, you can suck it back up using the liquid metal applicator shown in the second screenshot.
> View attachment 2553927
> 
> 
> View attachment 2553928
> 
> 
> Step8:
> Start spreading the liquid metal. You can use either the brush or the qtip, I prefer the qtip as it gives more control with the brush it is possible to flick some off the GPU die. Slow long strokes to spread it. If you followed my earlier tip and heated up the tube by resting it on the GPU radiator under load, it should spread fairly easily.
> View attachment 2553929
> 
> 
> Step9:
> Once complete, it should look something like this. I personally don't recommend putting a layer on the copper baseplate, a thin coating should give the overall best results.
> View attachment 2553932
> 
> 
> Step10:
> Gently Re-attach the pump to the GPU die.
> View attachment 2553933
> 
> 
> Step11:
> Turn the GPU over. Now you will see the problem I mentioned before with the Toxic. The screw holes on the pump do not come through the PCB evenly unless you use a hand or finger to balance it all out. See the example below and then how I applied pressure with my thumb to ensure it is even.
> View attachment 2553934
> 
> 
> View attachment 2553935
> 
> 
> Step12:
> Secure the retention bracket, X pattern following 1 > 2 > 3 >4. Initially you want to do two full turns on each of the screws. Once each screw has had two full turns to ensure the retention bracket is secured, you can drop to one full turn, keeping the X pattern going. This is the most difficult part and if not done correctly, you may get bad temperatures and contact issues. Just be patient, take your time and do not overtighten the screws. As soon as you feel resistance stop. If one screw seems to require a lot more tightening than the others, stop and start again as something is wrong.
> View attachment 2553936
> 
> 
> Step13:
> Re-attach the shroud using the five screws. Apply pressure to the shroud area just before you put the screw in. It can be easy to strip the screw head and or screw hole if not done properly.
> View attachment 2553939
> 
> 
> Step14:
> Re-attach the PCI bracket screws.
> View attachment 2553938
> 
> 
> Step15:
> We are finished. Time to get on your knees and pray to the holy lord that the PC boots up. 😅
> View attachment 2553940
> 
> 
> It took me three attempts to get a perfect mount and I've changed thermal paste on these Toxics a dozen times already over a few samples.
> 
> Initially I was using SYY 157 thermal paste. It's very thick and does a decent job as a placeholder.
> *Here is the GPU edge/junction temperature running Days Gone at 400W load, 2750/2850Mhz using SYY 157:*
> View attachment 2553941
> 
> Ambient Temp: 19c
> Edge Temp: 56c
> Junction Temp: 80
> Delta: 24c
> Clock Frequency: 2747Mhz (throttling a little bit as GPU is hitting the 399W power limit)
> 
> Not bad, this was a lot better than the stock paste but still not ideal.
> 
> *Here is the GPU edge/junction temperature running Days Gone at 400W load, 2750/2850Mhz using Silver King Liquid Metal:
> View attachment 2553942
> *
> Ambient Temp: 20.5c
> Edge Temp: 59c
> Junction Temp: 70c
> Delta: 11c
> Clock Frequency: 2794Mhz (no throttling as the GPU is not hitting the 399W power limit)
> 
> Excellent results. Edge has crept up a little, but as I believe I improved my overall contact on the mount, my junction temp has dropped 10c. Pretty happy with that.
> 
> Hope this helps anyone thinking about changing paste on the Toxic. Just remember that this will almost certainly void your warranty and you could potentially damage the GPU so only do it if you are comfortable and agree to the risks.


As a follow up to this, today I decided to re-paste again with liquid metal (LM) due to the oxidation and obtained some even better results.









Ambient Temp: 19.5c
Edge Temp: 46c
Junction Temp: 62c
Delta: 16c
Clock Frequency: 2795Mhz (no throttling, power draw reduced further)
Fan speed has dropped significantly due to the lower overall temps, although I was running a fixed speed here similar to my last tests. 

Hopefully I won’t need to change paste again now.
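Summarising the edge-to-junction deltas across the three mounts reported in this and the quoted post (a quick sketch using the posted temps):

```python
# Edge and junction temps (C) from the three paste configurations above.
results = {
    "SYY 157 paste":       (56, 80),
    "liquid metal, run 1": (59, 70),
    "liquid metal, run 2": (46, 62),
}

for mount, (edge, junction) in results.items():
    print(f"{mount}: delta = {junction - edge} C")
```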


----------



## J7SC

Nizzen said:


> Must be stutter because of Ryzen cpu in FS2020?
> Last time I tried fs2020 it was way smoother on my intel 12900k plattform vs my Amd 5950x. 3090 gpu on both.


...could be something else - FS2020 is buttery-smooth on my 5950X / 3090 Strix combo. I find that RAM timing settings can play a role.


----------



## Alexxxx#€

Hello, I think I messed up badly :-( I wanted to flash my 6900 XT to XTXH, but the frequencies didn't work for me, so I went back to the stock ROM... well, now SAM doesn't work. Does anyone know what's going on? I can't get it to work.


----------



## PanZwu

You can't possibly flash an XT to an XTXH.


----------



## deadfelllow

Alexxxx#€ said:


> Hello, I think I messed up badly :-( I wanted to flash my 6900 XT to XTXH, but the frequencies didn't work for me, so I went back to the stock ROM... well, now SAM doesn't work. Does anyone know what's going on? I can't get it to work.


Check the BIOS for the PCI subsystem settings. Maybe your mobo BIOS reset itself.


----------



## rodac

LtMatt said:


> As a follow up to this, today I decided to re-paste again with liquid metal (LM) due to the oxidation and obtained some even better results.
> 
> Ambient Temp: 19.5c
> Edge Temp: 46c
> Junction Temp: 62c
> Delta: 16c
> Clock Frequency: 2795Mhz (no throttling, power draw reduced further)
> Fan speed has dropped significantly due to the lower overall temps, although I was running a fixed speed here similar to my last tests.
> 
> Hopefully I won’t need to change paste again now.


I hope I will not have to redo mine that soon


----------



## Alexxxx#€

deadfelllow said:


> Check the BIOS for the PCI subsystem settings. Maybe your mobo BIOS reset itself.


This was checked by me.


----------



## Alexxxx#€

PanZwu said:


> you cant possibly flash a xt to a xtxh


Yes, I force-flashed it via Linux.


----------



## Alexxxx#€

Does anyone know why the sam has disappeared?


----------



## CS9K

J7SC said:


> ...could be s.th. else - FS2020 is buttery-smooth on my 5950X / 3090 Strix combo. I find that RAM timing settings can play a role.


This, @Nizzen. There are SO many variables that can cause FS2020 to go MainThread Limited. I've seen nearly-identical systems, on both Intel's and AMD's sides, perform vastly differently.

Why? Nobody friggin' knows, and that's part of why FS2020 is love-it-or-hate-it. When it works, it is _awesome_. When it doesn't work? Absolute crap.


----------



## deadfelllow

Hello guys,

I asked Sapphire to tell me the screw types for the 6900 XT EE and they sent me this, but I don't even understand it  Will you help me please?

I need the screws from E440_GPU screws spec-2.JPG.


----------



## LtMatt

deadfelllow said:


> Guys hello,
> 
> I asked sapphire to tell me screw types of 6900xt EE and they send me this but i dont even understand  Will you help me please?
> 
> 
> I need E440_GPU screws spec-2.JPG


Wow, I'm amazed they even gave you this. Good on them!

I need two replacement screws for the shroud, and it appears to be this one... I think.








Actually finding it is easier said than done though.


----------



## alceryes

Some of these look to be E440 -

LENOVO THINKPAD E440 SCREW FOR REPAIR SCREWS AND BRACKESTS LAPTOP BOLT BOLTS | eBay
www.ebay.com


----------



## deadfelllow

alceryes said:


> Some of these look to be E440 -
> 
> LENOVO THINKPAD E440 SCREW FOR REPAIR SCREWS AND BRACKESTS LAPTOP BOLT BOLTS | eBay
> www.ebay.com


Those are for the backplate. I need the retention bracket screws :/ (image below).


----------



## deadfelllow

Does anyone know this language?


----------



## LtMatt

CS9K said:


> This, @Nizzen. There are SO many variables that can cause FS2020 to go MainThread Limited. I've seen nearly-identical systems on both Intel and AMD's sides, perform vastly different.
> 
> Why? Nobody friggin' knows, and that's part of why FS2020 is love it or hate it. When it works: It is _awesome_. When it doesn't work? absolute crap


I am definitely getting a 5800X3D; some of the performance gains are incredible in the games I play, and it's faster than the 12900KS.

Thought you might like this too as you mentioned MSFS.


----------



## alceryes

LtMatt said:


> I am definitely getting a 5800X3D, some of the performance gains are incredible in the games I play and faster than the KS.
> 
> Thought you might like this too as you mentioned MSFS.
> View attachment 2556135


That's a huge difference.

I think MSFS may be the outlier here. Most other reviews have it sitting at ~5-10% higher performance than the 12900k. I'm glad AMD has taken back the gaming crown, for now at least. I just wish AM5 was much closer. [sigh]


----------



## damric

After about a year and a half of trying, I finally scored a card from AMD.com. I was browsing the web on my phone and, since it was Thursday, I figured I would drop by the AMD store just for curiosity's sake. It had me sign up for the queue and said over an hour, so I was like, welp, lost again, but then I got a phone call. Twenty minutes later the call was over, and as I hung up, my phone went DING. I was like, huh, what? I looked at it and it said I was next in line. OK. Only the 6900 XT available. OK.


----------



## CS9K

LtMatt said:


> I am definitely getting a 5800X3D, some of the performance gains are incredible in the games I play and faster than the KS.
> 
> Thought you might like this too as you mentioned MSFS.
> View attachment 2556135


I did see these results this morning. Crazy how the cache takes a bit of pressure off of the MainThread bottleneck.

In DX11, that result is _fantastically impressive_. I don't think the 5800X3D will have as large a lead over other modern CPUs once DX12 optimizations are in place (assuming ASOBO optimizes DX12 properly).

Even if ASOBO completely mess up DX12's implementation, at least the 5800X3D improves DX11 performance!

And to think, the 5800X3D is an existing product with a few layers of 3D cache held on with superglue and duct tape. Zen 4... that is going to be something _special_.


----------



## J7SC

CS9K said:


> I did see these results this morning. Crazy how the cache takes a bit of pressure off of the MainThread bottleneck a bit.
> 
> Which in DX11, that result is _fantastically impressive_, I don't think the 5800x3d will have as-large of a lead over other modern CPU's, once DX12 optimizations are in place (assuming ASOBO optimize DX12 properly).
> 
> Even if ASOBO completely mess up DX12's implementation, at least the 5800x3d improves DX11 performance!
> 
> And to think, the 5800x3d is an existing product with a few layers of 3d cache held on with superglue and duct tape. Zen 4... that's going to be something _special_


What's your connection speed ? I ask because that can make a difference re. jerkiness and often appears as a 'main thread' issue with FS2020....

I happened to have a couple of really nice long FS2020 sessions on the weekend, both on DX11 (via 2x 2080 Ti water-cooled in SLI-CFR / 55-inch LG IPS HDR) and DX12 (1x 3090 water-cooled / 48-inch C1 OLED). Very few performance issues on either at 4K with everything totally maxed apart from lens flare and motion blur... The 3090 had an effective speed of just under 2200 MHz, along with the 5950X with effective main cores at 5040 MHz. Stunningly beautiful and buttery-smooth FS2020 for over 1.5 hrs...

Things had improved dramatically with both my FS2020 installs when I went from a 250 Mbps connection to 1 Gbps (symmetric up/down)...


----------



## CS9K

J7SC said:


> What's your connection speed ? I ask because that can make a difference re. jerkiness and often appears as a 'main thread' issue with FS2020....
> 
> I happened to have a couple of real nice long FS2020 sessions on the weekend, both on DX11 (via 2x 2080 Ti w-cooled in SLI-CFR / 55 inch LG IPS HDR) and DX12 (1x 3090 w-cooled / 48 inch C1 OLED). Very few performance issues on either with 4K and everything totally maxed apart from lens flare and motion blur...The 3090 had an effective speed of just under 2200 MHz, along with the 5950X w/ effective main cores at 5040 MHz). Stunningly beautiful and buttery smooth FS2020 for over 1.5 hrs...
> 
> Things had improved dramatically with both my FS2020 installs when I went to a 1 Gbps (symmetric up, down) connection from a 250 Mbps...


AT&T Fiber on the 1G/1G package. We're in an area that can get up to 5G/5G (XGS-PON w/ BGW320 gateway), and I'm probably the only person in my neighbourhood who can ingest and route that to a client (5GBase-T to my pfSense router, and 10G fiber to my desktop). The internet is good: no jitter, no latency spikes, no packet loss.

Some days are better than others, but people have surmised that it's a server-side issue, not a client nor ISP issue, which I would agree with since my connection is good even on the "bad" FS2020 days.

Likewise, even with photogrammetry off, FS2020 is still MainThread Logjammed in some scenarios, even at 4K.


----------



## J7SC

CS9K said:


> AT&T Fiber on the 1G/1G package. We're in an area that can get up to 5G/5G (XGS-PON w/BGW320 gateway), and I'm probably the only person in my neighbourhood that can ingest and route that to a client (5Gbase-T to my PFSense router, and 10G fiber to my desktop); internet is good, no jitter no latency spikes no packet loss.
> 
> Some days are better than others, but people have surmised that it's a server-side issue, not a client nor ISP issue, which I would agree with since my connection is good even on the "bad" FS2020 days.
> 
> Likewise, even with photogrammetry off, FS2020 is still MainThread Logjammed in some scenarios, even at 4K.


...well, that's not it in your case, then. Before I could play last weekend, I had to download yet another (x2) obligatory big FS2020 patch (you know how it is). The highest I saw on the FS2020 speed bar during that download was 450 Mbps; not bad, really, for their server side.


----------



## CS9K

J7SC said:


> ...well, that's not it then in your case. Before I could play last weekend, I had to download yet another (x2  ) obligatory big FS2020 patch (you know how it is). The highest I saw on the FS2020 speed bar during that download was 450 Mbps - not bad, really, for their server side.


Lucky, I usually see 250-300 on the in-game downloads.

Suffice it to say, ASOBO, the little corner of Azure that hosts FS2020 data, and FS2020 itself, all have their... Quirks and Features, as Doug DeMuro would say.

More quirks than features some days


----------



## J7SC

CS9K said:


> Lucky, I usually see 250-300 on the in-game downloads.
> 
> Suffice it to say, ASOBO, the little corner of Azure that hosts FS2020 data, and FS2020 itself, all have their... Quirks and Features, as Doug DeMuro would say.
> 
> More quirks than features some days


Almost time to watch Twin Peaks reruns on TV - but yeah, 'quirks and features' in FS2020...then again, I only run FS2020 on my NVidia cards, so I don't have any comparative basis for my 6900XT. 

As to DX11 vs DX12, the one thing I did notice between DX11 and DX12 versions is that the clouds are generated very differently...


----------



## CS9K

J7SC said:


> Almost time to watch Twin Peaks reruns on TV - but yeah, 'quirks and features' in FS2020...then again, I only run FS2020 on my NVidia cards, so I don't have any comparative basis for my 6900XT.
> 
> As to DX11 vs DX12, the one thing I did notice between DX11 and DX12 versions is that the clouds are generated very differently...


DX12 in its current state has memory allocation/asset handling "issues". They manifest more readily on AMD GPUs, but almost everyone who has run DX12 mode for more than a few minutes has run into the mainthread-crushing un-optimization of DX12's current state.

That said, ASOBO never advertised DX12 to work, only that it existed (which is consistent with DX12 mode being DX11 wearing a jacket with DX12 written on the front).

ASOBO _are_ working on DX12 optimizations, but the earliest we _could_ see them is in the SU10 beta... IF it doesn't get pushed, which is a possibility, as stated in a dev livestream early this year.


----------



## J7SC

CS9K said:


> DX12 in its current state has memory allocation/asset handling "issues". They're more up-front about manifesting on AMD GPU's, but almost every person that has run DX12 mode for more than a few minutes has run into the mainthread-crushing un-optimization of DX12's current state.
> 
> That said, ASOBO never advertised DX12 to work, only that it existed (which is consistent with DX12 mode being DX11 wearing a jacket with DX12 written on the front).
> 
> ASOBO _are_ working to get DX12 optimizations going, but the earliest we _could_ see that is in the SU10 beta... IF it doesn't get pushed, which is a possibility as-stated in a dev livestream early this year.


...to tell you the truth, even DX11 was (is) pretty spiffy in the visual department, even in how it handles clouds, though I would love to see a nice, real ray-tracing addition in the final DX12. Anyway, per below: 2080 Tis in SLI-CFR ("checkerboard") with some pretty greedy VRAM allocation at 4K.

I'm tempted to install FS2020 on my 3950X/6900XT, but then I have to update the oh-so-many-ultra-large patches FS2020 is known for in triplicate 🤢.


----------



## rodac

Spent some time tonight looking into MPT (MorePowerTool).
The default BIOS power settings for the Sapphire 6900 XT Toxic Extreme look like this:

It looks like the most tweaked settings are Power Limit (W) and TDC Limits (A).
Power Limit (W) 332 x 1.15 (the max +15% in Wattman) = 381.8 W, which is indeed the max power consumption I have seen logged in TechPowerUp.
It looks as if changing only that would be good enough, but I'm quite unsure.

What kind of settings can reasonably be tried so as not to push the thermal limits too far?

It also looks like disabling DS_GFXCLK prevents clock speeds from decreasing too quickly.
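That back-of-the-envelope check is easy to script. A minimal sketch, assuming the 332 W base and +15% Wattman slider from the Toxic Extreme BIOS above (swap in your own card's values):

```python
# Effective board power limit = MPT "Power Limit GPU (W)" x (1 + Wattman slider).
# 332 W and +15% are the Sapphire 6900 XT Toxic Extreme figures quoted above.
mpt_power_limit_w = 332      # base limit read from MorePowerTool
wattman_offset = 0.15        # the +15% power slider in AMD Software

effective_limit_w = mpt_power_limit_w * (1 + wattman_offset)
print(f"Effective power limit: {effective_limit_w:.1f} W")  # 381.8 W
```

Which lines up with the ~382 W peak board power logged in TechPowerUp.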


----------



## LtMatt

rodac said:


> Spent some time tonight, looking into MPT More Power Tools.
> The default BIOS power settings for the Sapphire 6900XT toxic extreme look like this
> 
> View attachment 2556381
> 
> 
> It looks like the most tweaked settings are
> Power Limit(W) and TDC Limits(A)
> It looks like Power Limit(W) 332 X 1.15 (the max 15% in Wattman) = 381.8 which indeed the max Watt consumption I have seen logged in TechPowerUp.
> It looks as if changing that only would be good enough but quite unsure.
> 
> What kind of safe settings can be reasonably tried no to push the thermal limits too far ?
> 
> It looks like DS_GFXCLK when disabled prevents clocks speed from decreasing too quickly.


Since you have a Toxic: I use a 347 W power limit in MPT, plus 15% in AMD Software = 399 W power limit.

This is enough for 99% of games without hitting the power limit in my experience at 4k max settings.

A few examples from various games:


Spoiler

Going higher on the power limit for 24/7 is possible, but not really necessary unless you start going over 1.2v. Plus the radiator can comfortably handle 400W and lower at reasonable to quiet fan speeds. Above that you would have to ramp the fans up well over 50%.


----------



## rodac

LtMatt said:


> As you have a Toxic, I use 347W power limit in MPT +15% in AMD Software = 399W power limit.
> 
> This is enough for 99% of games without hitting the power limit in my experience at 4k max settings.
> 
> A few examples from various games:
> 
> 
> Spoiler
> 
> Going higher on the power limit for 24/7 is possible, but not really necessary unless you start going over 1.2v. Plus the radiator can comfortably handle 400W and lower at reasonable to quiet fan speeds. Above that you would have to ramp the fans up well over 50%.


@LtMatt, I did not know you had that great channel on YouTube; great videos going all the way back 8 years, wow.
Thanks for confirming this value of 347; that looks reasonable.

So we are talking about the Power Limit W: from 332, which is already more than stock 6900s, we take it up to 347.

I have seen others like @weleh set the same figure for both 'PowerLimit W' and 'TDC Limits GFX', and then raise the SoC from 55 to something like 70. He posted a handy spreadsheet showing the values he tested with.

Here it is (credit to @weleh, thanks for that useful spreadsheet posted back in 2021 showing the MPT steps; the image below is an extract).

Here is his original post link:

[Official] AMD Radeon RX 6900 XT Owner's Club (www.overclock.net)

The truth is that I do not have a real clue what those other values are for.
I want to keep a conservative approach and stay within safe limits, but at the same time I want to use what is available, since this is supposed to be a better-binned GPU. I could already reach 2800 for sure using your previous recommendations, but when I set it to 2806, 3DMark failed and stopped running, showing it had become unstable.


----------



## LtMatt

@CS9K Thought this might interest you also.


----------



## rodac

I found this tutorial in German; the values it tested suggest the default TDC values are OK, so I could keep 320/55.
So it looks like you can indeed get away with changing only the 'PowerLimit W'.









[Guide] - Navi 21 Max Overclocking Tutorial [6800 XT / 69X0 XT] (www.hardwareluxx.de)

<<Very roughly it fits though, e.g.: 1.150 V * 320 A = 368 W = 320 W + 15%>>
It looks like the GFX values are in amps (A), and SoC refers to the System-on-Chip domain of the GPU, not 'state of charge'.
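The Luxx rule of thumb quoted above (peak GFX power roughly equals max voltage times the GFX TDC limit, which should land near the power limit plus the +15% slider headroom) is easy to verify numerically. The 1.150 V / 320 A / 320 W figures are the guide's examples, not recommended settings:

```python
# Luxx rule of thumb: Vmax x TDC Limit GFX ~= Power Limit GPU x 1.15,
# i.e. the current limit and the slider headroom are sized to match.
# All three numbers are the guide's example figures, not recommendations.
v_max = 1.150          # volts
tdc_gfx_a = 320        # amps, TDC Limit GFX
power_limit_w = 320    # watts, Power Limit GPU

gfx_power_w = v_max * tdc_gfx_a            # ~368 W
slider_headroom_w = power_limit_w * 1.15   # ~368 W
print(round(gfx_power_w, 1), round(slider_headroom_w, 1))  # 368.0 368.0
```

So on that card, the GFX TDC only becomes the binding limit once the voltage runs above ~1.15 V.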


----------



## rodac

@LtMatt, cool, that MPT tune worked: reached 400 W, Time Spy 23505 and Time Spy Extreme 11352, average frequency 2625.
Now I need to tune my CPU.


----------



## CS9K

rodac said:


> I found this tutorial in German, some values tested suggest that the defaults for TDCs values are OK, so I could keep 320/55.
> So it looks like you can get away only changing the 'PowerLimit W' indeed.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Guide] - Navi 21 Max Overclocking Tutorial [6800 XT / 69X0 XT] (www.hardwareluxx.de)
> 
> View attachment 2556554
> 
> 
> <<Very roughly it fits though, e.g.: 1.150 V * 320 A = 368 W = 320 W + 15%>>
> It looks like the GFX values are in amps (A), and SoC refers to the System-on-Chip, not 'state of charge'.


I did! Some of my thoughts are posted on the last page (336): [Official] AMD Radeon RX 6900 XT Owner's Club


----------



## CS9K

rodac said:


> @LtMatt , I did not know you had that great channel on Youtube, great videos, all the way back to 8 years ago, ouaaa.
> Thanks for confirming this value of 347, that looks reasonable.
> 
> So we are talking about the PowerLimit W, so from 332 which is already more than stock 6900s, we take this up to 347.
> 
> I have seen others like @weleh make those figures the same both for 'PowerLimit W' as well as 'TDC Limits GFX' and then up the Soc from 55 to something like 70. He posted a handy spreadsheet showing the values he tested with.
> 
> Here it is (credit to @weleh , thanks for that useful spreadsheet posted back in 2021 showing the PMT steps, this image is an extract below)
> 
> Here is his original post link:
> 
> [Official] AMD Radeon RX 6900 XT Owner's Club (www.overclock.net)
> 
> View attachment 2556553
> 
> 
> 
> 
> The truth is that I do not have a real clue what those other values are here for.
> I want to keep a conservative approach to keep this within safe limits but at the same time, I want to use up what is available since it is supposed to be a better binned GPU, I could already reach 2800 for sure using your previous recommendations, but when I set it to 2806, 3DMark failed and stopped running showing that it became unstable.


Regarding some of the changes Weleh suggested back in June of 2021, we know more now than we did then, and things have changed a little since then with drivers.


Setting Memory Timing Control to "2" has not been proven to do anything
The issues that I personally had with Deep Sleep don't manifest for me in Windows 11 (thank goodness WDDM 3.0 fixed it)
DF_CSTATE and Min SoC clocks don't need to be modified for daily-driver tunes
dclk, vclk, minimum fclk, and FclkBoostFreq likewise all don't do anything relevant to GPU tuning


I suppose I could have approached the above with "The things that _are_ relevant to daily-driver tuning are as follows:"

Power Limit GPU (W)
TDC Limit GFX (A)
TDC Limit SoC (A)
Maximum fclk frequency
Deep Sleep, if you're on Windows 10
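
If it helps to keep those knobs in one place, here's how that short list might look as a notes file. Every number below is a placeholder I made up for illustration, not a recommendation; start from your own card's BIOS defaults in MPT:

```python
# Hypothetical daily-driver MPT checklist for a Navi 21 card.
# The values are illustrative placeholders only -- read your card's
# BIOS defaults in MorePowerTool and raise them in small steps.
daily_tune = {
    "Power Limit GPU (W)": 347,    # the main knob: board power cap
    "TDC Limit GFX (A)": 320,      # current limit for the GPU core domain
    "TDC Limit SoC (A)": 55,       # current limit for the SoC domain
    "Maximum fclk (MHz)": 1940,    # fabric clock ceiling
    "Disable Deep Sleep": True,    # only relevant on Windows 10
}
for setting, value in daily_tune.items():
    print(f"{setting}: {value}")
```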


----------



## J7SC

LtMatt said:


> @CS9K Thought this might interest you also.
> View attachment 2556552


...clearly, the 5800X3D seems to be the proc to have for less-than-perfectly optimized software, due to its big cache. I also wonder how Resizable BAR / Smart Access Memory (on/off) would impact the 5800X3D with a 3090 and 6900XT.

Anyway, FS2020 has some quirks which the 5800X3D's cache clearly manages to mask, making it the best choice with apps like FS2020. Below is an early screenie excerpt from a 4K max setup with a TR 2950X and dual 2080 Ti. The interesting bit in this context is the top left bit, showing just how hard one CPU core gets hammered (btw, proc happened to be set on all-core 4250 MHz which makes little sense for FS2020; max single core 4.5GHz would have been better). The TR2950X has a decent cache, quad channel RAM (hitting over 100 GB/s memory reads) and the option to adjust between NUMA and UMA ram modes and those options clearly benefit apps like FS2020 to this day.

I realize that this is the 6900XT thread, but getting the right overall system spec for BigNavi with its 16 GB of VRAM is key, IMO.


----------



## CS9K

J7SC said:


> ...clearly, the 5800X3D seems to be the proc to have for less-than-perfectly optimized systems, due to its big cache. I wonder also how resizable_BAR / Smart Access Memory would impact (on/off) with the 5800X3D for 3090 and 6900XT.
> 
> Anyway, FS2020 has some quirks which the 5800X3D's cache clearly manages to mask, making it the best choice with apps like FS2020. Below is an early screenie excerpt from a 4K max setup with a TR 2950X and dual 2080 Ti. The interesting bit in this context is the top left bit, showing just how hard one CPU core gets hammered (btw, proc happened to be set on all-core 4250 MHz which makes little sense for FS2020; max single core 4.5GHz would have been better). The TR2950X has a decent cache, quad channel RAM (hitting over 100 GB/s memory reads) and the option to adjust between NUMA and UMA ram modes and those options clearly benefit apps like FS2020 to this day.
> 
> I realize that this is the 6900XT thread, but getting the right overall system spec for BigNavi with its 16 GB of VRAM is key, IMO.
> View attachment 2556619


Aye, it is VERY interesting how well the 5800x3d does with FS2020 in DX11 mode.

DirectX 11 was _not_ designed for what ASOBO/FS2020 are asking of it, and to be honest, I am amazed that FS2020 works _at all_ given how complex it is. The poor MainThread has SO MUCH to do, and the (lack of) performance is by no means the fault of any modern processor; it is just a coincidence that the giant cache in the 5800X3D can keep data moving along as well as it does. Hey, I will take performance improvements wherever they may come from!

That said, I am SO READY for DX12 optimizations to happen! <3


----------



## Trevbev

I'm having a problem where if I apply an overclock to the memory (even only 2 MHz) I get a black screen and have to press the reset button. When it reboots, Wattman has restored all default settings.
I thought this had gone away with a new driver update, but it's back again. I just tried using DDU and reinstalling the drivers (22.3.2), but it didn't help.
Has anyone else experienced this? Maybe a driver update will fix it?


----------



## 99belle99

Trevbev said:


> I'm having a problem where if I apply an overclock to the memory (even only 2MHz) I get a black screen and I have to press the reset button. When it reboots wattman has restored all default settings.
> I thought this had gone away with a new driver update but it's back again. I just tried using DDU and reinstalling the drivers - 22.3.2 but it didn't help.
> Anyone else experienced this? Maybe a driver update will fix it?


That's a weird one. It doesn't happen to me.


----------



## rodac

Trevbev said:


> I'm having a problem where if I apply an overclock to the memory (even only 2MHz) I get a black screen and I have to press the reset button. When it reboots wattman has restored all default settings.
> I thought this had gone away with a new driver update but it's back again. I just tried using DDU and reinstalling the drivers - 22.3.2 but it didn't help.
> Anyone else experienced this? Maybe a driver update will fix it?


Not seen this exact issue, but at one point, when I blindly tested some GPU clock settings outside the working range, the screen turned green and, like you said, only a restart with a reset of the Wattman settings let me recover. So you must have hit a limit, or else there is a conflicting setting from another tuning tool; I am not sure, as I have not experienced the exact same issue. You could always post more specific details; maybe someone will know.


----------



## rodac

CS9K said:


> Regarding some of the changes Weleh suggested back in June of 2021, we know more now than we did then, and things have changed a little since then with drivers.
> 
> 
> Setting Memory Timing Control to "2" has not been proven to do anything
> The issues that I personally had with Deep Sleep don't manifest for me in Windows 11 (thank goodness WDDM 3.0 fixed it)
> DF_CSTATE and Min SoC clocks don't need to be modified for daily-driver tunes
> dclk, vclk, minimum fclk, and FclkBoostFreq likewise all don't do anything relevant to GPU tuning
> 
> 
> I suppose I could have approached the above with "The things that _are_ relevant to daily-driver tuning are as follows:"
> 
> Power Limit GPU (W)
> TDC Limit GFX (A)
> TDC Limit SoC (A)
> Maximum fclk frequency
> Deep Sleep, if you're on Windows 10


That is very useful @CS9K, thanks. So these are all of the variables that can be changed:



Code:


Power Limit GPU (W)
TDC Limit GFX (A)
TDC Limit SoC (A)
Maximum fclk frequency
Deep Sleep, if you're on Windows 10

OK, so far I have only changed the "Power Limit GPU (W)", and that does indeed provide more performance, as confirmed by my 3DMark scores.
So happy with those scores; not sure at this stage if I will want to take this further, but we all say that and then we do ;-)
For now I think I need to work out how to tune the CPU (5950X). In fact I am getting worse scores than I used to, and I just re-pasted both GPU and CPU, so it is not a thermal limit. I get 2000 points less in the Time Spy CPU score; no clue what is going on.
I tried the auto-tuning in Wattman (AMD software) and that did not help; then this afternoon I enabled 'Precision Boost Overdrive' in the BIOS, but that did not help either when running 3DMark.


----------



## 2080tiowner

Hi,

Is it still not possible to flash a vBIOS on Windows?

If not, can it only be done under Linux or with a vBIOS programmer?

Do you have a tutorial for doing this?

Is it possible to flash a reference 6900 XT with the 6900 XT Liquid Cooled vBIOS to obtain a higher VRAM frequency?

Thank you for your help!!!


----------



## deadfelllow

2080tiowner said:


> Hi,
> 
> it's always not possible to flash vbios on windows ?
> 
> If no, only under linux or with a vbios programmer ?
> 
> Have you got a tutorial to do this ?
> 
> It's possible to flash 6900xt Founder with 6900xt liquid cooled vbios to obtain more frequency on vram ?
> 
> Thanks you for your help !!!


Hello,

On AMD cards you can't flash a vBIOS through Windows. You either flash with Linux or HiveOS (HiveOS is much easier than Linux).

And some people were able to run the LC BIOS even though they had an XTX card (I have no idea about the reference 6900 XT).


----------



## 2080tiowner

deadfelllow said:


> Hello,
> 
> On AMD cards you cant flash vbios through Windows. Either you flash with linux or Hiveos. (Hiveos is much easier than linux)
> 
> And some people were able to run lc bios even though they had xtx card.( have no idea about reference 6900xt)


Thank you for your answer; can you tell me how to flash a vBIOS with HiveOS?

Regards !


----------



## CS9K

2080tiowner said:


> Thanks you for your answer, can you say me how to flash vbios with hive os ?
> 
> Regards !


I would recommend not attempting to flash the BIOS on any "reference" RDNA2 GPU. It is still very experimental, and if you mess it up, the only way to recover is with an external EEPROM programmer.

The only GPUs that can easily be flashed are RX 6900 XT "XTXH" GPUs, and even then, there are risks and quirks that one has to live with.

Also, what a waste of silicon to be mining with RDNA2 GPU's...


----------



## 99belle99

Just to add to the above post: if the GPU has a dual BIOS, you are fine, as you can force-flash it by booting with the good BIOS and flicking the switch over to the bad BIOS just before flashing.


----------



## Trevbev

rodac said:


> Not seen this but clearly, at one point in time when I blindly tested some settings with the GPU clock frequency that was not within the working range, the screen turned green and like you said only a restart with reset of the Wattman settings could get me to recover. So you must have hit a limit or else there is another conflicting setting from another tuning software, I am not sure as I have not experienced the exact same issue. You can always post more specific details, maybe someone will know.


The first time I noticed it was after using MPT. I deleted SPPT but it was still happening so I used DDU and updated the drivers and that fixed it for a while. 
I'm thinking I might need to do a fresh windows install to fix it but I don't really want to do that at the moment. 
I think it's unlikely that it's hardware but I don't have another machine to test it. I tried the second bios and it happened again.


----------



## 2080tiowner

CS9K said:


> I would recommend not attempting to flash the bios on any "reference" RDNA2 GPU. It is still very experimental, and if you mess it up, the only way to recover is to use an external EEPROM programmer.
> 
> The only GPU's that can easily be flashed, are RX 6900 XT "XTXH" GPU's, and even then, there are risks and quirks that one has to live with.
> 
> Also, what a waste of silicon to be mining with RDNA2 GPU's...


Has nobody tried to do it on a reference 6900 XT?

Is there a way to change VRAM timings and bypass the 2150 MHz max without flashing the board?

Thanks for your help !!!


----------



## 2080tiowner

99belle99 said:


> Just to add to above post if the GPU has a dual bios you are fine as you can force flash it by booting with good bios and flicking the switch just before flashing to the bad bios.


Good idea and safe method !


----------



## CS9K

2080tiowner said:


> Nobody have tried to do it on a reference 6900xt ?
> 
> Is there a solution to change vram timing and bypass 2150 mhz max without flashing the board ?
> 
> Thanks for your help !!!


Someone tried far back in this thread, and I think also over on hardwareluxx.de. They were not successful.

And no, even if you _can_ flash a bios, nobody has found a way to unlock the RDNA2 bios for modification yet.

To add some context:

RDNA1's bios was unlocked in mid 2020; it could be flashed to most RX 5600 XT and greater models, and allowed for effectively unlimited modification of parameters, including memory timings
A great number of BIOS checks have been added to the drivers _and_ the firmware of RDNA2, which has made finding a workaround much more difficult.
No RDNA2 GPU's can have other bios files flashed to them successfully. There have been a few attempts, but no working GPU's
Except for RX 6900 XT "XTXH" models; a _FEW_ of these models can successfully take a bios flash from other RX 6900 XT "XTXH" cards' bios files. Some work, some don't; the bios file can not be modified still, _only_ cross-flashed between a _few_ models of RX 6900 XT "XTXH"

I hope things make more sense, @2080tiowner 🧡


----------



## alceryes

CS9K said:


> I would recommend not attempting to flash the bios on any "reference" RDNA2 GPU. It is still very experimental, and if you mess it up, the only way to recover is to use an external EEPROM programmer.
> 
> The only GPU's that can easily be flashed, are RX 6900 XT "XTXH" GPU's, and even then, there are risks and quirks that one has to live with.
> 
> Also, what a waste of silicon to be mining with RDNA2 GPU's...


Seconded.
You can very easily turn your $1000 GPU into a paperweight.


----------



## LtMatt

Anyone know if the 6900 XTXH LC (MBA) version has 0RPM mode showing in AMD Software? I'm sure it does, but would appreciate confirmation.


----------



## crastopher

Is there any good information on undervolting these cards? I just got an RX-69XTACSD9 (Merc 319) and want a safe undervolt for my SFF PC so I can get lower temps.


----------



## deadfelllow

crastopher said:


> Is there any good information on undervolting these cards? I just got a RX-69XTACSD9 (Merc 319) and wanting a safe undervolt for my SFFPC so I can get lower temps.


You need to try it; every piece of silicon is different. You can try core clock [email protected] Volts and see if things are working properly (FurMark / 3DMark Time Spy, etc.).


----------



## crastopher

deadfelllow said:


> You need to try it. Every silicon is different. You can try core clock [email protected] Volts and see if things are working properly. ( furmark / 3dmark timespy etc)


Should I just be using the Radeon software? Right now I have it set to [email protected] and it seems pretty stable, although I haven't ran timespy's.


----------



## deadfelllow

crastopher said:


> Should I just be using the Radeon software? Right now I have it set to [email protected] and it seems pretty stable, although I haven't ran timespy's.


Yes, just use the Radeon software. Don't use Afterburner or other third-party software. Lower the voltage as long as performance doesn't change.


----------



## deadfelllow

What are your temps, btw? Can you send a screenshot while stress testing (FurMark)?


----------



## crastopher

deadfelllow said:


> What is your temps btw? Can you send a screenshot while stress testing(furmark).


Idle:

A standard FurMark run (which I guess only runs for a minute?):


----------



## deadfelllow

crastopher said:


> Idle:
> 
> View attachment 2556950
> 
> 
> A standard Furmark (which I guess runs for only a minute?):
> View attachment 2556951


I can't see junction temps in FurMark. I suppose 62C is the GPU temp, not the junction. Can you send a screenshot of the HWiNFO GPU section?


----------



## crastopher

deadfelllow said:


> I cant see junction temps on furmark. I suppose 62c is gpu temp not junction. Can you send a screenshot of hwinfo gpu section?
> 
> View attachment 2556953


Here it is with the default 4k preset in furmark.


----------



## Kyleboi78

Hello,
I have been using More Power Tool recently to try to get a better score in 3DMark. But I booted my PC up today and it said my version of the AMD drivers is out of date. So I went to install the drivers again and it errors out at 83% with general error 1603. But when I install the minimal driver, it installs correctly. Has anyone else had this issue, and if so, how did you fix it? I think it has something to do with installing the Adrenalin software that allows you to overclock and so on.


----------



## rodac

Kyleboi78 said:


> Hello
> I have been using more power tools recently to trying and get a better score on 3dmark. But I booted my pc up today and it says my version of amd drivers is out of date. So I go to install the drivers again and its errors out at %83 saying general error 1603. But when I install the minimal driver it is fine and installs correctly. Has anyone else had this issue if so how did you fix it ?. I think it is something to do with installing the adrenalin software that allows you to overclock and stuff.





https://www.amd.com/en/support/kb/faq/gpu-kb1603


*Possible Causes*
Error 1603 can be caused by one or a combination of the following problems: 

Graphics driver and software conflicts
False positives reported and blocked by antivirus software
Missing critical and/or important Windows updates
Corrupted Microsoft Visual C++ Redistributable files
Corrupted registry keys and/or system files


----------



## Kyleboi78

rodac said:


> https://www.amd.com/en/support/kb/faq/gpu-kb1603
> 
> 
> *Possible Causes*
> Error 1603 can be caused by one or a combination of the following problems:
> 
> Graphics driver and software conflicts
> False positives reported and blocked by antivirus software
> Missing critical and/or important Windows updates
> Corrupted Microsoft Visual C++ Redistributable files
> Corrupted registry keys and/or system files


Yeah, I did all that. It turns out it was the fact that More Power Tool was installed, so I just uninstalled it and reinstalled it afterwards.


----------



## deadfelllow

crastopher said:


> Here it is with the default 4k preset in furmark.
> View attachment 2556971


That looks good for an air-cooled card.

Almost [email protected] hotspot.


----------



## crastopher

deadfelllow said:


> It looks good for air cooling card.
> 
> Almost [email protected] hotspot.


It's in an NR200P. Should I be running FurMark longer than a minute? Wondering what values I should push down for the undervolt.


----------



## Kyleboi78

Finally took the top spot again after having it taken away by a guy I've been fighting for ages lol. New PB.


----------



## deadfelllow

crastopher said:


> It's in an NR200p. Should I be running furmark longer than a minute? Wondering what values I should push down for the undervolt.


Do it for like 10 mins and send the results afterwards.


----------



## CS9K

deadfelllow said:


> Do it for like 10mins. And send the results afterwards


Do not run FurMark at all, @crastopher. That program does not load modern graphics cards in the way one would want in order to test an undervolt.

The best programs, in my opinion, are 3DMark Time Spy and Unigine Heaven, set to the exact settings seen below:









And I mean _exactly_ these settings. These settings will push your GPU core clock as hard as it can go, _usually_ without hitting voltage/power limits first. Test with Heaven or Time Spy until you crash, then raise the voltage until you can pass the 20-minute Time Spy stability test, or 30 minutes of Heaven at the _exact_ settings above. Once you can, back your undervolt off by another 10mV and you should be absolutely stable from that point forward.

For context: the "Voltage" slider does not directly control the voltage the card receives; instead, it is an _offset_ to the _entire voltage/MHz curve_. Below 2600MHz, whatever amount you set the Voltage slider below its maximum offsets the GPU's entire curve by that many mV.

Make sure your core clock is not set above 2600MHz, because once it is set above that level, the GPU takes control of voltage and the value you input on the "Voltage" slider is ignored.
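A rough way to picture that offset behaviour - a sketch only, with made-up curve points, since the real V/F curve is fused per card and these numbers are not AMD's:

```python
# Illustrative sketch (not AMD's actual implementation) of how the Radeon
# "Voltage" slider behaves as an offset to the whole voltage/frequency curve.
# The curve points and the 1175mV slider maximum are made-up example values.

STOCK_MAX_MV = 1175   # slider maximum (example value)
CUTOFF_MHZ = 2600     # above this clock, the GPU ignores the slider

# Example stock V/F points as (MHz, mV) pairs
stock_curve = [(2000, 1000), (2300, 1080), (2500, 1140), (2600, 1175)]

def apply_slider(curve, slider_mv):
    """Shift every point down by how far the slider sits below its maximum."""
    offset = STOCK_MAX_MV - slider_mv
    return [(mhz, mv - offset) for mhz, mv in curve if mhz <= CUTOFF_MHZ]

# Setting the slider to 1125mV (50mV under max) drops the whole curve by 50mV:
for mhz, mv in apply_slider(stock_curve, 1125):
    print(f"{mhz} MHz -> {mv} mV")
```

The point: you are not setting a voltage, you are shifting the whole curve, which is why one slider value affects every clock state below the cutoff.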


----------



## WR-HW95

deadfelllow said:


> This ones is for backplate. I need retention bracket screws :/( image below.


So screw for that is this?








That is a metric M2x5mm screw if that info is right. If the thread is something "non-standard" it should be written after the M2.0 part.
*After looking at pics on Google, it seems they sell any kind of M2 screw as plain M2.*


----------



## alceryes

CS9K said:


> For context: the "Voltage" slider does not directly control the voltage the card receives, but instead, is an _offset_ to the _entire voltage/MHz curve_. Below 2600MHz, the amount you set the Voltage slider below its maximum, it will offset the entire curve of the GPU by that many mV.



Yup.
For example, I've got mine set to 1.125V. This causes my card to max out at 1.14V instead of the standard 1.175V it was doing before I touched the slider.


----------



## call303

crastopher said:


> Is there any good information on undervolting these cards? I just got a RX-69XTACSD9 (Merc 319) and wanting a safe undervolt for my SFFPC so I can get lower temps.


With the 6900XT(XH), we own the most efficient rasterization GPU on today's market - at least if you don't want to break Time Spy records.
It sure is always fun to see how far such cards can be pushed for benchmarking purposes when unleashing power limits way beyond 400W, but driving a 3840x1600 75Hz panel, I'm totally fine with 75% of this card's actual performance when I can obtain that with 36% of the power draw, which is totally possible.

I made an MPT-optimized setting for my Red Devil XTXH, which focuses on saving power in the VRAM and SOC domains to make it available for the core frequency.

Here are my settings for 110/130/150/170W including TS results, with 110W being by far the most efficient in terms of score per watt:









I think a 15,803 TS score is pretty decent for 110W - it's right in between the 3070 Ti/6800 non-XT and 3080 ballpark @stock.
Increasing the PL from there yields diminishing returns, so I stopped testing at 170W.

You will also get lower idle and desktop power consumption, with media playback hovering at around 23-25W for 4K/60Hz content.
Result links, for those interested:

110W I scored 16 078 in Time Spy
130W I scored 17 424 in Time Spy
150W I scored 18 275 in Time Spy
170W I scored 18 796 in Time Spy
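A quick sanity check of the score-per-watt claim, using the Time Spy totals listed above:

```python
# Points-per-watt from the Time Spy results listed above.
results = {110: 16078, 130: 17424, 150: 18275, 170: 18796}

for watts, score in sorted(results.items()):
    print(f"{watts}W: {score / watts:.1f} points/W")
# 110W gives ~146 points/W, and every step up in power limit is less efficient.
```

Each +20W step buys fewer points than the last, which is exactly the diminishing-returns pattern described above.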

Positive side effect: your fans will hardly need to spin. I meanwhile have no idea why I even slapped a waterblock on this card, with the GPU hotspot maxing out at 44°C with water at 27°C after a couple of hours.

Coming from a water-cooled Vega 64, I was amazed to see this kind of progress in efficiency, as I get twice the performance at less than half the power consumption, lol.


----------



## rodac

call303 said:


> With the 6900XT(XH), we own the most efficient rasterization GPU available on nowadays GPU market - at least if you don't want to break Time Spy records.
> It sure is always fun the see how far such cards can be pushed for benchmarking purposes when unleashing power limits way beyond 400W, but utilizing a 3840x1600 75hz panel, i'm totally fine with 75% of this cards actual performance when i can obtain that with 36% of the power draw, which is totally possible.
> 
> i made an MPT optimized setting for my Red Devil XTXH, which focuses on saving power on VRAM and SOC domains, to make it available for the core frequency.
> 
> here are my settings for 110/130/150/170W including TS results, with 110W being by far the most efficient refering to the score per watt value:
> 
> 
> I think 15.803 TS Score is pretty decent for 110W - it's right in between the 3070Ti/6800non-XT and 3080 ballpark @stock.
> Increasing the PL from there on will yield diminishing returns, so i stopped testing at 170W.
> 
> You will also get lower idle and desktop power consumption with media playback hovering at around 23-25W with 4k/60hz content.
> result links, for those interested:
> 
> 110W I scored 16 078 in Time Spy
> 130W I scored 17 424 in Time Spy
> 150W I scored 18 275 in Time Spy
> 170W I scored 18 796 in Time Spy
> 
> Positive side effect: your fans will hardly need to spin.. i meanwhile have no idea why i even slapped a waterblock on this card with the gpu hotspot maxing out at 44°C with water at 27°C after a couple of hours..
> 
> coming from a water cooled Vega64, i was amazed to see this kind of progress in efficiency, as i get twice the performance at less than half the power consumption, lol.


That is quite cool to see the opposite approach, underclocking. There is no denying that with the increasing cost of energy, at around 2.5x more, this may become a more and more popular trend. It certainly shows in the reviews of the 3090 Ti, unanimously criticised by reviewers for its excessive energy consumption.


----------



## J7SC

rodac said:


> That is quite cool to see the opposite approach, underclocking , there is no denying that with the increasing cost of energy, at around 2.5 more, this may become a more and more popular trend, this certainly shows as well in the reviews about the 3090ti, unanimously criticised by reviewers for its excessive energy consumption.


...interestingly enough, Igor's Lab just did a review of a 3090 Ti on a reduced-watts diet, including per-watt comparisons with Radeons. But overall, yeah, some top-end 3090 Ti cards (and presumably select RTX 40 models) can easily slurp well _past _600 W...


----------



## alceryes

call303 said:


> Positive side effect: your fans will hardly need to spin..


I'm not a fan of zero-RPM fan modes after seeing my hotspot temps and figuring out that, in certain instances, it takes several seconds for the fans to spin up and start to affect the thermal characteristics of the cooler/heatsink.

Think about it this way: let's say you're doing some heavy browsing with video playing. Your heatsink is getting saturated with heat, but it's dissipating it quickly enough to keep the fans off. You then fire up a very GPU-intensive game/program/benchmark. Your core TDP goes up by 200W - instantly. The fans immediately kick on, but look at what they have to deal with: they have to move the heat off an already heat-saturated heatsink which is in turn being heated by a now extremely hot core.

I've done some very basic testing and have seen a hotspot/peak difference of almost 15°C between the default zero-RPM fan setting and a lower-but-always-on fan setting. This is with my lower-but-always-on setting at a peak fan speed 20% lower than the default zero-RPM setting. Much less jolting fan spin-up noise and a much cooler GPU core - what's not to love?

Yes, with the fans running all the time (albeit at a generally lower speed) there could be more wear and tear on them, but for a fan designed to last the life of the card, I don't consider that an issue.


----------



## alceryes

I just upgraded from my golden-oldie 21.8.2 to the new 22.3.1.
The 21.8.2 driver has been the best performer I've found. Luckily, the 22.3.1 doesn't take away any performance.


----------



## J7SC

alceryes said:


> I just upgraded from my golden-oldie 21.8.2 to the new 22.3.1.
> The 21.8.2 driver has been the best performer I've found. Luckily, the 22.3.1 doesn't take away any performance.


I had reverted to 21.7.1 w/o issues and great performance, but might give 22.3.1 a try.


----------



## PanZwu

call303 said:


> With the 6900XT(XH), we own the most efficient rasterization GPU available on nowadays GPU market - at least if you don't want to break Time Spy records.
> It sure is always fun the see how far such cards can be pushed for benchmarking purposes when unleashing power limits way beyond 400W, but utilizing a 3840x1600 75hz panel, i'm totally fine with 75% of this cards actual performance when i can obtain that with 36% of the power draw, which is totally possible.
> 
> i made an MPT optimized setting for my Red Devil XTXH, which focuses on saving power on VRAM and SOC domains, to make it available for the core frequency.
> 
> here are my settings for 110/130/150/170W including TS results, with 110W being by far the most efficient refering to the score per watt value:
> View attachment 2557192
> 
> 
> I think 15.803 TS Score is pretty decent for 110W - it's right in between the 3070Ti/6800non-XT and 3080 ballpark @stock.
> Increasing the PL from there on will yield diminishing returns, so i stopped testing at 170W.
> 
> You will also get lower idle and desktop power consumption with media playback hovering at around 23-25W with 4k/60hz content.
> result links, for those interested:
> 
> 110W I scored 16 078 in Time Spy
> 130W I scored 17 424 in Time Spy
> 150W I scored 18 275 in Time Spy
> 170W I scored 18 796 in Time Spy
> 
> Positive side effect: your fans will hardly need to spin.. i meanwhile have no idea why i even slapped a waterblock on this card with the gpu hotspot maxing out at 44°C with water at 27°C after a couple of hours..
> 
> coming from a water cooled Vega64, i was amazed to see this kind of progress in efficiency, as i get twice the performance at less than half the power consumption, lol.


Actually, 6900 XTs are the best cards to break Time Spy records ;D


----------



## jonRock1992

J7SC said:


> I had reverted back to 21.7.1. w/o issues and great performance, but might give 22.3.1 a try.


I'm still on 22.2.3. All of the 22.3.x drivers break freesync in certain applications for me. Specifically in God Of War and Death Stranding DC. My monitor is an Odyssey G7.


----------



## LtMatt

The Toxic has a new engine under the hood.


----------



## ArchStanton

@LtMatt That is one of the most, if not the most, aesthetically pleasing double-CLC rigs I've seen.


----------



## LtMatt

ArchStanton said:


> @LtMatt That is one of, if not the most, aesthetically pleasing double CLC rigs I've seen.


Appreciate that. It's not easy trying to make those horrible tubes look okay.


----------



## rodac

LtMatt said:


> The Toxic has a new engine under the hood.


Beautiful rig, but how can you use 2 processors on that mobo? It only has one CPU socket.


----------



## LtMatt

rodac said:


> Beautiful rig but how can you use 2 processors on that mb? It only has one cpu socket


Going to sell one. I ordered two just in case one of the retailers oversold and didn't ship. Had this happen to me before on product launches. 

Ended up trying both and will keep the one that likes 4000Mhz/2000Mhz FCLK.


----------



## rodac

LtMatt said:


> Going to sell one. I ordered two just in case one of the retailers oversold and didn't ship. Had this happen to me before on product launches.
> 
> Ended up trying both and will keep the one that likes 4000Mhz/2000Mhz FCLK.


Now it makes sense. Looking forward to hearing about this; it sure is as new as it gets.


----------



## jonRock1992

LtMatt said:


> Going to sell one. I ordered two just in case one of the retailers oversold and didn't ship. Had this happen to me before on product launches.
> 
> Ended up trying both and will keep the one that likes 4000Mhz/2000Mhz FCLK.


You should do a Timespy bench to see how that extra cache affects scores.


----------



## LtMatt

jonRock1992 said:


> You should do a Timespy bench to see how that extra cache affects scores.


Will do, I don’t think it makes any difference outside of games for the most part though.


----------



## J7SC

LtMatt said:


> Going to sell one. I ordered two just in case one of the retailers oversold and didn't ship. Had this happen to me before on product launches.
> 
> Ended up trying both and will keep the one that likes 4000Mhz/2000Mhz FCLK.


I think you should keep both and run a nice rumbling V-Twin under the hood


----------



## wermad

Just got mine yesterday. Waiting on a 360 aio for the cpu to do a final mount. Might downgrade the case to a 5000D air for a more compact setup


----------



## LtMatt

wermad said:


> Just got mine yesterday. Waiting on a 360 aio for the cpu to do a final mount. Might downgrade the case to a 5000D air for a more compact setup
> View attachment 2557458
> 
> View attachment 2557459


Welcome, always nice to see another Toxic user. Which version did you get?

In other news the 200-250 FPS limit currently seen in Warzone with high end processors is now a thing of the past with the 5800X3D. 
Rebirth Island Warzone | 5800X3D + 6900 XTXH Toxic EE | 1080P Competitive Settings & 300+ FPS! - YouTube

Expecting similar FPS at 1440P too as the 6900 XTXH is not stretching its legs here.


----------



## deadfelllow

LtMatt said:


> Welcome, always nice to see another Toxic user. Which version did you get?
> 
> In other news the 200-250 FPS limit currently seen in Warzone with high end processors is now a thing of the past with the 5800X3D.
> Rebirth Island Warzone | 5800X3D + 6900 XTXH Toxic EE | 1080P Competitive Settings & 300+ FPS! - YouTube
> 
> Expecting similar FPS at 1440P too as the 6900 XTXH is not stretching its legs here.


Matt, could you do a Matrix Awakens demo benchmark please?

Because it seems the game has a massive CPU bottleneck. I just wonder how it goes with the 5800X3D.


----------



## LtMatt

deadfelllow said:


> Matt could you do Matrix awakens demo benchmark please?
> 
> Because it seems the game has massive cpu bottleneck. I just wonder how it goes with 5800x3d.


Just replied to your PM, can do.


----------



## J7SC

LtMatt said:


> Welcome, always nice to see another Toxic user. Which version did you get?
> 
> In other news the 200-250 FPS limit currently seen in Warzone with high end processors is now a thing of the past with the 5800X3D.
> Rebirth Island Warzone | 5800X3D + 6900 XTXH Toxic EE | 1080P Competitive Settings & 300+ FPS! - YouTube
> 
> Expecting similar FPS at 1440P too as the 6900 XTXH is not stretching its legs here.


...another idea for your 2x 5800X3D chip dilemma is to solder one of the chips as the 2nd chiplet onto the other > voila, 5950X V3 !!  
...fyi, the related Epyc Milan are available with up to 64C/128T V3 cache (mind you at slower speeds and with much bigger Threadripper-style heat spreaders).


----------



## LtMatt

Any MSFS fans out there might find this interesting: Ryzen 7 5800X3D shows its true worth in MSFS 2020! Easily beats the 12900K? - YouTube


----------



## MSIMAX

LtMatt said:


> Any MSFS fans out there might find this interesting: Ryzen 7 5800X3D shows its true worth in MSFS 2020! Easily beats the 12900K? - YouTube


any chance you could dump your gpu bios?


----------



## LtMatt

MSIMAX said:


> any chance you could dump your gpu bios?


For the Toxic EE? Sure will do when I get home.


----------



## CS9K

LtMatt said:


> Any MSFS fans out there might find this interesting: Ryzen 7 5800X3D shows its true worth in MSFS 2020! Easily beats the 12900K? - YouTube


I haven't seen this particular video myself, yet, but ugh, if DX12 optimizations weren't coming this summer, I would already have a 5800x3d without question.


----------



## wermad

LtMatt said:


> Welcome, always nice to see another Toxic user. Which version did you get?
> 
> In other news the 200-250 FPS limit currently seen in Warzone with high end processors is now a thing of the past with the 5800X3D.
> Rebirth Island Warzone | 5800X3D + 6900 XTXH Toxic EE | 1080P Competitive Settings & 300+ FPS! - YouTube
> 
> Expecting similar FPS at 1440P too as the 6900 XTXH is not stretching its legs here.


Hi thank you! 

Not sure tbh, I only see one version of the 6900 XT Toxic, unless I'm wrong???


----------



## LtMatt

wermad said:


> Hi thank you!
> 
> Not sure tbh, I only see one version of the 6900 XT Toxic, unless I'm wrong???
> 
> View attachment 2557900


There's a few variations. The highest bin is the Extreme, then there are like four other variants below that.









Click the Limited Edition and check the various SKU numbers to see what they match on your box. The Limited Edition versions all have lower boost clocks than the Extreme version.

It is not clear to me if any of them are the XTXH version, unlike the Extreme. The coolers/fans are identical over the different versions.
TOXIC Explore (sapphiretech.com)


----------



## LtMatt

MSIMAX said:


> any chance you could dump your gpu bios?


Sorry for the delay forgot about this. 
https://file.io/BgPcViPAxqat


----------



## rodac

wermad said:


> Just got mine yesterday. Waiting on a 360 aio for the cpu to do a final mount. Might downgrade the case to a 5000D air for a more compact setup


Whichever one you have got, you will not regret it - the fastest GPU except for ray tracing. I will post again soon with more insight.


----------



## Stiff Pwner

xR00Tx said:


> I have an XTX 6900 XT Sapphire Nitro+ and initially the hotspot temperature was also going over 100c.
> Right away, I took it apart and replaced the original thermal paste with Kryonaut. Hotspot temperature dropped about 30 degrees celsius and the delta between die and hostspot temperature was 20 - 25c maximum.
> 
> Now I use a waterblock and liquid metal and the delta between the temperatures is about 10c @ 360w.
> 
> I recommend that you replace your board's thermal paste!
> 
> I don't remember what was the maximum core clock on Time Spy before replacing the thermal compound, but today I can get 2860mhz (2870 on colder days) with 21.7.2 driver.


Did you replace just the paste, or did you replace the thermal pads as well? If also the thermal pads, which thickness did you use?


----------



## MSIMAX

LtMatt said:


> Sorry for the delay forgot about this.
> https://file.io/BgPcViPAxqat


The link is being flagged unsafe  I want to try MorePowerTool with my 6900 XT Waterforce.


----------



## xR00Tx

Stiff Pwner said:


> Did you replace just the paste, or did you replace the thermal pads as well? If also the thermal pads, which thickness did you use?


At that time I replaced only the thermal paste and kept the original thermal pads. Don't know its thickness, sorry.


----------



## LtMatt

MSIMAX said:


> the link is being flagged unsafe  i want to try with more power tool with my 6900xt waterforce


False positive.


----------



## Hellboy13

Just ordered a Sapphire 6900 XT LC OEM version with the 120mm AIO -









Quasar Zone: Radeon RX 6900 XT LC 20-game benchmark







quasarzone.com





This is the cheapest 6900 XT in India (cost about USD 1200). Does anyone here own it? Any tips? Also, is this a reference PCB design? I plan to put an EK waterblock on it and add it to my 2x 240 rad CPU+GPU loop.


----------



## MSIMAX

LtMatt said:


> False positive.


Yeah, but it won't let me download. Tried on Chrome, Edge and Firefox; it brings me to the same Google page.


----------



## MSIMAX

LtMatt said:


> False positive.


Maybe upload it to TechPowerUp instead?


----------



## Godhand007

Has anyone here done a successful LC BIOS flash on a Toxic EE card? I am going to attempt it soon. Any tips, dos/don'ts etc. would be highly appreciated.


----------



## deadfelllow

Godhand007 said:


> Has anyone here done successful LC bios flash on Toxic EE card? I am going to attempt it soon. Any tips, dos/don'ts etc. would be highly appreciated.


Yes, the LC BIOS works well with the Toxic EE. The easiest way to flash the LC BIOS onto your GPU is HiveOS.

Many others and I have flashed this vBIOS onto our Toxic EEs and nothing bad happened.


----------



## Godhand007

deadfelllow said:


> Yes, LC bios works well with Toxic EE. Easiest way to flash LC bios into your gpu is HiveOS.
> 
> Me and many others flashed Vbios into their Toxic EE and nothing bad happened.


I was going to use this method. Can you explain the HiveOS method?


----------



## jonRock1992

Godhand007 said:


> I was going to use this method. Can you explain the HiveOS method?


You can use any Linux live OS to flash the vBIOS. I personally use the Budgie Ubuntu fork to flash it.


----------



## Godhand007

jonRock1992 said:


> You can use any Linux live os to flash the vBIOS. I personally use the Budgie Ubuntu fork to flash it.


Any specific reason for using Budgie Ubuntu, HiveOS, or the OS version mentioned here?

Also, I saw a few posts which mention that not having a USB-C port on your GPU causes some issues. I think it was one of your posts. Then there was an issue with the M/B Q-code reading 55. How did you resolve both of these issues?

Thanks for the replies in advance.


----------



## jonRock1992

Godhand007 said:


> Any specific reason for using Budgie Ubuntu or HiveOS or the OS version mentioned here ?
> 
> Also, I saw few posts which mentions that not having USB-c port in your GPU causes some issues. I think it was one of your posts. Then there was an issue with M/B Q-Code reading 55. How did you resolve both of these issues?
> 
> Thanks for the replies in advance.


The issue is unresolved. It's just an ASUS BIOS issue and can't be fixed on our end. I just like the Budgie desktop environment; that's why I use that distro.


----------



## Godhand007

jonRock1992 said:


> The issue is unresolved. It's just an ASUS bios issue. Can't be fixed on our end. I just like the Budgie desktop environment. That's why I use that distro.


So it's a specific mobo/manufacturer issue. I am on an MSI PRO Z690-A. Hopefully, my mobo doesn't suffer from those issues.


----------



## jonRock1992

Godhand007 said:


> So it's a specific mobo/manufacturer issue. I am on an MSI PRO Z690-A. Hopefully, my mobo doesn't suffer from those issues.


As far as I know it's just certain ASUS motherboards.


----------



## EyeCU247

Hellboy13 said:


> Just ordered an sapphire 6900xt lc OEM version with the 120mm aio -
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Quasar Zone: Radeon RX 6900 XT LC 20-game benchmark
> 
> 
> 
> 
> 
> 
> 
> quasarzone.com
> 
> 
> 
> 
> 
> This is the cheapest 6900xt in India (costed about USD 1200). Does any one here own it? Any tips? Also is this a reference pcb design? I plan to put an ek waterblock on it an add it to my 2x 240 rad cpu gpu loop.


I have this card, but do not know what type of PCB it is. Very curious to see how the teardown goes (pictures?) and to get confirmation of the EK waterblock you use and its fit.


----------



## J7SC

...not sure if this has been mentioned here yet, but I find drivers 22.4.1 and 22.3.1 have YouTube 4K problems in Win 10 Pro and Win 11 Pro (updated tonight). 22.2.1 is fine on my setup in both OSes.


----------



## Godhand007

jonRock1992 said:


> As far as I know it's just certain ASUS motherboards.


So I have been trying this command and I just get a screen hang. What am I doing wrong?

sudo ./amdvbflash -p 0 236400.rom


----------



## Godhand007

Godhand007 said:


> So I have been trying this command and I just get a screen hang. What am I doing wrong?
> 
> sudo ./amdvbflash -p 0 236400.rom


Tried another bios and got an SSID mismatch.


----------



## Godhand007

Godhand007 said:


> Tried another bios and got an SSID mismatch.


I think my flash was successful using this {_sudo ./amdvbflash -p -fs -fp 0 LC_XTXH.rom_} command. Completed a full TS run without any issues. Does this mean everything went as expected?


----------



## jonRock1992

Godhand007 said:


> I think my flash was successful using this {_sudo ./amdvbflash -p -fs -fp 0 LC_XTXH.rom_} command. Completed a full TS run without any issues. Does this mean everything went as expected ?
> 
> View attachment 2558411


Looks like it flashed correctly


----------



## J7SC

Godhand007 said:


> I think my flash was successful using this {_sudo ./amdvbflash -p -fs -fp 0 LC_XTXH.rom_} command. Completed a full TS run without any issues. Does this mean everything went as expected ?
> 
> View attachment 2558411


Nice! I forget - is your card an XTXH or an XTX?? Didn't think the latter could be flashed with the LC BIOS.


----------



## Alexxxx#€

Could you add the steps and the BIOS that you installed? It would be of great help, thank you!!! I have an XFX 6900 XT.


----------



## wermad

Decided to reuse my P8 and the layout was perfect. Waiting on a cpu 360 aio to finish this one up. Card has been stellar so far 👌 (crappy pic btw)


----------



## Godhand007

jonRock1992 said:


> Looks like it flashed correctly


I guess so. Two peculiar things though: I had to *switch the DisplayPort connector on the card after the flash to get display output* (I thought I had bricked one of the BIOSes). The other is the extra 2MHz on memory - what's up with that?


----------



## Enzarch

Hellboy13 said:


> Just ordered an sapphire 6900xt lc OEM version with the 120mm aio -
> 
> 
> 
> 
> 
> 
> 
> 
> 퀘이사존 라데온 RX 6900 XT LC 20종 게임 벤치마크
> 
> 
> 
> 
> 
> 
> 
> quasarzone.com
> 
> 
> 
> 
> This is the cheapest 6900xt in India (costed about USD 1200). Does any one here own it? Any tips? Also is this a reference pcb design? I plan to put an ek waterblock on it an add it to my 2x 240 rad cpu gpu loop.





EyeCU247 said:


> I have this card, but do not know what type of PCB it is. Very curious to know how the tear down goes (pictures?) and confirmation on the EK waterblock you use and its fit.


I have one of these with an EK block on it, yes it is a reference PCB layout


----------



## Godhand007

J7SC said:


> Nice ! I forgot if your card is an XYXH or XTX ?? Didn't think the latter could be flashed with LC


I understand your confusion. I had an XTX (reference 6900 XT) earlier. I purchased a 6900 XT Toxic EE later on when I got a good deal on it.


----------



## Godhand007

Alexxxx#€ said:


> Could you add the steps and the bios that you installed would be of great help thank you!!! i have an xfx 6900xt


I followed this *BIOS FLASH* method mentioned by @weleh. I had to change the flash command from _$ sudo amdvbflash -p 0 biosnamehere.rom_ to _sudo ./amdvbflash -p -fs -fp 0 biosnamehere.rom_.
The other change was downloading _amdvbflash_linux_4.71_ instead of the _3.20 version_.
You can get the LC OEM BIOS from _here._

*Important*: It seems that some ASUS mobos have issues with BIOS flashing on these cards. Also, make sure you follow the usual rules/dos/don'ts before going for a BIOS flash.
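For reference, the procedure condenses to roughly the following commands from a Linux live session. This is a sketch only: the adapter index `0` and the ROM filenames are examples, and the `-i`/`-s` steps (list adapters, back up the current vBIOS) are standard amdvbflash usage rather than something from the guide above.

```shell
chmod +x amdvbflash                  # make the flasher executable
sudo ./amdvbflash -i                 # list adapters; note your card's index
sudo ./amdvbflash -s 0 backup.rom    # back up the current vBIOS first
sudo ./amdvbflash -p -fs -fp 0 LC_XTXH.rom   # force-flash past SSID/P-N mismatches
```

Keep the backup somewhere off the machine; it is your only way back if the LC BIOS misbehaves on your card.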


----------



## jonRock1992

Godhand007 said:


> I followed this *BIOS FLASH *method mentioned by @weleh. I had to change the flash command from _$ sudo amdvbflash -p 0 biosnamehere.rom_ to_ sudo ./amdvbflash -p -fs -fp 0 biosnamehere.rom. _
> Another thing that was changed was downloading _amdvbflash_linux_4.71_ instead of _3.20 version._
> You can get the LC OEM bios from_ here._
> 
> *Important*: It seems that there are some Asus mobos which have issues with bios flashing on these cards. Also, make sure you follow usual rules/dos/don'ts before going for a bios flash.


Yeah, you just need to add the force flags to the command to force-flash the BIOS. That's why it didn't work the first time.


----------



## Godhand007

jonRock1992 said:


> Yeah you just need to add a -f in the command to force flash the bios. That's why it didn't work the first time.


I wasn't sure that forcing it would be a good idea, especially since it was not mentioned in the guide. I just went on adding the -f* flags based on the various mismatch errors that I got.


----------



## Godhand007

Now I want to see what kind of perf benefit I can get from this mem OC along with ~2850MHz clocks. Unfortunately, my PSU went bad and I am on a 750W Corsair for the time being, so I can't push the card too much.


----------



## deadfelllow

Godhand007 said:


> Now I want to see what kind of perf benefit I can get from this mem oc along with ~2850 Mhz clocks. Unfortunately, my PSU went bad and I am on a 750W Corsair for the time being so can't push the card too much.


It's OK for 750W. I have a 750W Corsair as well. My system shuts down when it reaches 610W GPU power xD. So it's okay to push 500-550W GPU-only.


----------



## jonRock1992

deadfelllow said:


> Its ok for 750W. I have 750W Corsair as well. My system shutdowns when it reaches 610W GPU power xD. So its okay to push 500-550Watts gpu only


My system would also randomly shut down when I was using a 750W PSU and heavily overclocking my 6900 XTXH. I'm using a 1000W PSU now with no issues.


----------



## wermad

Good to know. I was thinking of retiring my G1300 for an RGB 750W, but I'll keep the EVGA since I want to get a 5950X.


----------



## J7SC

wermad said:


> Decided to reuse my P8 and the layout was perfect. Waiting on a cpu 360 aio to finish this one up. Card has been stellar so far 👌 (crappy pic btw)
> View attachment 2558725


...lots and lots of room in a Core P8 


Spoiler














 


jonRock1992 said:


> My system would also randomly shut down when I was using a 750W PSU and greatly overclocking my 6900 XTX-H. I'm using a 1000W PSU now with no issues.


Personally, I wouldn't run anything less than a good 1000W unit. GPUs (and CPUs) that are heavily dependent on boost algorithms - like most new generations these days - have a much higher chance of micro-spikes, especially with the power limit maxed out. I use a 1300W Platinum PSU for each of the systems in the spoiler above.


----------



## rodac

rodac said:


> Whichever one you have got, you will not regret it, the fastest GPU except for Ray Tracing. I will post again soon to provide more insight.


Maybe that is obvious to you guys, but after I purchased one of the best overclocked 3090 Tis on the market - I snatched the last one, an ASUS 3090 Ti OC Liquid Cooled - I expected up to 20% improvement.
I had of course read the TweakTown review of it, but thought that maybe it would turn out better for me and refused to fully believe what I had read.





ASUS ROG Strix LC GeForce RTX 3090 Ti: Overclocking (OC)


It's time to hurt the ASUS ROG Strix LC GeForce RTX 3090 Ti OC Edition, with as much manual OC that I can throw at it. Let's go!




www.tweaktown.com




Damn, he was right: there is hardly any overclocking headroom with the ASUS tuner software, and just a couple of hours later, after running a few 3DMark tests, this GPU turned out slower than the Sapphire 6900.
The best score I could get with all sliders full right was 11,500 graphics points in Time Spy Extreme, and I could get around 11,200 at 390W using MorePowerTool with the Sapphire.

The 1440p scores are lower, and 4K lands at best 400 points above the Sapphire. The radiator is indeed too small, and the air coming out is hotter than with the Toxic.
The only area where that GPU wins is ray tracing, but 2000 pounds for ray tracing alone is not a sensible option, given that I cannot see any improvement in ray tracing in Cyberpunk.

So I reluctantly sent it back. It looks good and so promising, which is sad in a way, but great in that the Sapphire is really one of the fastest GPUs you can get for gaming.
Maybe if I had been able to utilize the larger RAM with some rendering software I would have had a different opinion, but that is another story.
In the best case, I could get a 5% average improvement at 4K, and that is really trying hard to score in favour of the Nvidia - nothing like the expected 20% shown in some other benchmarks.

There you go: the Sapphire liquid-cooled Extreme is simply the best GPU for gaming up to now.

So I will have to wait like everyone else  even if I wanted to treat myself for my birthday, until... well, we do not really know, until something better comes up. Will that be 40, 50, 60% faster? Nobody knows, but it looks very promising given the GPU competition.


----------



## EyeCU247

Enzarch said:


> I have one of these with an EK block on it, yes it is a reference PCB layout


Thanks!


----------



## CS9K

EyeCU247 said:


> Thanks!


Likewise, I somehow lucked out and have the "PowerColor"-sold reference RX 6900 XT LC, and I too put it into an EK block for reference RX 6900 XT's, and it's rockin' all the time <3


----------



## Godhand007

deadfelllow said:


> Its ok for 750W. I have 750W Corsair as well. My system shutdowns when it reaches 610W GPU power xD. So its okay to push 500-550Watts gpu only


I have a UPS with power monitoring. System load goes up to 750 W+ during heavy testing with ~460 W on the GPU, and then the PC shuts down if I keep increasing the load. I have fast SSDs and other system components the PSU has to deal with.
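One wrinkle worth noting: a UPS reads AC draw at the wall, while a PSU's rating is DC output, so the two numbers aren't directly comparable. A rough sketch, assuming ~90% efficiency (my assumption for a Gold-class unit, not the poster's measured figure):

```python
# A UPS reports AC wall draw; the PSU's 750 W rating is DC output.
# Assuming ~90% conversion efficiency (an assumption, not measured here),
# the actual DC load is lower than the wall reading - but can still be
# uncomfortably close to the PSU's rating.

def dc_load_watts(wall_watts, efficiency=0.90):
    """Approximate DC load delivered by the PSU for a given wall reading."""
    return wall_watts * efficiency

# an 800 W wall reading is ~720 W DC against a 750 W rating
headroom = 750 - dc_load_watts(800)
print(f"headroom: {headroom:.0f} W")
```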


----------



## Godhand007

wermad said:


> Good to know. I was thinking of retiring my G1300 for an rgb 750w. I'll keep the evga since I want to get a 5950x


Don't do it, man. In all seriousness, given the power spikes many of these cards suffer from and next-gen GPUs going up to 600 W plus, you need to have a decent PSU.


----------



## CS9K

Godhand007 said:


> Don't do it man. But it in all seriousness, given the power spikes many of these cards suffer from and next gen GPUs going up-to 600w plus, you need to have a decent PSU.


This. A lot of people, especially those used to only building PCs in the 20-teens, are going to find out real fast about properly powerful PSUs, if they haven't already.


----------



## J7SC

CS9K said:


> This. A lot of people, especially those used to only building PC's in the 20-teens, are going to find out real fast about properly powerful PSU's, if they haven't already.


...it is getting crazier  re. PL...
I'm not sure what scared Nvidia re. the upcoming AMD RDNA3, but apparently 3090 Ti Ampere chips and PCBs are pin-compatible with RTX 4000 / Ada Lovelace, and there are already at least two 3090 Tis that come with not one but _two_ of the new up-to-600W connectors - an ideal testing ground for next gen. Personally, I hope AMD counters with RDNA3 mGPU 'tiles' (I still have an 8990 from yesteryear).

Speaking of the 3090 Ti, I respectfully disagree with @rodac 's view re. Strix 3090 Ti vs. 6900XT LC. As per above, I happen to run a heavily custom-cooled Strix 3090 at 450W / 520W and a 6900 XT at 330W / 450W, quite literally in the same case...I've posted some results before of the 6900XT re. clocks, at times above 2800 MHz _effective _for benchies and mid-2700s for gaming. Both are superb cards, and the AMD is a.) the more cost-effective purchase in dollars-per-fps and b.) more efficient in watts-per-fps.

Still, a well running and well-cooled 3090 (never mind 3090 Ti) beats the 6900XT at _4K_ (where I game) and also with ray-traced titles, though not by a heck of a lot. Also, using the Asus Tweak software instead of MSI AB may be part of the problem, and additionally, I think that in spite of a superb PCB and components, the Asus 3090 Ti LC seems gimped on the cooling - 240 mm AIO for 450W + ?

At the end of the day, this current 'refresh' competition between Red and Green is becoming a side-show, IMO --- RDNA3 and RTX 4000 / Ada Lovelace should be the new main event later in the year...better stock up on big new PSUs before everyone else tries to do the same


----------



## wermad

Brings back memories of quad:
Gtx 480 (scary power)
Gtx 580
6970s
7970s lightning
290x
295x2 (scary power)

Dual:
Vega 64s Nitro
2080TI Gaming X


----------



## zwer54

Hi all,

Recently I was playing a bit with OC, and a Time Spy graphics score of 24326 is the max I've managed so far. I am using a 6900 XT Toxic Extreme LC with a custom water block on it, as the stock pump was a bit noisy.

Now, my question is: how can I get past 2800 MHz on it? For example, if I set it to 2830 MHz and run Time Spy, it crashes the moment the test is about to begin. In MorePowerTool I was only raising the max power.

Also, whatever voltage I set with Afterburner or via the Radeon software - for example 1.175 V or less - after running Time Spy, GPU-Z shows the max voltage was 1.2 V. I've tried setting max voltage to 1175 in MPT and in Afterburner, and again GPU-Z shows it was 1200 during the whole test...

In MPT under the Frequency tab, GFX max is set to 2719 by default, and if I change this to, for example, 2730, it crashes at the beginning of Time Spy...

Any MPT advice about voltages and max MHz would be appreciated.

Thanks


----------



## Godhand007

zwer54 said:


> Hi all,
> 
> Recently I was playing a bit with OC and TimeSpy 24326 graphic score is max I was able to get so far. I am using 6900XT Toxic Extreme LC with a custom water block on it as stock pump was a bit noisy.
> 
> Now, my question is how can I get past 2800MHz on it? For an example if I set it to 2830 MHz and run TimeSpy it will crash the moment test was about to begin. In More Power Tool I was only bringing up max power.
> 
> Also, whatever voltage I set with afterburner or via radeon software, for an example 1.175V or less, after running TimeSpy in GPUZ I can see max voltage was 1.2V. I've tried to set max voltage to 1175 in MPT and in afterburner and again, GPUZ shows it was 1200 during the whole test...
> 
> In MPT under Frequency tab GFX max by default is set to 2719 and if I change this to for an example 2730, it will crash at the beginning of TimeSpy...
> 
> Any MPT advices about the voltages and max MHz I would appreciate.
> 
> Thanks
> 
> View attachment 2558847


You need more voltage to stabilize the clocks and an LC OEM bios flash for extra mem speed.


----------



## zwer54

Godhand007 said:


> You need more voltage to stabilize the clocks and an LC OEM bios flash for extra mem speed.


but how do I add voltage?


----------



## Godhand007

zwer54 said:


> but how do I add voltage?


Search for *Increase core and SOC voltage* here (use google translate). Proceed at your own risk.


----------



## zwer54

Godhand007 said:


> Search for *Increase core and SOC voltage* here (use google translate). Proceed at your own risk.


The maximum voltages in MPT are fixed in the BIOS and can only be increased by hardware modifications. For the individual card models they are:

6900 XTXH: 1200 mV

Well, this GPU already pushes to 1.2 V without touching MPT...


Godhand007 said:


> Search for *Increase core and SOC voltage* here (use google translate). Proceed at your own risk.


Either I am not smart enough or it is not 100% clear, so I'll pass...


----------



## J7SC

@CS9K @LtMatt ...had a free 30 min or so between work projects and thought I'd play a quick round of FS2020...as is often the case, yet another mega patch to install first...BUT

...post patch install, it seemed even smoother, so I kicked in the MSI AB sensor overlay. The 5950X hit a max effective 5.04 GHz and, more importantly, post-latest-patch it seems to use more cores/threads than before...which would be great news, as it used to just hammer a couple of cores. Have you noticed an improvement in your setup re. more distributed CPU usage?


----------



## Godhand007

zwer54 said:


> The maximum voltages in the MPT are fixed in the bios and can only be increased by hardware modifications. They are for the individual map models:
> 
> 6900 XDXH: 1200 mV
> 
> Well, this gpu already pushes to 1.2V without touching MPT...
> 
> 
> Either I am not smart enough either is not 100% clear so I'll pass...


You just had to go a little bit further. But if you are not sure, better to avoid it until you understand the process properly.

*Credit *: [Guide] - Navi 21 Max Overclocking Tutorial [6800 XT / 6900 XT]


----------



## deadfelllow

zwer54 said:


> The maximum voltages in the MPT are fixed in the bios and can only be increased by hardware modifications. They are for the individual map models:
> 
> 6900 XDXH: 1200 mV
> 
> Well, this gpu already pushes to 1.2V without touching MPT...
> 
> 
> Either I am not smart enough either is not 100% clear so I'll pass...





Godhand007 said:


> You just had to go little bit further. But if you are not sure, better to avoid it until you understand the process properly.
> 
> View attachment 2558879
> 
> 
> *Credit *: [Guide] - Navi 21 Max Overclocking Tutorial [6800 XT / 6900 XT]


Once you load your BIOS into MPT, you need to adjust the settings - image below. You need to tick TEMP_DEPENDENT_VMIN for constant voltage, and I guess 2800+ MHz requires more than 390 W GPU power. Vmin Low / High is the GPU voltage - set both of them to 1250. Give it a 450 power limit, 420 GFX and 60 SOC, and try XD. But what are your temps, btw?
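Collecting those suggested values in one place (these are one poster's numbers for an XTXH card, not validated safe limits - MPT changes are at your own risk):

```python
# Plain-data summary of the MPT settings suggested above for an XTXH card.
# These are the poster's numbers, not validated safe values - apply at your
# own risk; MPT writes them to the SoftPowerPlayTable and they take effect
# after a reboot.
mpt_suggested = {
    "TEMP_DEPENDENT_VMIN": True,   # tick so Vmin is held constant
    "vmin_low_mV": 1250,
    "vmin_high_mV": 1250,
    "power_limit_W": 450,
    "tdc_gfx_A": 420,
    "tdc_soc_A": 60,
}

# quick consistency checks mirroring the advice in the post
assert mpt_suggested["vmin_low_mV"] == mpt_suggested["vmin_high_mV"]
assert mpt_suggested["power_limit_W"] > 390  # 2800+ MHz needs more than 390 W
print("settings consistent")
```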


----------



## rodac

J7SC said:


> ...it is getting crazier  re. PL...
> I'm not sure what scared NVidia re. the upcoming AMD RDNA3, but apparently, 3090 Ti Ampere chips and PCBs are pin-compatible with RTX 4K / Ada Lovelace, and there are already at least two 3090 Ti that come with not one but _two_ of the new up-to-600W connectors - ideal testing ground for next gen. Personally, I hope AMD counters with RDNA3 mGPU 'tiles' (I still have a 8990 form yesteryear).
> 
> Speaking of 3090 Ti, I respectfully disagree with @rodac 's view re. Strix 3090 Ti vs. 6900XT LC. As per above, I happen to run heavily custom-cooled Strix 3090 450W / 520W and 6900 XT 330W / 450W, quite literally in the same case...I've posted some results before of the 6900XT re. clocks, at times above 2800 MHz _effective _for benchies and mid-2700s for gaming. Both are superb cards, and the AMD is a.) more cost-effective purchase at dollar-per fps and b.) more efficient at watt-per-fps.
> 
> Still, a well running and well-cooled 3090 (never mind 3090 Ti) beats the 6900XT at _4K_ (where I game) and also with ray-traced titles, though not by a heck of a lot. Also, using the Asus Tweak software instead of MSI AB may be part of the problem, and additionally, I think that in spite of a superb PCB and components, the Asus 3090 Ti LC seems gimped on the cooling - 240 mm AIO for 450W + ?
> 
> At the end of the day, this current 'refresh' competition between Red and Green is becoming a side-show, IMO --- RDNA3 and RTX4k/Ada Lovelace should be the new main event later in the year....better stock up on big new PSUs before everyone else will try to do the same


Good to hear some feedback (I did not see your comment earlier). It is really interesting to hear that you own both GPUs. Clearly your custom-cooled solutions are in a different league than an AIO, and that may help push those GPUs even further. I unfortunately have no experience of custom or standard water-cooling, and I have not even attempted buying the few GPUs already set up out of the box for custom cooling - I do not even know how to connect up the pipes etc. A very niche market for sure; willing to learn, it really looks like plumbing heaven.

It does not look like we disagree, though: yes, the 3090 Ti is definitely a bit better in the 4K department, but only by a small margin. The Asus Tweak software is exactly what I used; I could nearly max the frequency, but exactly as the reviewer said in the article, once you push the memory too far it is no longer stable. So yes, as everyone in this forum already knows, the Nvidias do not have as much overclocking leeway as the AMDs.

Yes, they are quite close - that is why I gave it all up. I knew that whatever I tried would not justify that kind of cash, and financially I cannot afford dishing out 2K on a GPU for every single iteration, so it had to go back; if money were not an issue, then of course I would have kept it. Yes, that radiator is really undersized, and given the target consumers who are going to buy this (basically us), we do not like it undersized - and they also put that same radiator on the 6900 XT version of the GPU ;-) So impatient to see what is coming next.

How did you manage to run both GPUs on the same motherboard? Isn't one of the GPUs bottlenecked by a PCIe slot that has fewer lanes, or am I getting mixed up between Intel and AMD architectures?


----------



## zwer54

deadfelllow said:


> Once you load your bios to MPT. You need the adjust settings. Image below. You need tick TEMP_DEPENDENT_VMIN for constant voltage. And i guess +2800 mhz requires more than 390 gpu power. Vmin Low / High is gpu voltage. Set it to 1250 both of them. Give 450 Power limit 420 GFX and 60 SOC. And try XD but what is your temps btw?
> 
> View attachment 2558885


Well, thanks for that. I was not bothering with min voltage, and that is actually the one that does the trick. Now that I've set them both to 1250, I can see my PSU draw go almost 100 W higher than before, but the result is around the same, which doesn't make any sense... Even with less overclock, it still pulls a lot more power from the PSU, yet the result remains the same as with much less wattage...
Managed to hit 803 W from the PSU; per GPU-Z the GPU pulled 477 W, with a max temp of 65C for the GPU and 95C for the hotspot.

Edit: Now I set it to 1.3 V and managed to pull 845 W from the PSU, 504 W on the GPU, but the score is lower... I give up.


----------



## deadfelllow

zwer54 said:


> Well, thanks for that. I was not bothering with min voltage and that is actually the one that does the trick. So now once I've set them both to 1250, I can see my psu goes almost 100W higher than before but the result is around the same which doesn't make any sense... Even with less overclock, it still pulls a lot more power from PSU but result remains the same as with much less wattage...
> Managed to hit 803W from psu, as per GPUZ gpu pulled 477W and max temp 65C for the gpu and 95C for hotspot.
> 
> Edit: Now I set it to 1.3V, managed to pull 845W from the psu, 504W from gpu, but the score is lower... I give up.
> View attachment 2558897


XTXH cards throttle at 95C - that's why your score is the same as before.









I scored 20 110 in Time Spy (AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT, 16384 MB, 64-bit Windows 10) - www.3dmark.com





That run was at 1250 voltage, 2760-2860 clocks, and 2150 mem, I guess. GPU power was 450, EDC 420, SOC was 60.


----------



## CS9K

J7SC said:


> @CS9K @LtMatt ...had a free 30 min or so between work projects...thought I play a quick round of FS2020...as is often the case, yet another mega patch to install first...BUT
> 
> ...but post patch install, it seemed even smoother so I kicked in MSI AB sensor overlay. 5950x hit max effective 5.04 GHz and more importantly, post-latest-patch it seems to use more cores/threads than before...which would be great news as it used to just hammer a couple of cores. Have you noticed an improvement in your setup re. more distributed CPU usage ?
> View attachment 2558878


I have not watched task manager to see if more cores were being utilized. In DX11 mode, there _shouldn't_ be more cores utilized as the engine itself only has four threads. I know that the SU9 beta had not seen any DX12 improvements either, but, I will check DX11 soon and update my post here.


----------



## J7SC

CS9K said:


> I have not watched task manager to see if more cores were being utilized. In DX11 mode, there _shouldn't_ be more cores utilized as the engine itself only has four threads. I know that the SU9 beta had not seen any DX12 improvements either, but, I will check DX11 soon and update my post here.


...DX12, 11 - same settings and flight path in spoiler...IMO, used to be 'even worse' re. core utilization but still 'some ways' to go 🥴 


Spoiler


----------



## CS9K

J7SC said:


> ...DX12, 11 - same settings and flight path in spoiler...IMO, used to be 'even worse' re. core utilization but still 'some ways' to go 🥴
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2558925


The results look the same to me if you look at the big picture: both screenshots show four threads getting hammered (your CPU is bouncing four threads between eight threads of four cores using CPPC).

Asobo have not made any tangible DX12 improvements so far, and thus DX12 remains unreliable to use. DX12 mode exists as DX11 wearing a jacket with DX12 written on it, and it shows.

Sim Update 10 should be the first beta/release with DX12-specific optimizations.


----------



## J7SC

CS9K said:


> The results look the same to me, if you look at the big picture. both screenshots show four threads getting hammered (your CPU is bouncing four threads between 8 threads of four cores using CPPC).
> 
> ASOBO have not made any tangible DX12 improvements so far, and thus, DX12 remains unreliable to use. DX12 mode exists as DX11 wearing a jacket with DX12 written on it, and it shows.
> 
> Sim Update 10 should be the first beta/release with DX12-specific optimizations.


I play both DX12 and DX11, but I actually prefer DX11, as I find DX12 a bit dark at anything but midday - e.g. at dawn/dusk - compared to the same settings/time with DX11, though for outright night scenes I prefer DX12. Clouds are also generated differently between the two versions.

But I wholeheartedly agree the underlying engine could use an overhaul (engine swap?)...it looks like Microsoft, as the purveyor of DX12, just wanted the ability to say 'see, now our FS2020 release is DX12'...


----------



## Godhand007

zwer54 said:


> Well, thanks for that. I was not bothering with min voltage and that is actually the one that does the trick. So now once I've set them both to 1250, I can see my psu goes almost 100W higher than before but the result is around the same which doesn't make any sense... Even with less overclock, it still pulls a lot more power from PSU but result remains the same as with much less wattage...
> Managed to hit 803W from psu, as per GPUZ gpu pulled 477W and max temp 65C for the gpu and 95C for hotspot.
> 
> Edit: Now I set it to 1.3V, managed to pull 845W from the psu, 504W from gpu, but the score is lower... I give up.
> View attachment 2558897


It is possible that you are hitting power limits, causing throttling, especially during GT2. Remember, P = VI: if you increase the voltage, you increase the power consumption (all else being equal). I vaguely remember my GPU going way beyond 500 W during GT2. Others can correct me if this assessment is wrong.
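A toy model of that throttling effect - the constant k below is made up to land near the wattages reported above, so treat it as illustration only:

```python
# Toy model of why more voltage can score *lower* under a power cap: dynamic
# power scales roughly with V^2 * f, so once the limit is hit, clocks must
# drop to stay under it. k is a made-up fit constant, not a measured value.

def sustained_clock_mhz(v_volts, target_mhz, power_limit_w, k=1.1e-7):
    """Clock sustainable under a power cap, assuming P = k * V^2 * f_hz."""
    wanted_w = k * v_volts**2 * target_mhz * 1e6
    if wanted_w <= power_limit_w:
        return target_mhz
    return power_limit_w / (k * v_volts**2 * 1e6)  # throttled clock

# at 1.2 V a 2800 MHz target fits under a 500 W cap; at 1.3 V it no longer
# does, so the card ends up slower despite pulling more power
print(sustained_clock_mhz(1.2, 2800, 500))         # 2800
print(round(sustained_clock_mhz(1.3, 2800, 500)))  # 2690
```

Which matches the observation above: more volts, more watts drawn, same or lower score.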


----------



## zwer54

Godhand007 said:


> It is possible that you are hitting power limits causing throttling, specially during GT2. Remember, P=VI. If you increase the voltage you will increase the power consumption (all else being equal). I vaguely remember my GPU going way beyond 500w during GT2. Other's can correct me if this assessment is wrong.


Well, a lot of things would make sense, but one thing I don't understand: the benchmark score when my PSU's total output peaked at 710 W was higher than the score when it hit 845 W (and averaged around 100 W above before). So where did all that power go if performance didn't increase?

Right now I'm lazy, but I have Conductonaut liquid metal ready; I just need to find the will to drain the loop and put it on to see if the hotspot temp will drop...


----------



## jonRock1992

Godhand007 said:


> It is possible that you are hitting power limits causing throttling, specially during GT2. Remember, P=VI. If you increase the voltage you will increase the power consumption (all else being equal). I vaguely remember my GPU going way beyond 500w during GT2. Other's can correct me if this assessment is wrong.


GT2 will definitely pull more than 500 W if the voltage and clocks are high enough. I used a 600 W PL to achieve my Time Spy GPU score of 26153.


----------



## wermad

My pics are terrible. I got everything moved to the TT P8 and I should be getting my Asus LC360 today (or Monday) to get rid of that cooler's pinch on the tubes. 

System: 9900K, 32GB 3600 Corsair, MSI Z390 Godlike, running a TCL 4k 55" tv.

Plans: find a 5950X to use with my X570 Godlike I have collecting dust


----------



## J7SC

rodac said:


> Good to hear some feed back (did not see your comment earlier). (...) ;-) So impatient to see what is coming next.
> *How did you manage to run both GPUs on the same motherboard ? Isn't one of the GPU bottlenecked by a PCI slot that has fewer lanes or else I am getting mixed with Intel versus AMD architecture.*


...no, it's a single TT Core P8 case, but with > dual X570 mobos...3950X / 6900XT and 5950X / 3090 Strix. Everything is water-cooled. This may sound expensive, and it was, but basically, I upgrade every two to three years for a set of four machines ( 2 + 2) as I'm in the software related business - so work + play combos, with respective back-ups.

...my previous primary work-horse before this upgrade cycle last year was an Asus workstation X79 mobo w/ dual GTX 980 Classifieds (4 GB VRAM) and getting on in years...the previous play machine was (is) a TR 2950X w/ 2x 2080 Tis (> here). The former is now mostly a file server as I upgraded everything to big 4K monitors (work + play), and the GTX 980s simply don't cut it at that resolution. The 2950X/ 2x 2080 Ti combo now powers the big TV in our master bedroom; I still use it for select gaming when there's nothing worthwhile to watch on tv .

I lucked out with both the 3090 Strix and 6900XT GigaOC (regular XTX, but w/ 3x8 pin PCB). I wasn't specifically looking for them, just something with a min of 16 GB of VRAM - but they were available (at MSRP) at the place we buy a lot of work related system parts, so I pounced, not least as they were_ the only_ latest gen available at that time.

As to custom water-cooling, I used to do a lot of HWBot, including subzero, so it is second nature for me...besides, I love building complex water-cooling setups. What is more, there's a dual purpose behind it: on the one hand, putting massive cooling on the latest CPUs and GPUs - with aggressive boost algorithms that take temps as a major input - certainly helps with the benchies, as the systems won't heat-soak. On the other hand, when you run up to 1350x63mm multi-D5 pump setups and push-pull fans, the systems are near silent as a result of all that 'overkill'. That is important to me, as they typically run next to me for 11+ hrs a day. As to my point about the AIO rad on the 3090 Ti LC, have a look at the spoiler pic...on the left is a 360 AIO single-core 24mm rad, on the right is a 480 triple-core 63mm rad - when it comes to boost algorithms driven by temps, it should be clear which is preferable...😁


Spoiler: rad comp

Below are a few quick Superposition runs with the custom-cooled 6900 XT and 3090, respectively. Incidentally, those are not my best results, but a good indication of cooling-vs-clocks. The 6900 XT, for example, only has a mild MPT setting, with down-clocking and down-volting. I couldn't run either at those speeds with stock cooling - keeping in mind that the 6900 XT is just a plain-vanilla XTX. When air-cooled, it actually managed a consistent 2700 MHz effective GPU speed, but only with three fans wailing at over 3800 rpm 

Anyway, that is why I pointed out the gimped cooling on the otherwise excellent Asus 3090 Ti LC in the earlier post. But I fully agree with your decision to return it - we're getting quite close to RDNA3 and RTX 4000 / Ada Lovelace, and it might be better to keep your GPU budget open for one of those...
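For a rough sense of the rad comparison in the spoiler, core volume works as a crude first-order proxy (lengths and thicknesses as described above, with a 120 mm width assumed for both; fin density and airflow obviously matter too):

```python
# Back-of-envelope on the spoiler comparison: a 360 mm AIO with a ~24 mm
# single core vs a 480 mm custom rad with a ~63 mm triple core. Core volume
# is only a crude proxy for cooling capacity - fin density, pump flow and
# airflow all matter - but the gap is telling.

def core_volume_cm3(length_mm, thickness_mm, width_mm=120):
    """Approximate radiator core volume; 120 mm fan width assumed for both."""
    return length_mm * width_mm * thickness_mm / 1000.0

aio = core_volume_cm3(360, 24)      # ~1037 cm^3
custom = core_volume_cm3(480, 63)   # ~3629 cm^3
print(f"custom rad has ~{custom / aio:.1f}x the core volume")  # ~3.5x
```

With temperature feeding directly into the boost algorithm, a 3.5x bigger core is clock headroom, not just noise reduction.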


----------



## Godhand007

Guys, quick question: I need a PSU, since my RM1000i went bad and Corsair does not have a replacement for this model. I am thinking about going with the Antec Signature Platinum 1300. Does that sound like a good option? I want to go for the 1600i, but it is not going to be available for at least 2 months where I live. I will miss the software monitoring capabilities of the RM1000i, though.


----------



## J7SC

Godhand007 said:


> Guys, Quick question, I need a PSU since my RM1000i went bad and Corsair does not have a replacement for this model. I am thinking about going with ANTEC signature platinum 1300. Does it sound like a good option? I want to go for 1600i but it is not going to be available for at least 2 months where I live. I will miss the software monitoring capabilities of RM1000i though.


...the Antec 1300 should be a nice PSU; I have three of the earlier 1300 W HPC Platinums (some almost a decade old), noting that you can connect any two of them via their proprietary OC-Link for 2600 W (quad XOC GPUs, anyone? ). At least with the older Antec 1300, the innards are by Delta. I finally took one of the Antecs offline last week, as one rail started to droop a bit much - still within spec, but barely. Then again, it had a hard life for almost ten years. FYI, I now run 2x Seasonic Prime Platinum PX-1300s for the latest dual-mobo build - also very nice PSUs with near-zero droop and a 12-year warranty, currently priced at US$280 or less.

@CS9K ...per your request...
As mentioned, the results and settings below are not the full-meal-deal as this one is at 1.218V, FCLK 2100, MPT power at 450W/400A, plus the 15% slider, and it downclocks and down-volts per MPT feature set and HWI / Radeon software pics...no problem using this for gaming. 

For crazy bench runs (not shown), I go up to 1.231v and no downclock / - volt  >>> briefly


----------



## Godhand007

J7SC said:


> ...the Antec 1300 might be a nice PSU, I have three of the earlier 1300 W HPC Platinum (some up to a decade), noting that you can connect any two of them via their proprietary OC-Link for 2600 W (quad XOC GPUs anyone ). At least with the older Antec 1300, the innards are by Delta. I finally took one of the Antecs off line last week as one rail started to droop a bit much - still within spec, but barely. Then again, it had a hard life for almost ten years. FYI, I now run 2x Seasonic Prime Platinum PX1300s for the latest dual mobo build. Also very nice PSUs with near-zero droop and 12 year warranty, and currently priced at US$ 280 or less.


So I guess I might have to pull the trigger on the Antec 1300. Still not an ideal option, but it looks decent considering price and performance; the 1600i would have cost at least double the Antec.


----------



## wermad

Done for now ($hi7 pic btw)


----------



## rodac

J7SC said:


> ...no, it's a single TT Core P8 case, but with > dual X570 mobos...3950X / 6900XT and 5950X / 3090 Strix. Everything is water-cooled. This may sound expensive, and it was, but basically, I upgrade every two to three years for a set of four machines ( 2 + 2) as I'm in the software related business - so work + play combos, with respective back-ups.
> 
> ...my previous primary work-horse before this upgrade cycle last year was an Asus workstation X79 mobo w/ dual GTX 980 Classifieds (4 GB VRAM) and getting on in years...the previous play machine was (is) a TR 2950X w/ 2x 2080 Tis (> here). The former is now mostly a file server as I upgraded everything to big 4K monitors (work + play), and the GTX 980s simply don't cut it at that resolution. The 2950X/ 2x 2080 Ti combo now powers the big TV in our master bedroom; I still use it for select gaming when there's nothing worthwhile to watch on tv .
> 
> I lucked out with both the 3090 Strix and 6900XT GigaOC (regular XTX, but w/ 3x8 pin PCB). I wasn't specifically looking for them, just something with a min of 16 GB of VRAM - but they were available (at MSRP) at the place we buy a lot of work related system parts, so I pounced, not least as they were_ the only_ latest gen available at that time.
> 
> As to custom water-cooling, I used to do a lot of HWBot, including subzero, so it is second nature for me...besides, I love building complex water-cooling setups. What is more, there's a dual purpose behind it: One the one hand, putting massive cooling on the latest CPUs and GPUs with aggressive boost algorithms which have temps as a major input certainly helps with the benchies as the systems won't heat-soak. On the other hand, when you run up to 1350x63mm multi-D5 pumps and push-pull fans, the systems are near silent as a result of all that 'overkill'. That is important to me as they typically run next to me for 11+ hrs a day. As to my point about the AIO rad re. the 3090 Ti LC, have a look at the spoiler pic...on the left is a 360 AIO single-core 24mm rad, on the right is a 480 triple core 63mm rad - when it gets to boost algorithms via temps, it should be clear which is preferable...😁
> Below are a few quick Superposition runs with both the custom-cooled 6900XT and 3090, respectively. Incidentally, those are not the best results I have, but a good indication of cooling-vs-clocks. The 6900XT for example only has a mild MPT setting, with down-clocking and down-volting. I couldn't run either at those speeds with stock cooling, keeping in mind that the 6900XT is just a plain-vanilla XTX...when air-cooled, it actually managed a consistent 2700 MHz effective GPU speed, but only with three fans wailing at over 3800 rpm
> 
> Anyway, that is why I pointed out the gimped-cooling nature of the otherwise excellent Asus 390 Ti LC in the earlier post. But I fully agree that you returned it - we're getting quite close to RDNA3 and RTX 4000 / Ada L. and it might be better to keep your GPU budget open for one of those...


Thanks @J7SC for your input - very interesting. Actually, I am also in the software field myself; for additional flexibility I installed enterprise software on my own machines, though I cannot run them 24/7 due to the electricity bill, as this hardware is not expensed but my own. I am very keen on a silent solution like yours; those AIOs, including the Sapphire Toxic EE, are rather noisy, and the Asus is noisier, probably because the pump is more powerful to compensate for the smaller radiator.

The AIO on my CPU is also very unpleasant (NZXT Kraken) - it sounds like a race car (suddenly accelerating) as the CPU spikes briefly to 4.9. Really noisy.
I need to look into a powerful custom liquid solution like yours; I do not mind the maintenance.

Yes, I should wait for the next gen, but I have this gut feeling that Nvidia and AMD are going to keep 'milking us', as others would put it.
They may segment their next-generation chips into smaller performance increases and end up releasing many models with around a 30% increase per release, even for their top-end GPU.
Isn't that what they are already doing?

Great idea to have two motherboards in the same box. Potentially I could do this as well, but my case can only take a regular and a mini-ATX at the same time, not two full-size boards.
I like the idea of that two-in-one solution - also a space saver; I currently have two large cases, and it is not a pretty sight.

Let me know if there are any good tutorials online to help get into custom water-cooling; this looks like my kind of sport.
That radiator of yours sure looks nice and big. That may keep me busy until the next gen comes out.
I recall that back in 2017 I also returned an Nvidia 1080 Ti with an AIO, as it was way too noisy.


----------



## Alexxxx#€

Hello! I'm trying to flash my XFX Black [email protected] XTXH ROM, but the first attempt was unsuccessful: I lost SAM and the maximum frequency was 500 MHz, and it only worked once Windows booted. Now I'm going to try again, but first I wanted to know if anyone knows where to download the XFX Limited Edition BIOS running at 2490 MHz.





XFX Speedster MERC 319 AMD Radeon™ RX 6900 XT Limited Black Gaming Graphics Card with 16GB GDDR6, AMD RDNA™ 2


RX-69XTACSD9




www.xfxforce.com


----------



## J7SC

rodac said:


> Thanks @J7SC for your input - very interesting. I am also in the software field myself; for additional flexibility, I installed enterprise software on my own machines, though I cannot run them 24/7 due to the electricity bill, as this hardware is not expensed but my own. I am very keen on a silent solution like yours; those AIOs, including the Sapphire Toxic EE, are rather noisy. The Asus is noisier, probably because the pump is more powerful to compensate for the smaller radiator.
> 
> The AIO used for my CPU (an NZXT Kraken) is also very unpleasant - it sounds like a race car suddenly accelerating whenever the CPU briefly spikes to 4.9 GHz. Really noisy.
> I need to look into powerful custom liquid solution like yours, I do not mind the maintenance.
> 
> Yes, I should wait for the next gen, but I have this gut feeling that Nvidia and AMD are going to keep 'milking us' as others would put it.
> They may segment their next-generation chips into smaller performance increments and end up releasing many models, each with only around a 30% increase, even for their top-end GPU.
> Isn't that what they are already doing?
> 
> Great idea to have two MBs in the same box. Potentially I could do this as well, but my case can only take a regular and a mini ATX at the same time, not two full-size MBs.
> I like the idea of that two-in-one solution - also a space saver. I currently have two large cases and it is not a pretty sight.
> 
> Let me know if there are any good tutorials online to help me get into custom watercooling - this looks like my kind of sport.
> That radiator of yours sure looks nice and big. That may keep me busy until the next gen comes out.
> I recall that back in 2017 I also returned an Nvidia 1080 Ti with an AIO, as it was way too noisy.


Thanks ...re. water-cooling tutorials, there are simply too many to list, not to mention that it's a topic also fraught with 'fan-boyism' re. brands, rad, pump and fan models etc. There's the main thread at OCN (only 5.5k+ pages) and a whole pile of other resources. This YouTube channel by a South Korean company that builds a lot of workstations as well as gaming rigs is also worth a look or three...

With all that in mind, I have put together some very basic tips in the spoiler below re. how I build these things and some components I use most of the time, including in the 6900XT setup. As to dual-mobo, single-case systems, the fundamental advantage is footprint: even though you need really big cases ('eeATX'), you still save valuable desk real estate...On my main setup (spoiler), space is critical as my main work-play area also has a 40-inch and a 48-inch (OLED) monitor. BTW, notwithstanding the higher cost and extra care OLED takes, it is really worth it, IMO, if you spend a lot of time in front of monitors on a given day. Both mobos / GPUs are connected to both monitors for extra flexibility.



Spoiler



Advance planning is everything with custom loops. How much tubing (hard or soft) do you need (add 30% extra on top) ? How many and what type of fittings (I use mostly Koolance and XSPC)? What liquids...and on and on.

Some quick tips:

--- Thoroughly flush every part (block, pump, tubes, rads), even when new. I have NEVER had a new rad from any manufacturer that didn't have some 'crud' in it from the manufacturing process. Typically, I thoroughly flush everything in a bathtub with tap water until I don't see any more residue on the bathtub floor. Then I do two more flushes with tap water, followed by a couple of final, thorough flushes with distilled / deionized water.

--- Early on, you have to make a decision whether to mount everything (rads, pumps, reservoirs) in the case, or have a separate cooling area (I have done both). When it comes to mixing different metals, your loop should only have copper/brass and/or nickel-coated components (so no aluminum rads). 

--- To get great performance AND super-quiet systems, OVERSIZE the cooling. The 3950X / 6900XT section of Raven_A has 1200x63mm rad space (dual and triple core), the 5950X / 3090 section has 1350x63mm rad space (dual and triple core). With dual- and triple-core rads, you need fans that have enough static pressure...I always use push-pull setups that accomplish that without being loud. The single-case Raven_A has 46 x 120mm Arctic fans...this time around, instead of using the older but hi-po (louder) GentleTyphoon 3k rpm servo fans I have, I opted for the Arctic P12 PWM PST fans (five-pack for only C$35 for the no-RGB type). I run them at their max of around 1850 rpm, and in push-pull, they are highly effective but quiet. I don't actually hear the fans, I just hear the gentle 'whoosh' of air rushing through the expansive rad surface area as that number of fans moves a lot of air - the whole purpose of oversizing. With this kind of cooling, heat-soaking is not an issue, and the fans never rev up or down, whether on a demanding work project, gaming or benching.

--- Whichever rads you choose and wherever you mount them, make sure that the air bubbles which invariably get trapped can move out through the outlets (so outlets at the highest spot, as bubbles want to move up). For maintenance purposes, also go for thick rads that have an extra drain plug at the bottom (much easier down the line).

--- Custom loop sequence is important....always make sure you have the reservoir just before and above the pumps - most pumps (below) tend to last a very long time as long as they are always lubricated by the cooling liquid, since they only have one moving part spinning. In addition, once everything has been test-fitted and tubes cut, I prefill the rads in backwards order - this makes filling and subsequently bleeding the loop much easier. Have plenty of towels etc. under everything so no spillage ever gets onto the electronic components such as the mobo, PSU or GPU. Even if those components get wet, it's not the end of the world as long as you don't turn the main system on before cleaning up the spillage.

--- Pumps - now there's a big topic with a lot of dissension. I exclusively use D5 pumps, and always in at least dual (serial) sequence. While a single D5 is probably enough - they have a lot of theoretical flow given their larger internal diameter than, for example, DDC-style pumps (which are also good) - in practice, the larger internal diameter is more susceptible to brief pressure drops via trapped air etc. Dual D5s, on the other hand, solve that...I even run some complex systems w/ multi GPUs and multiple rads etc. that utilize triple D5s, but dual D5s are the sweet spot for me. Dual D5s also give you fail-over in case of an unlikely failure of one pump; besides, they go well with my 'oversize cooling' approach...and what is more, dual D5s can be adjusted to run at lower individual speeds for silent operation.

--- Whether hard or soft tubing (soft is easier for starters and/or complex multi-component systems), I use the larger 1/2-inch inner / 3/4-inch outer diameter tubing and fittings - the extra flow requirements are well taken care of by the dual(+) D5s. Try to avoid sharp bends wherever possible. For ease of maintenance, I also build in quick-disconnects (I use Koolance QD4s, pic below) which also have a low flow restriction to boot. QDs are also useful if you change a component, i.e. get a new air-cooled GPU which you later want to add to your custom loop.

--- Cooling liquid is another one of those topics that can end up in a lot of dissension. For me, anything with coarser (or any) particulates in it, such as pastels, is off limits. Use RGB instead to get the desired lighting effects. Pre-mix typically comes with the required anti-fungal and anti-corrosive additives. You can also just use deionized (or distilled) water with a few drops of concentrate additives (per respective instructions) - a much cheaper solution btw, not least as premix is usually 95%++ deionized / distilled water anyway.

--- Reservoirs...there are some really big, fancy ones available - but if space is an issue, as it would be with a dual-mobo, dual-loop build, go for the smaller ones. I've been using the Swiftech Micro Res2 (pic below) ever since I started custom cooling...they work great, and are actually among the best solutions for bleeding a loop of air bubbles due to a special feature inside. In addition, I also use 'XSPC Ion' reservoirs. The reservoirs have 4 and 6 screw-in type plugs for extra mounting options (per above, whatever the option, the reservoir's main outlet should be just ahead of and above the pump/s).

--- When it is all ready to go and the loop is filled up for the first time, put paper towels under every fitting for a test run. I always use PSU power from a testbench to drive the pumps of a new loop / system so that I don't have to turn the main mobo etc. on before leak-testing (some PSUs, such as many Seasonics, also come with a special plug that allows the new system's PSU to power just the circuit the pumps are on). I typically leak-test a new loop for at least 3 - 4 hrs. If there are no leaks, then I run it for a few more hours via the 'surrogate' PSU but with the reservoir fill plug opened a bit (be careful about surges which might spill some liquid out when turning it on / off) to help bleed some excess air out. Finally, after a week or so of regular use, everything should have settled, and while the system is off, you might want to make sure that everything (i.e. fittings, blocks) is mounted correctly and tightly. Then, you can leave it alone for 12 - 24 months and just enjoy. Keep an eye on your reservoir coolant level (warm vs. cold) and also the coolant colour over time...if a clear liquid gets opaque / milky, it's time to investigate...

--- I am sure there are many other things folks would do differently or add...such as using an air pressure pump to test out the loop re. leaks, and also add a flow meter into the loop. All good points, but in the decade or so of building custom loops for work and play, I can usually tell if something isn't working right...and the final test is of course using HWInfo to check your temps for CPU, GPU etc during heavy stress testing. If it ain't broke, don't fix it...

--- As a final tip, some 'paranoia': I wear a mask and gloves when working with cleaned and still-open loop components...no sense cleaning everything and then touching or sneezing on open components...
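As a quick sanity-check of the fan counts above (a rough sketch only - the rad lengths, 120 mm fan size and push-pull doubling come from the tips; how the remaining fans are used is my guess):

```python
# Back-of-envelope check of the "oversize the cooling" numbers above.
# Rad lengths and the push-pull arrangement are from the post;
# the arithmetic and the rad-fan vs case-fan split are mine.

FAN_SIZE_MM = 120

def fans_push_pull(rad_length_mm: int) -> int:
    """Fan count for a push-pull setup along a given radiator length."""
    positions = rad_length_mm // FAN_SIZE_MM  # fan positions per side
    return positions * 2                      # push side + pull side

# 3950X / 6900XT section: 1200x63 mm rad space
# 5950X / 3090 section:   1350x63 mm rad space
section_a = fans_push_pull(1200)  # 10 positions -> 20 fans
section_b = fans_push_pull(1350)  # 11 positions -> 22 fans

print(section_a, section_b, section_a + section_b)  # 20 22 42
```

That lands at 42 rad fans against the 46 Arctic fans mentioned, so presumably the remaining four are plain case fans.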


----------



## CS9K

J7SC said:


> Thanks ...re. water-cooling tutorials, there are simply too many to list, not to mention that it's a topic also fraught with 'fan-boyism' re. brands, rad, pump and fan models etc. There's the main thread at OCN (only 5.5k+ pages) and a whole pile of other resources. This YouTube channel by a South Korean company that builds a lot of workstations as well as gaming rigs is also worth a look or three...
> 
> With all that in mind, I have put together some very basic tips in the spoiler below re. how I build these things and some components I use most of the time, including in the 6900XT setup. As to dual-mobo, single-case systems, the fundamental advantage is footprint: even though you need really big cases ('eeATX'), you still save valuable desk real estate...On my main setup (spoiler), space is critical as my main work-play area also has a 40-inch and a 48-inch (OLED) monitor. BTW, notwithstanding the higher cost and extra care OLED takes, it is really worth it, IMO, if you spend a lot of time in front of monitors on a given day. Both mobos / GPUs are connected to both monitors for extra flexibility.
> 
> 
> 
> Spoiler
> 
> 
> 
> Advance planning is everything with custom loops. How much tubing (hard or soft) do you need (add 30% extra on top) ? How many and what type of fittings (I use mostly Koolance and XSPC)? What liquids...and on and on.
> 
> Some quick tips:
> 
> --- Thoroughly flush every part (block, pump, tubes, rads), even when new. I have NEVER had a new rad from any manufacturer that didn't have some 'crud' in it from the manufacturing process. Typically, I thoroughly flush everything in a bathtub with tap water until I don't see any more residue on the bathtub floor. Then I do two more flushes with tap water, followed by a couple of final, thorough flushes with distilled / deionized water.
> 
> --- Early on, you have to make a decision whether to mount everything (rads, pumps, reservoirs) in the case, or have a separate cooling area (I have done both). When it comes to mixing different metals, your loop should only have copper/brass and/or nickel-coated components (so no aluminum rads).
> 
> --- To get great performance AND super-quiet systems, OVERSIZE the cooling. The 3950X / 6900XT section of Raven_A has 1200x63mm rad space (dual and triple core), the 5950X / 3090 section has 1350x63mm rad space (dual and triple core). With dual- and triple-core rads, you need fans that have enough static pressure...I always use push-pull setups that accomplish that without being loud. The single-case Raven_A has 46 x 120mm Arctic fans...this time around, instead of using the older but hi-po (louder) GentleTyphoon 3k rpm servo fans I have, I opted for the Arctic P12 PWM PST fans (five-pack for only C$35 for the no-RGB type). I run them at their max of around 1850 rpm, and in push-pull, they are highly effective but quiet. I don't actually hear the fans, I just hear the gentle 'whoosh' of air rushing through the expansive rad surface area as that number of fans moves a lot of air - the whole purpose of oversizing. With this kind of cooling, heat-soaking is not an issue, and the fans never rev up or down, whether on a demanding work project, gaming or benching.
> 
> --- Whichever rads you choose and wherever you mount them, make sure that the air bubbles which invariably get trapped can move out through the outlets (so outlets at the highest spot, as bubbles want to move up). For maintenance purposes, also go for thick rads that have an extra drain plug at the bottom (much easier down the line).
> 
> --- Custom loop sequence is important....always make sure you have the reservoir just before and above the pumps - most pumps (below) tend to last a very long time as long as they are always lubricated by the cooling liquid, since they only have one moving part spinning. In addition, once everything has been test-fitted and tubes cut, I prefill the rads in backwards order - this makes filling and subsequently bleeding the loop much easier. Have plenty of towels etc. under everything so no spillage ever gets onto the electronic components such as the mobo, PSU or GPU. Even if those components get wet, it's not the end of the world as long as you don't turn the main system on before cleaning up the spillage.
> 
> --- Pumps - now there's a big topic with a lot of dissension. I exclusively use D5 pumps, and always in at least dual (serial) sequence. While a single D5 is probably enough - they have a lot of theoretical flow given their larger internal diameter than, for example, DDC-style pumps (which are also good) - in practice, the larger internal diameter is more susceptible to brief pressure drops via trapped air etc. Dual D5s, on the other hand, solve that...I even run some complex systems w/ multi GPUs and multiple rads etc. that utilize triple D5s, but dual D5s are the sweet spot for me. Dual D5s also give you fail-over in case of an unlikely failure of one pump; besides, they go well with my 'oversize cooling' approach...and what is more, dual D5s can be adjusted to run at lower individual speeds for silent operation.
> 
> --- Whether hard or soft tubing (soft is easier for starters and/or complex multi-component systems), I use the larger 1/2-inch inner / 3/4-inch outer diameter tubing and fittings - the extra flow requirements are well taken care of by the dual(+) D5s. Try to avoid sharp bends wherever possible. For ease of maintenance, I also build in quick-disconnects (I use Koolance QD4s, pic below) which also have a low flow restriction to boot. QDs are also useful if you change a component, i.e. get a new air-cooled GPU which you later want to add to your custom loop.
> 
> --- Cooling liquid is another one of those topics that can end up in a lot of dissension. For me, anything with coarser (or any) particulates in it, such as pastels, is off limits. Use RGB instead to get the desired lighting effects. Pre-mix typically comes with the required anti-fungal and anti-corrosive additives. You can also just use deionized (or distilled) water with a few drops of concentrate additives (per respective instructions) - a much cheaper solution btw, not least as premix is usually 95%++ deionized / distilled water anyway.
> 
> --- Reservoirs...there are some really big, fancy ones available - but if space is an issue, as it would be with a dual-mobo, dual-loop build, go for the smaller ones. I've been using the Swiftech Micro Res2 (pic below) ever since I started custom cooling...they work great, and are actually among the best solutions for bleeding a loop of air bubbles due to a special feature inside. In addition, I also use 'XSPC Ion' reservoirs. The reservoirs have 4 and 6 screw-in type plugs for extra mounting options (per above, whatever the option, the reservoir's main outlet should be just ahead of and above the pump/s).
> 
> --- When it is all ready to go and the loop is filled up for the first time, put paper towels under every fitting for a test run. I always use PSU power from a testbench to drive the pumps of a new loop / system so that I don't have to turn the main mobo etc. on before leak-testing (some PSUs, such as many Seasonics, also come with a special plug that allows the new system's PSU to power just the circuit the pumps are on). I typically leak-test a new loop for at least 3 - 4 hrs. If there are no leaks, then I run it for a few more hours via the 'surrogate' PSU but with the reservoir fill plug opened a bit (be careful about surges which might spill some liquid out when turning it on / off) to help bleed some excess air out. Finally, after a week or so of regular use, everything should have settled, and while the system is off, you might want to make sure that everything (i.e. fittings, blocks) is mounted correctly and tightly. Then, you can leave it alone for 12 - 24 months and just enjoy. Keep an eye on your reservoir coolant level (warm vs. cold) and also the coolant colour over time...if a clear liquid gets opaque / milky, it's time to investigate...
> 
> --- I am sure there are many other things folks would do differently or add...such as using an air pressure pump to test out the loop re. leaks, and also add a flow meter into the loop. All good points, but in the decade or so of building custom loops for work and play, I can usually tell if something isn't working right...and the final test is of course using HWInfo to check your temps for CPU, GPU etc during heavy stress testing. If it ain't broke, don't fix it...
> 
> --- As a final tip, some 'paranoia': I wear a mask and gloves when working with cleaned and still-open loop components...no sense cleaning everything and then touching or sneezing on open components...
> 
> View attachment 2559085
> 
> 
> 
> 
> View attachment 2559087
> 
> 
> View attachment 2559088
> 
> 
> View attachment 2559089
> 
> 
> View attachment 2559092
> 
> 
> View attachment 2559093


Also never be afraid to break out the ShiatCAD™ in MS Paint/GIMP and go at it, @rodac. Proper visualization of how things are going to go down, plus a proper count of fittings, angle-adapters, and layout will go SO far with how the final work of art comes together in the end!
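CS9K's planning point can even be reduced to a few lines of arithmetic - a minimal sketch of a parts tally, reusing the 30%-extra-tubing margin suggested in the tips above. All segment lengths and fitting counts below are made-up examples:

```python
# Hypothetical loop-planning tally. The 30% tubing margin comes from
# the thread's tips; every length and count here is an invented example.

segments_cm = [25, 40, 15, 30, 20]  # tube runs between components
fittings = {"straight": 10, "90-degree": 4, "quick-disconnect": 2}

tubing_needed = sum(segments_cm) * 1.3  # add ~30% on top of the runs
print(f"Order at least {tubing_needed:.0f} cm of tubing")  # 169 cm
print(f"Total fittings: {sum(fittings.values())}")         # 16
```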


----------



## rodac

Tonight, I followed a link from an article in a blog posted at Overclockers UK









Yes. You Can Play Cyberpunk 2077 in VR! - Overclockers UK


You can play Cyberpunk 2077 in virtual reality! In this blog we’re breaking down everything you need for a trip to Night City in VR.




www.overclockers.co.uk





I read that it lets you play major games like Cyberpunk in VR.
I had to give this a go, and amazingly it works perfectly, as if the developers had intended us to use VR.
The developer, Luke Ross, charges a monthly subscription of 8 pounds, so you do not get it for free, of course.
A bit of tinkering is required though.
Simply amazing: 9 out of 10. I am just curious to know how many of you already knew about this. I fully recommend it. It is not for the average gamer with an average machine, but then this community is certainly not for average folks.


----------



## wermad

My card developed a rattle. It's not the pump. I can hear that and it's tolerable. It's gotten louder these last few days where it's audible from where I sit. I tried putting the card horizontal and it doesn't make a difference. It almost feels like the heatsink or shroud is rattling. It's a week old, so I'm returning it for a new one. Hopefully I don't have more issues. I've seen one video of a really loud pump, but mine is a rattle noise. It's not coil whine from what I experienced in the past. The rattle is present at idle and it happens every 2-3 seconds. Might be a good time to switch to the TT view 51 to ease the tube run on the P8 while I wait.


----------



## J7SC

wermad said:


> My card developed a rattle. It's not the pump. I can hear that and it's tolerable. It's gotten louder these last few days where it's audible from where I sit. I tried putting the card horizontal and it doesn't make a difference. It almost feels like the heatsink or shroud is rattling. It's a week old, so I'm returning it for a new one. Hopefully I don't have more issues. I've seen one video of a really loud pump, but mine is a rattle noise. It's not coil whine from what I experienced in the past. The rattle is present at idle and it happens every 2-3 seconds. Might be a good time to switch to the TT view 51 to ease the tube run on the P8 while I wait.


...if it is not the pump or coil whine, and the rattle is intermittent (irregular?), it might be related to the fan as the only moving parts are the pump and fan assembly. Anyway, better to let the vendor deal w/ it via RMA for a card this new.


----------



## wermad

J7SC said:


> ...if it is not the pump or coil whine, and the rattle is intermittent (irregular?), it might be related to the fan as the only moving parts are the pump and fan assembly. Anyway, better to let the vendor deal w/ it via RMA for a card this new.


At idle the Toxic stops all the fans, including the gpu one. 

I undid the card and redid the vertical mounting. Ran Afterburner to ramp up the fans and it seems to have subsided. I'll give it a couple of days before returning it. I can't for the life of me find any pump control in the Trixx software, and the card is wired with a 5/6-pin setup, so I can't hook the fans/pump to the mb. I'll reach out to Sapphire and ask if there's any way to control the pump. 

Best way I can describe the noise is like HDD disk noise. But it's definitely not the pump. I've not used an HDD in forever. I feel like the pump vibrations are causing something else to vibrate or resonate.


----------



## rodac

wermad said:


> At idle the Toxic stops all the fans, including the gpu one.
> 
> I undid the card and redid the vertical mounting. Ran Afterburner to ramp up the fans and it seems to have subsided. I'll give it a couple of days before returning it. I can't for the life of me find any pump control in the Trixx software, and the card is wired with a 5/6-pin setup, so I can't hook the fans/pump to the mb. I'll reach out to Sapphire and ask if there's any way to control the pump.
> 
> Best way I can describe the noise is like HDD disk noise. But it's definitely not the pump. I've not used an HDD in forever. I feel like the pump vibrations are causing something else to vibrate or resonate.


That must be very frustrating - that is the best 6900 there is out there. I will try to help, though, simply using my recent experience.
I still believe that this is a pump noise, but not the normal operating noise you should be hearing. That noise is very, very likely to happen because an air bubble is trapped at the pump level, which often occurs during transportation.
It just happened to me.
I recently purchased an Asus liquid-cooled 3090 Ti that has an AIO block with 2 fans. Immediately after starting up, I heard an *irregular* noise like HDD rattling - a noise that I believe is like the one you are describing - and I found this very stressful.
Then I ignored the noise for some time and went on with benchmarking the new GPU; I found it to perform at best 5% better than the Sapphire Extreme.
I decided to RMA it on the grounds that the performance was not within my expectations and also because the noise coming from the pump was not normal.
As it appears that the most common issue with AIOs is air bubbles at the pump level, I read that this kind of noise can be resolved.
I went on removing it to put it back in the box, and just before I did, I switched on the PC again. While the GPU was up and running, I moved the hose and the attached radiator gently up and then down; there was more unpleasant noise, and eventually the noise became the regular soft humming you would expect from any AIO pump.
When an air bubble is trapped, it produces that sort of scary, intermittent, irregular noise.
Of course, the radiator should be mounted above the GPU, not below.
I'm unable to post the mp4 file, as the forum has banned this file extension.


----------



## wermad

I tried software and performance mode and the rattle is not that bad.

I can hear the pump and the rattle happens every few seconds.

I did a bit more research and found some folks say it's coil or pump vibrations. I'll do a bit more testing before sending her off to the retailer.

Edit: the rattle is very low when mounted horizontal. Gonna send it back and email Sapphire support about how it sounds when mounted vertically. The pump sounds the same in either position.


----------



## Godhand007

Question about the *LC OEM bios*: I have it flashed on my Toxic, but when I try to OC the memory, it won't even reach 60 MHz above the 2310 MHz stock. I actually start seeing frames missing during gameplay or TS even at 2370 MHz. What experiences have the guys who flashed this bios had with memory overclock?

Also, I tried increasing the memory voltage from 1.400 to 1.425, but it didn't seem to help.


----------



## 99belle99

Anyone in Europe looking for a reference 6900 XT, it is still available to buy direct from AMD right now. This is the first time in well over a year it is still in stock after a drop. Usually they are sold out in seconds after they go live.


----------



## J7SC

99belle99 said:


> Anyone in Europe looking for a reference 6900 XT, it is still available to buy direct from AMD right now. This is the first time in well over a year it is still in stock after a drop. Usually they are sold out in seconds after they go live.


...probably because the rumoured update (ie. 6950XT) is supposed to hit in less than 10 days ?


----------



## Mezar Kurin

Hi,

Jumped on here as I just got a Sapphire RX 6900 XT Toxic after months of waiting for stock and for the price to be reasonable - looking for info and trying to track down exactly how to set the 3-position BIOS switch. 

I have conflicting information. From the little I can find online, Quiet mode is with the switch to the left (towards the connectors), middle is performance, and far right is switchable via software. I've also seen references to the card being supplied with performance mode selected, and when I got the card the switch was to the far left - a little confused, so help welcome.

I would use TriXX to check, but - and this is probably self-inflicted as I run a Windows 11 Dev build - with the latest version of Adrenalin on my build, TriXX runs but does not see half the card metrics or allow Turbo Boost, let alone show which BIOS is active. When I did have a working TriXX, I just got the choice between Primary and Secondary - which is the Performance BIOS?

The Windows Update site does provide a newer driver, and this allows TriXX to work, but then Adrenalin complains that the configuration is unsupported.....

I seem to need to run Adrenalin as I need a game profile that prevents the card from dropping down to 500 MHz and causing stuttering in some games.

All the above aside, it is a great card and is around 50% faster than my 2080 Ti

Thanks


----------



## rodac

Mezar Kurin said:


> Hi,
> 
> Jumped on here as I just got a Sapphire RX 6900 XT Toxic after months of waiting for stock and for the price to be reasonable - looking for info and trying to track down exactly how to set the 3-position BIOS switch.
> 
> I have conflicting information. From the little I can find online, Quiet mode is with the switch to the left (towards the connectors), middle is performance, and far right is switchable via software. I've also seen references to the card being supplied with performance mode selected, and when I got the card the switch was to the far left - a little confused, so help welcome.
> 
> I would use TriXX to check, but - and this is probably self-inflicted as I run a Windows 11 Dev build - with the latest version of Adrenalin on my build, TriXX runs but does not see half the card metrics or allow Turbo Boost, let alone show which BIOS is active. When I did have a working TriXX, I just got the choice between Primary and Secondary - which is the Performance BIOS?
> 
> The Windows Update site does provide a newer driver, and this allows TriXX to work, but then Adrenalin complains that the configuration is unsupported.....
> 
> I seem to need to run Adrenalin as I need a game profile that prevents the card from dropping down to 500 MHz and causing stuttering in some games.
> 
> All the above aside, it is a great card and is around 50% faster than my 2080 Ti
> 
> Thanks


I own this GPU but have only ever used Windows 10. I expected that Windows 11 would potentially cause driver issues, which is why I did not upgrade.








TOXIC AMD Radeon™ RX 6900 XT Extreme Edition


TOXIC AIO Cooling Technology - One Click TOXIC BOOST Up to 2730 MHz




www.sapphiretech.com












It is safe to have option 1 enabled all the time, as it is not the most powerful mode. 
The most powerful mode is enabled in TriXX and is called Toxic Boost - a one-click mode that stretches the GPU further

I cannot comment on the Adrenalin issue with Windows 11, as I have not experienced it. I did have issues with Adrenalin about 6 months ago, but all of them were fixed.
This GPU is very good - I even tried to upgrade to a 3090 Ti and was disappointed by the Nvidia card's performance versus the 6900


----------



## alceryes

Godhand007 said:


> So, I guess I might have to pull the trigger on Antec 1300. Still, not an ideal option but looks decent considering price and performance. 1600i would have cost at least double of Antec.


That Antec is a great PSU - you won't be disappointed.


----------



## alceryes

J7SC said:


> ...probably because the rumoured update (ie. 6950XT) is supposed to hit in less than 10 days ?


I'm waiting for the monster (aka RX 7900 XT) to be released end of the year. The rumored specs are so good they're scary.
Do I need it? No. Am I getting it? Probably


----------



## J7SC

alceryes said:


> I'm waiting for the monster (aka RX 7900 XT) to be released end of the year. The rumored specs are so good they're scary.
> Do I need it? No. Am I getting it? Probably


I think you're right re. the monster 7900XT...something about RDNA3 seems to have scared NVIDIA, as they're meandering to the bar stool with up to 2x 600W power connector GPUs. Hopefully, AMD has a sense of humour and there will be a 7990XT single-PCB dual-GPU card, using AMD's recent patent for mGPU integration, making those appear as a single unit to the drivers / Win 11.


----------



## Mezar Kurin

rodac said:


> I own this GPU but have only ever used Windows 10. I expected that Windows 11 might cause driver issues, which is why I didn't upgrade.
> 
> 
> 
> 
> 
> 
> 
> 
> TOXIC AMD Radeon™ RX 6900 XT Extreme Edition
> 
> 
> TOXIC AIO Cooling Technology - One Click TOXIC BOOST Up to 2730 MHz
> 
> 
> 
> 
> www.sapphiretech.com
> 
> 
> 
> 
> View attachment 2559282
> 
> It is safe to have option 1 enabled all the time, as this is not the most powerful mode.
> The most powerful mode is enabled in TriXX and is called TOXIC Boost; it's a one-click mode that pushes the GPU further.
> 
> I can't comment on the Adrenalin issue with Windows 11, as I have not experienced it. I did have issues with Adrenalin about six months ago, but all of them were fixed.
> This GPU is very good; I even tried upgrading to a 3090 Ti and was disappointed by the Nvidia card's performance versus the 6900.


Thanks for the info. I feel daft now; I completely missed the switch position stuff on that page, though it's not where I would have expected to find it - the manual (what there is of it) would have been the logical place.


----------



## alceryes

J7SC said:


> I think you're right re. the monster 7900XT...s.th. about RDNA3 seems to have scared NVidia so they're meandering to the bar stool with up to 2x 600W power connector GPUs. Hopefully, AMD has a sense of humour and there will be a 7990XT single PCB dual GPU card, using AMD's recent patent for mGPU integration, making those appear as a single unit to the drivers / Win 11.


It's looking that way.
According to some reports, NVIDIA hadn't planned to push that much juice through their 4090/4080 cards until they found out what AMD was up to. Once they found out the potential performance of the 7900 they had no choice but to crank up the watts to at least be competitive. Most sources say it still won't be enough and NVIDIA's top card will be relegated to second or maybe even _third place_ (not counting RT).

Thing is, I'm also on an old Z370 platform. Even though it performs like a champ, I might want to upgrade everything BUT my GPU to an AM5 platform first. Decisions, decisions.


----------



## J7SC

alceryes said:


> It's looking that way.
> According to some reports, NVIDIA hadn't planned to push that much juice through their 4090/4080 cards until they found out what AMD was up to. Once they found out the potential performance of the 7900 they had no choice but to crank up the watts to at least be competitive. Most sources say it still won't be enough and NVIDIA's top card will be relegated to second or maybe even _third place_ (not counting RT).
> 
> Thing is, I'm also on an old Z370 platform. Even though it performs like a champ, I might want to upgrade everything BUT my GPU to an AM5 platform first. Decisions, decisions.


...I'm not sure that the GPU factory PL race in 2022 is 'environmentally sound', but it nevertheless is a ton of fun to forecast. 1x or even 2x 600W by the end of the year - really? As to CPU upgrades, AM5 could be good (ditto for Intel's next competing play then), but for now, the 6900XT in my 3950X daily workhorse still makes me grin, even if it is slower at RT and 4K than my 3090 dedicated game system. As long as my GPUs can 'fill' my monitors at max settings, I'm happy.

My 6900XT is the regular XTX version, albeit with a 3x8-pin PCB and very good cooling that takes advantage of its lowly ASIC value. It is really via MPT - which reminds me of those European Advent calendars before Christmas - that the 6900XT comes to life every time I open another little door in MPT for a 'chocolate treat' (per below, I nibbled a bit more tonight, though still not using max clocks or all the MPT options). 'Upgrading' to a 6950XT makes zero sense to me at this stage...the only thing I would like is to finally dump that disgusting, artificial 2150MHz VRAM limit, as I keep posting about.

Overall, my 6900XT is more fun to fiddle with (and thus more engaging) than the 3090, which, apart from 1 of 3 vBIOS choices (up to 1 kW), is 'set and done' unless I want to get into custom -smi command lines and curve mods etc., though its 4K / RT performance makes up for the lack of tuning engagement. Port Royal w/ ray tracing is obviously not the strong suit of RDNA2, yet the results below place it somewhere between an RTX 3070 and 3080 already...


----------



## alceryes

J7SC said:


> Port Royal w/ray tracing is obviously not the strong suit of RDNA2 yet the results below place it somewhere between a RTX 3070 and 3080 already...


What's your Time Spy (not Extreme) graphics score with that tricked-out 6900 XT?


----------



## J7SC

alceryes said:


> What's your TIme Spy (not extreme) graphics score with that tricked out 6900 XT?


...I rarely run TimeSpy (usually do 4K or ray tracing instead so that I can compare it to the 3090), but here are a couple of quick runs...ambient was 25 C, will try to do proper runs soon.


----------



## PG705

Do you think it will be possible to flash a 6950 XT BIOS on the 6900 XT? I would like to get 18 Gbps memory speed on my Strix LC TOP.


----------



## jonRock1992

PG705 said:


> Do you think it will be possible to flash a 6950 XT BIOS on the 6900 XT? I would like to get 18 Gbps memory speed on my Strix LC TOP.


I'm hoping so lol. I just need a 6950XT GPU to release without a USB-C output, and then I'm good to go. As far as I know, the 6950XT is basically just the 6900XTLC.


----------



## alceryes

J7SC said:


> ...I rarely run TimeSpy (usually do 4K or ray tracing instead so that I can compare it to the 3090), but here are a couple of quick runs...ambient was 25 C, will try to do proper runs soon.
> View attachment 2559442


Thanks. This helps me.
I think I should be happy that my reference 6900 XT gets a 22k graphics score. I've only done the backplate thermal pad mod and have it undervolted, so it runs nice and cool.


----------



## PG705

jonRock1992 said:


> I'm hoping so lol. I just need a 6950XT GPU to release without a USB-C output, and then I'm good to go. As far as I know, the 6950XT is basically just the 6900XTLC.


Yeah, it's the XTXH die with fast memory, so probably the same as the 6900 XT LC. 

As I'm completely inexperienced with Linux, is it possible to write the LC BIOS with MPT?


----------



## alceryes

PG705 said:


> Yeah, it's the XTXH die with fast memory, so probably the same as the 6900 XT LC.
> 
> As I'm completely inexperienced with Linux, is it possible to write the LC BIOS with MPT?


Hmmm, I'm not sure the flashing will work. Manufacturers often lock out flashing between different models.

Cheers to the first one that does it and either bricks it or creates a performance king! 🍻
(bigger stones than I)


----------



## PG705

alceryes said:


> Hmmm, I'm not sure the flashing will work. Many times mfgs. lock out flashing between different models.
> 
> Cheers to the first one that does it and either bricks it or creates a performance king! 🍻
> (bigger stones than I)


Hmm, okay. I hope that it will be possible to flash the LC or 6950 XT BIOS with MPT or at least in Windows in the future. I feel like the 6900 XT is bandwidth starved at high core clocks.


----------



## J7SC

jonRock1992 said:


> I'm hoping so lol. I just need a 6950XT GPU to release without a USB-C output, and then I'm good to go. As far as I know, the 6950XT is basically just the 6900XTLC.





PG705 said:


> Yeah, it's the XTXH die with fast memory, so probably the same as the 6900 XT LC.
> 
> As I'm completely unexperienced with Linux, is it possible to write the LC BIOS with MPT?





alceryes said:


> Hmmm, I'm not sure the flashing will work. Many times mfgs. lock out flashing between different models.
> 
> Cheers to the first one that does it and either bricks it or creates a performance king! 🍻
> (bigger stones than I)


...I'm sincerely hoping that each vendor's 6950XT model is basically the same as their current 6900XT, only w/ a 10% higher factory PL (hello, MPT) and faster VRAM - the latter being my only remaining nemesis on my current card. It does have dual BIOS, but it is also my main work machine, so I won't be the first to try to flash the relevant 6950XT vBIOS, but I may be the second...


----------



## PG705

J7SC said:


> ...I'm sincerely hoping that each vendors' 6950XT model is basically the same as their current 6900XT, only w/ 10% higher factory PL (Hello, MPT) and faster VRAM - the latter being my only remaining nemesis on my current card. It does have 'dual bios', but it is also my main work machine setup, so I won't be the first to try to flash the relevant 6950XT vbios, but may be the second...


Haha I’m exactly in the same boat as you. MPT is great for extra power, but faster memory speed would be great.


----------



## alceryes

@J7SC @PG705 
Do you just have the VRAM set at 2150MHz? Are you running fast memory timings too?


----------



## PG705

alceryes said:


> @J7SC @PG705
> Do you just have the VRAM set at 2150MHz? Are you running fast memory timings too?


Yep. It’s an improvement over stock of course, but more is better


----------



## J7SC

alceryes said:


> @J7SC @PG705
> Do you just have the VRAM set at 2150MHz? Are you running fast memory timings too?


Yes, fast timings + 2150 - tested step by step with increased efficiency and fps etc right up to the 2150 fast-timing limit, if only I could get beyond that...


----------



## PG705

Okay, so I managed to install Ubuntu on a USB stick. I put the LC BIOS in the same folder as the amdvbflash file, but when I run $ sudo ./amdvbflash -i, it says 'adapter not found'.
How can I solve this? I'm a complete noob with this OS; I was just using the tutorial from @weleh on page 218. Anyone who can help me?


----------



## jonRock1992

You need "-f -p 0 biosname.rom" after "sudo ./amdvbflash". Replace the 0 with the index number of your GPU; if it's the only GPU in your system, just use 0. Don't put -i in the command.

So basically, use "sudo ./amdvbflash -f -p 0 biosname.rom" if there is only one GPU in your system, and obviously change "biosname" to whatever your BIOS file name is. If you don't have a dual-BIOS switch, you'd better make absolutely sure that your GPU/motherboard combo is compatible with it.
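For anyone following along at home, the steps above can be collected into one small script. This is only a sketch: the flags (-i to list adapters, -s to save a backup, -f -p <idx> to force-flash) follow commonly documented atiflash/amdvbflash usage, so double-check them against your build's own help output, and leave DRY_RUN=1 until you're sure.

```shell
#!/bin/sh
# Hypothetical wrapper around the amdvbflash workflow described above.
# DRY_RUN=1 (the default) only prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
BIOS=biosname.rom   # replace with your actual BIOS file name
IDX=0               # GPU index as reported by 'amdvbflash -i'

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# 1. List adapters to find your 6900 XT's index
run sudo ./amdvbflash -i
# 2. (Strongly recommended) back up the current vBIOS first
run sudo ./amdvbflash -s "$IDX" backup.rom
# 3. Force-flash the new image to that adapter
run sudo ./amdvbflash -f -p "$IDX" "$BIOS"
```

Run it once with DRY_RUN=1 to see exactly what would be executed, and only then with DRY_RUN=0 (ideally with the dual-BIOS switch on the backup position).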


----------



## PG705

jonRock1992 said:


> You need "-f -p 0 biosname.rom" after "sudo ./amdvbflash " replace the 0 with the index number of your GPU. So if it's the only GPU in your system then just use 0. Don't put -i in the command.
> 
> So basically just use "sudo ./amdvbflash -f -p 0 biosname.rom" if there is only one GPU in your system. And obviously change "biosname" to whatever your bios file name is. If you don't have a dual bios switch, you better make absolutely sure that your GPU/motherboard combo is compatible with it.


Thanks for your reply. My card (Strix LC with XTXH) should be compatible with the LC bios and it has a double BIOS switch.
When I enter the command you mentioned, it still gives: ‘adapter not found’ but this time with an error code: ‘0FL01’. Seems like my GPU isn’t detected?


----------



## jonRock1992

PG705 said:


> Thanks for your reply. My card (Strix LC with XTXH) should be compatible with the LC bios and it has a double BIOS switch.
> When I enter the command you mentioned, it still gives: ‘adapter not found’ but this time with an error code: ‘0FL01’. Seems like my GPU isn’t detected?


Maybe my command is wrong? Or your GPU has a different index number? Not entirely sure - it's been a while since I've attempted a flash.


----------



## PG705

jonRock1992 said:


> Maybe my command is wrong? Or your GPU has a different index number? Not entirely sure. It's been awhile since I've attempted a flash.


I can try using DOS if that works too?


----------



## jonRock1992

PG705 said:


> I can try using DOS if that works too?


Has to be Linux, I believe.


----------



## PG705

jonRock1992 said:


> Has to be Linux I believe


Okay. If I use -p 1 instead of -p 0, it also doesn't work. Do I need to install GPU drivers somehow, maybe?


----------



## 99belle99

PG705 said:


> Okay. If I use p 1 instead of p 0, it also doesn’t work. Do I need to install gpu drivers somehow maybe?


I don't know why -i isn't listing your card, as that command lists all cards. Are you running Ubuntu through the graphics card or an iGPU in an Intel CPU?
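One thing worth ruling out before fighting with amdvbflash: whether the kernel even enumerates the card. A minimal sketch (the helper name and the sample lspci line are mine, not from the thread; Navi 21 reference boards report PCI vendor:device 1002:73bf):

```shell
# check_navi21 greps lspci-style output for an AMD Navi 21 PCI ID.
# In practice you would feed it "$(lspci -nn)"; a sample line stands in here.
check_navi21() {
  printf '%s\n' "$1" | grep -qi '1002:73' \
    && echo "Navi 21 visible" \
    || echo "adapter not found"
}

sample='0b:00.0 VGA compatible controller [0300]: AMD/ATI Navi 21 [1002:73bf] (rev c0)'
check_navi21 "$sample"        # prints: Navi 21 visible
check_navi21 'no gpu here'    # prints: adapter not found
```

If lspci doesn't show the card at all, it's a slot/riser/firmware problem rather than an amdvbflash one.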


----------



## tommyd2k

weleh said:


> What is it that you want to know?
> What is it that you are asking?
> 
> I'm sorry but with your wall of text, I couldn't really understand what you want to know.
> 
> I have massive experience with EVC , MPT and 6900XT so if you can put it bluntly and simply, I can give you a hand.


Sorry I didn't reply sooner. I want to hook up the evc2 again. I need to know what settings are ok to adjust. All I've really done with it was copy the settings off the EVC2 forum. I couldn't find any guides on using it safely. I did figure out how to get the voltage higher, and doing so was getting results. Then I changed something that made my card shut down and it wouldn't power on again for 15 minutes. I haven't used the EVC2 since then.


----------



## PG705

99belle99 said:


> I don't know why -i is not listing your card as that command lists all cards. Are you running Ubuntu through the graphics card or a iGPU in a Intel CPU?


I believe through my GPU. The cable to my monitor is connected to my GPU, and in Ubuntu the graphics adapter shows as the 6900 XT.


----------



## jonRock1992

PG705 said:


> I believe through my gpu. The cable to my monitor is connected to my gpu and in Ubuntu the graphics adapter is the 6900 XT.


Try a different OS. I use Ubuntu Budgie.


----------



## alceryes

J7SC said:


> Yes, fast timings + 2150 - tested step by step with increased efficiency and fps etc right up to the 2150 fast-timing limit, if only I could get beyond that...


Mine is unstable in Time Spy with fast timings AND 2150MHz.
But even in other benchmarks that I am able to pass, I get better scores with my VRAM at fast timings and only 2100MHz than at fast timings and 2150MHz.

It could also be because I'm undervolting. I think the voltage you give it affects VRAM too, right? (like an offset)


----------



## J7SC

alceryes said:


> Mine is unstable with Time Spy at fast timings AND 2150MHz.
> But even with other benchmarks that I am able to pass I get better scores with my VRAM at fast timings and only 2100MHz than fast timings and 2150MHz.
> 
> It could also be cause I'm undervolting. I think the voltage you give it affects VRAM too, right? (like on an offset)


...normally, I set VRAM to 2150 w/ fast timings and HWInfo reports 2138 - 2150 effective VRAM speed after a bench.


----------



## PG705

jonRock1992 said:


> Try a different os. I use Ubuntu Budgie.


I will try soon, thanks.


----------



## PG705

alceryes said:


> Mine is unstable with Time Spy at fast timings AND 2150MHz.
> But even with other benchmarks that I am able to pass I get better scores with my VRAM at fast timings and only 2100MHz than fast timings and 2150MHz.
> 
> It could also be cause I'm undervolting. I think the voltage you give it affects VRAM too, right? (like on an offset)


I run my memory at 1.4 V, which enables it to run 2150 + FT stable, at least on my card.


----------



## alceryes

PG705 said:


> I run my memory at 1.4v, which enables to run 2150 + FT stable, at least for my card.


You're adjusting the memory voltage with MPT, right?


----------



## PG705

alceryes said:


> You're adjusting the memory voltage with MPT, right?


Yes I am. Also changed VDDCI DPM 3 from 850 mV to 900 mV. Basically the same as with the LC bios.


----------



## PG705

One thing I noticed with my Strix LC card is the high hotspot temp. As it's an XTXH card, the temp limit is 95 degrees before it begins to throttle. However, HWiNFO64 shows that the max hotspot temp measured is often a little above 95 degrees, like 98. In MPT you can disable this hotspot temp limit. Do you think that's safe? The standard XTX limit is 110 degrees. The max I've seen it hit was 102 degrees with this limit disabled.


----------



## GTANY

PG705 said:


> Yes I am. Also changed VDDCI DPM 3 from 850 mV to 900 mV. Basically the same as with the LC bios.


Thank you: with your settings and RAM at 2150 MHz, I don't see any performance downgrade anymore compared to 2100 MHz.

If you disable the hotspot temperature limit, you must be careful: avoid benchmarking and monitor this temperature, with HWiNFO for example. In games, my watercooled PowerColor 6900 XT XTXH hotspot does not exceed 80°C. With your Strix LC, it should not exceed 100°C in games, which is OK, even if it is on the hot side.


----------



## PG705

GTANY said:


> Thank you : with your settings and RAM at 2150 Mhz, I don't see any performance downgrade anymore compared to 2100 Mhz.
> 
> If you disable the hotspot temperature limit, you must be careful : avoid benchmarking and monitor this temperature with HWinfo for example. In games, my watercooled Powercolor 6900 XT XTXH hotspot does not exceed 80°C. With your Strix LC, it should not exceed 100°C in games which is OK, even if it is on the hot side.


Thanks. My Strix really has a large delta between edge and hotspot temp, and it could definitely use a repaste. However, I don't want to do that as it voids my warranty. I will carefully watch my temps during normal gaming and check if nothing gets too hot.


----------



## PG705

The 6950 XT seems to have a different Navi 21 chip: not the XTXH, but the KXTX. Not sure how that chip will overclock compared to the XTXH. Also, the memory chips run at 1.35 V, unlike the 6900 XT LC's, which run at 1.4 V. This means they are probably different memory chips from those found on the LC or the normal XTXH.

Interesting anyway.

Link: AMD Radeon RX 6950XT features Navi 21 KXTX GPU, supports Hynix and Samsung 18Gbps memory - VideoCardz.com


----------



## alceryes

PG705 said:


> The 6950 XT seems to have a different Navi 21 chip, not the XTXH, but the KXTX. Not sure how that chip will overclock compared to the XTXH. Also the memory chips run at 1.35v unlike the 6900 XT LC which runs at 1.4v. This means they are probably different memory chips than found on the LC or the normal XTXH.
> 
> Interesting anyway.
> 
> Link: AMD Radeon RX 6950XT features Navi 21 KXTX GPU, supports Hynix and Samsung 18Gbps memory - VideoCardz.com


This probably means that a cross BIOS flash to other 6900 XT cards is impossible.


----------



## J7SC

The 6950XT has the 18 Gbps VRAM (higher nominal rating) and a higher base PL at ~335 W. That said, the actual 'raw' chip can be updated with microcode by the factory (not by us, though) after binning, whether it becomes a 6900XTX, XTXH, or 6950 KXTX.
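For context on what those VRAM clocks mean in bandwidth terms, here's a back-of-envelope helper (standard GDDR6 arithmetic, not something from the thread: effective per-pin rate is memory clock x 8, and total bandwidth is that rate times the 256-bit bus width divided by 8):

```shell
# bandwidth_gbs <mem_clock_mhz> [bus_width_bits] - prints GB/s to one decimal.
bandwidth_gbs() {
  awk -v clk="$1" -v bus="${2:-256}" 'BEGIN { printf "%.1f\n", clk * 8 / 1000 * bus / 8 }'
}

bandwidth_gbs 2000   # stock 6900 XT, 16 Gbps -> 512.0
bandwidth_gbs 2150   # the MPT fast-timing wall, 17.2 Gbps -> 550.4
bandwidth_gbs 2250   # 6950 XT's rated 18 Gbps -> 576.0
```

So the 2150 limit leaves roughly 25 GB/s on the table versus the 6950 XT's rated memory speed.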


----------



## PG705

J7SC said:


> The 6950X has the 18 Gbps VRAM (higher nominal rating) and a higher base PL at ~ 335 W. That said, the actual 'raw' chip can be updated with micro-code by the factory (not by us, though) after binning, whether it becomes a 6900XTX, XTXH, or 6950 KXTX.


What do you think the performance of the KXTX will be like compared to the XTXH?


----------



## jonRock1992

PG705 said:


> The 6950 XT seems to have a different Navi 21 chip, not the XTXH, but the KXTX. Not sure how that chip will overclock compared to the XTXH. Also the memory chips run at 1.35v unlike the 6900 XT LC which runs at 1.4v. This means they are probably different memory chips than found on the LC or the normal XTXH.
> 
> Interesting anyway.
> 
> Link: AMD Radeon RX 6950XT features Navi 21 KXTX GPU, supports Hynix and Samsung 18Gbps memory - VideoCardz.com


Damn. There goes my hopes of flashing the 6950 XT vBIOS.


----------



## J7SC

PG705 said:


> What do you think the performance of the KXTX will be like compared to the XTXH?


...well, that depends - there are of course leaks such as the one depicted below (usual grain of salt, please), and FYI, those used the 5800X3D for the CPU, but the 6950XT will be faster if only because of the higher PL and better VRAM speeds. Still, who knows if there are other improvements (i.e. in ray tracing) or whether this is just a hopped-up, 'less restricted' 6900XT w/ new microcode. MPT can get your 6900XT close, and presumably MPT will also be applicable/adapted to the 6950XT.

source


----------



## CS9K

jonRock1992 said:


> Damn. There goes my hopes of flashing the 6950 XT vBIOS.


Having hopes of BIOS cross-flashing was a vain hope to begin with, unfortunately. The fact that _some_ LC cards can cross-flash on RDNA2 is a minor miracle in and of itself, given that the BIOS and its safeguards have not been bypassed yet.

Until someone cracks the bios, SMU, and driver cross-checks, we get what we get. Not a bad thing, given that we can raise the power limit with MPT, though I do empathize with the vocal minority that want to raise voltage for bench runs, etc.


----------



## J7SC

CS9K said:


> Having hopes of bios cross-flashing was a vain hope to begin with, unfortunately. The fact that _some_ LC cards can cross-flash with RDNA2, is a minor miracle in and of itself, given that the bios and its safeguards have not been bypassed yet.
> 
> Until someone cracks the bios, SMU, and driver cross-checks, we get what we get. Not a bad thing, given that we can raise the power limit with MPT, though I do empathize with the vocal minority that want to raise voltage for bench runs, etc.


...there's also the earlier vbios leak comparison (per @PG705 's post above) - seems to me that the Samsung and Hynix VRAM SKU numbers are identical, just as I suspected  ...all about microcode, IMO.


----------



## alceryes

PG705 said:


> What do you think the performance of the KXTX will be like compared to the XTXH?


Its leaked Port Royal score is lower than my lowly XTX's Port Royal score.
'Course, this is the base performance. Who knows how much performance all you Frankensteins here will be able to get out of it.


----------



## J7SC

alceryes said:


> Its leaked Port Royal score is lower than my lowly XTX's Port Royal score.
> Course this is the base performance. Who knows how much performance all you Frankenstein's here will be able to get out of it.


Yeah, the Port Royal and Time Spy graphics scores of the 6950XT leak (if accurate and representative) are telling - similar to a 6900XTXH w/ 1.2 V, PL up an additional 10% over factory stock, and the VRAM chastity belt off. Of course there could be all kinds of 'super-turbo extreme' versions adding to all that with the 6950XT as well, along with MPT. Still, apart from the VRAM, nothing that couldn't already be done with a regular 6900XT + MPT, IMO, but we'll find out for sure next week.


----------



## CS9K

J7SC said:


> Yeah, the Port Royal and Time Spy Graphics scores of the 6950XT leak (if accurate and representative) are telling - similar to a 6900XTXH w/ 1.2v, PL up w/ an additional 10% factory stock, and VRAM chastity belt off . Of course there could be all kinds of 'super-turbo extreme' versions adding to all that with 6950XT as well, along with MPT. Still, apart from the VRAM, nothing that couldn't really be already done with a regular 6900XT + MPT, IMO, but we'll find out for sure next week.


This. And a lot of stock-setting benchmarks will look like this to us, but to the masses, it will look like AMD actually showed up this time.

It's a bad joke that AMD was SO very conservative with their power limits at RDNA 2's initial release, given how hard Nvidia pushed all of their Ampere GPUs out of the box.

RDNA 2 GPUs have generally had the same overclocking headroom as past-generation Nvidia and AMD GPUs, whereas Ampere relies almost completely on undervolting because of the spicy stock firmware that Nvidia shipped it with.

I digress, AMD is kicking ass with both hardware and drivers for RDNA 2, and I look forward to RDNA 3 with a LOT of "cautious optimism" 🧡


----------



## Henrik9979

Hey, maybe somebody has some good advice to help me push my water-cooled MSI RX 6900 XT Trio X just a bit further?

My highest score in timespy is 22.788

I am limited with a Ryzen 3900x and PCIE 3.0 btw


----------



## kairi_zeroblade

Henrik9979 said:


> I am limited with a Ryzen 3900x and PCIE 3.0 btw


You're not (and won't be) limited by the 3900X. What motherboard are you using, BTW? You probably do need to upgrade your motherboard to something along the lines of a B550 or X570 to take full advantage of your GPU; not sure on the mileage for OC (as that is silicon-quality dependent).


----------



## Henrik9979

kairi_zeroblade said:


> you're not (won't be) limited by the 3900X, what motherboard are you using BTW?? YES (probably) you need to upgrade your motherboard between the lines of a B550 or X570 to take advantage of your GPU, not sure on the mileage for OC (as this one's silicon quality dependent)


I have an MSI B450M Mortar Max.
I want to know if I can adjust FCLK and some other parameters further.


----------



## Counterassy14

Henrik9979 said:


> Hey, maybe somebody has some good advice to help me push my water-cooled
> MSI RX 6900 XT Trio X just a bit further?
> 
> My highest score in timespy is 22.788
> 
> I am limited with a Ryzen 3900x and PCIE 3.0 btw


Just out of curiosity, what are your temps (including hotspot) and what is your loop looking like?


----------



## Henrik9979

Counterassy14 said:


> Just out of curiosity, what are your temps (including hotspot) and what is your loop looking like?


----------



## alceryes

Henrik9979 said:


> I am limited with a Ryzen 3900x and PCIE 3.0 btw


It's possible that you are slightly limited by PCIe 3.0 but, as long as the slot is running at x16 speed, it would only be a few FPS. I'm still on an old Z370 (PCIe 3.0 x16), getting the benchmark scores I'm getting. The 3900x will limit your FPS at 1080p and a bit at 1440p. Once you get to 2160p though, you are limited by the GPU, not the CPU.


----------



## Henrik9979

alceryes said:


> It's possible that you are slightly limited by PCIe 3.0 but, as long as the slot is running at x16 speed, it would only be a few FPS. I'm still on an old Z370 (PCIe 3.0 x16), getting the benchmark scores I'm getting. The 3900x will limit your FPS at 1080p and a bit at 1440p. Once you get to 2160p though, you are limited by the GPU, not the CPU.


What is your score in timespy extreme


----------



## alceryes

Henrik9979 said:


> What is your score in timespy extreme


Time Spy Extreme graphics score is 10552.
I run on a 1440p monitor, so there is scaling going on. UL Benchmarks _says_ that the scaling has a negligible impact on scores, but it's just something to keep in mind.


----------



## alceryes

Henrik9979 said:


> View attachment 2559914


The cat alone is gonna give you a 5% uplift in scores.


Also, just an FYI: I know it probably can't be helped due to case constraints, but your rear and top fans are definitely creating vortices over your CPU area. They are basically fighting for the same air. This is especially true with the side panel on, less so if you keep it off.


----------



## Henrik9979

alceryes said:


> The cat alone is gonna give you a 5% uplift in scores.
> 
> 
> Also, just an FYI, I know it probably can't be helped due to case constraints, but your rear and top fans are definitely creating vorticies over your CPU area. They are basically fighting for the same air. This is especially true with the side panel on, less so if you keep it off.


The fan configuration isn't ideal, I know. Both the rear fan and the two front fans blow air into the case; only the top fan blows hot air out of the case. When I do extreme overclocking, I don't have any panels on.
For everyday use with mild overclocking, the temps are fine.
I can't turn the rear fan around because it has a built-in pump. The second pump sits on the CPU. Yes, it has two pumps, and they help each other - I have tested it.
My water cooling solution was basically free; I just got some faulty AIOs, took them apart, cleaned them, and used them for my loop.
The only part I bought was the water block for my 6900 XT.


----------



## J7SC

alceryes said:


> Its leaked Port Royal score is lower than my lowly XTX's Port Royal score.
> Course this is the base performance. Who knows how much performance all you Frankenstein's here will be able to get out of it.


...yeah, my lowly XTX scores ~1100 higher in Port Royal, and that's with a 3950X instead of a 5800X3D (makes a difference). The 'factory' PL increase of the 6950XT over the 6900XT is responsible for most of the 'improvements' in scores (along with VRAM, see below)...


Spoiler


CS9K said:


> This. And a lot of stock-setting benchmarks will look like this to us, but to the masses, it will look like AMD actually showed up this time.
> 
> It's a bad joke that AMD was SO very conservative with their power limits with RDNA 2's initial release, given how hard Nvidia pushed all of their Ampere GPU's out of the box.
> 
> RDNA 2 GPU's have generally had the same overclocking headroom as past-generation Nvidia and AMD GPU's, where Ampere relies almost completely on undervolting because of the spicy stock firmware that Nvidia shipped them with.
> 
> I digress, AMD is kicking ass with both hardware and drivers for RDNA 2, and I look forward to RDNA 3 with a LOT of "cautious optimism" 🧡


...as already discussed before, this RDNA2 'refresh' is mostly an exercise to clear out some $tock / TSMC capacity before the real next big thing: RDNA3. I also believe that AMD will come calling with mGPU tiles on single PCBs (likely for enterprise-level, possibly for top consumer), because of this:


Spoiler: Hopper

@WCCFTECH
There's also the question of artificial limits via micro-code...I really hate that. Sure, a vendor wants to protect against abuse via extra voltage and inadequate cooling etc, but the much-maligned VRAM limit distinguishing the 6x50XT from the 6x00XT WITH the same VRAM SKUs says it all - an extra money grab.

Now, this vid below is a bit of a jump, as it refers to microcode shenanigans on CPUs, but it is also applicable to GPUs. In the 'good old days', i.e. the GTX 600 series, I saved the vBIOS into a folder, renamed it .txt, got Notepad out, and made my own mods in about 5 minutes.

We - the consumers - are losing more and more control of our own machines in our own homes via corporations 'infiltrating', certainly on the software side, but even on the hardware side...$s, and then big data for more $s...but I digress...


----------



## alceryes

Henrik9979 said:


> The fan configuration isn't ideal I know. Both the rear fan and 2 front fans blows hot air into the case only the top fan blows hot air out of the case. When I do extreme overclocking I don't have any panels on.
> For every day use with mild overclocking the temps are fine.
> I can't turn the rear fan around because it have a build in pump. The second pump sits on the CPU. Yes it has 2 pumps and they help each other, I have tested it.
> My water cooling solution is basically free, I just got some faulty aio's, toke them apart, cleaned them and used them for my loop.
> The only part I bought was the water block for my 6900 xt.


Cool. I totally didn't catch that the rear fan is also intake. That's not that bad then.

What's your Time Spy Extreme score? Other scores of mine are in my sig.


----------



## Henrik9979

alceryes said:


> Cool. I totally didn't catch that the rear fan is also intake. That's not that bad then.
> 
> What's your Time Spy Extreme score? Other scores of mine are in my sig.


You can watch my YouTube video a little further up the thread. After I made the video I started playing with fclk and fclkboostfrequency.
Now my Time Spy Extreme score is 11,263.


----------



## alceryes

Henrik9979 said:


> You can watch my YouTube video a little further up the thread. After I made the video I started playing with fclk and fclkboostfrequency.
> Now my Time Spy Extreme score is 11,263.


Nice.
I may get around to playing with MPT later this summer, but I would need to put on a good aftermarket cooler to really make it shine. Currently just running a slight OC and undervolt.


----------



## PG705

Guys, do you think I should repaste my card? It's the Strix LC TOP version and especially the hotspot temps are quite high when overclocked. When running ~400W+ the junction hits 95 degrees and higher. Highest I've seen it was 104 with the hotspot limit removed in MPT (450W PL).

I have repasted a GPU once before, but that didn't have all sorts of thermal pads on the GPU that could easily tear like the Strix has, plus it was a used card that was relatively cheap and didn't have warranty anymore. My Strix still has warranty till late 2024.

What would you do?


----------



## CS9K

J7SC said:


> There's also the question of artificial limits via micro-code...I really hate that. Sure, a vendor wants to protect against abuse via extra voltage and inadequate cooling etc, but the much-maligned VRAM limit distinguishing the 6x50XT from the 6x00XT WITH the same VRAM SKUs says it all - an extra money grab.
> 
> Now, this vid below is a bit of a jump as it refers to micro-code shenanigans on CPUs, but it is also applicable to GPUs. In the 'good old days', ie. GTX600 series, I saved the vbios into a folder, renamed it .txt, got Notepad out and made my own mods in about 5 min.
> 
> We - the consumers - are losing more and more control of our own machines in our own homes via corporations 'infiltrating', certainly on the software side, but even on the hardware side ...$s, and then big data for more $s...but I digress...


You've got a point, but only up to a point.

Manufacturers have to protect themselves from Joe Dipshit Gamer who would happily blow up their GPU and expect an RMA. Sure there's probably 50 of us for every one Dipshit, but that still adds up quick. Nvidia and AMD took away easy voltage control because of that, and I'm glad they did because it helped out with churn. Sadly, mining came along and **** all over everything _again_, so that didn't help with costs having to deal with _those_ RMA's.

It sucks not having control of memory speeds past 2150MHz on RX 6900 XT XTX models, but in the grand scheme of things, it doesn't matter to gaming performance. And if one wants that control only for benchmarks, there _are_ GPU's with memory control past the stock OC limits. Gotta pay to play.


----------



## rodac

PG705 said:


> Guys, do you think I should repaste my card? It's the Strix LC TOP version and especially the hotspot temps are quite high when overclocked. When running ~400W+ the junction hits 95 degrees and higher. Highest I've seen it was 104 with the hotspot limit removed in MPT (450W PL).
> 
> I have repasted a GPU once before, but that didn't have all sorts of thermal pads on the GPU that could easily tear like the Strix has, plus it was a used card that was relatively cheap and didn't have warranty anymore. My Strix still has warranty till late 2024.
> 
> What would you do?


If you know what you are doing (have the competency) and are willing to take the risk and void your warranty, then yes, repaste. It worked for me.


----------



## Henrik9979

PG705 said:


> Guys, do you think I should repaste my card? It's the Strix LC TOP version and especially the hotspot temps are quite high when overclocked. When running ~400W+ the junction hits 95 degrees and higher. Highest I've seen it was 104 with the hotspot limit removed in MPT (450W PL).
> 
> I have repasted a GPU once before, but that didn't have all sorts of thermal pads on the GPU that could easily tear like the Strix has, plus it was a used card that was relatively cheap and didn't have warranty anymore. My Strix still has warranty till late 2024.
> 
> What would you do?


Buy a razor blade, carefully work it under the warranty sticker, and stick the sticker next to the screw. I have done it multiple times. If you ever need to return the card, put the sticker back.


----------



## J7SC

CS9K said:


> You've got a point, but only up to a point.
> 
> Manufacturers have to protect themselves from Joe Dipshit Gamer who would happily blow up their GPU and expect an RMA. Sure there's probably 50 of us for every one Dipshit, but that still adds up quick. Nvidia and AMD took away easy voltage control because of that, and I'm glad they did because it helped out with churn. Sadly, mining came along and **** all over everything _again_, so that didn't help with costs having to deal with _those_ RMA's.
> 
> It sucks not having control of memory speeds past 2150MHz on RX 6900 XT XTX models, but in the grand scheme of things, it doesn't matter to gaming performance. And if one wants that control only for benchmarks, there _are_ GPU's with memory control past the stock OC limits. Gotta pay to play.


As I had already indicated, I realize that vendors have to protect themselves re. RMAs by 'Joe D G', but that doesn't really apply in this instance, as we're talking about the same vendors and same VRAM SKUs now miraculously going beyond 2150. IMO, it is artificial market segmentation, which is a waste in macroeconomic terms.

As to 'gotta pay to play,' I think I'm no stranger to that - though I opted for a multi-track approach for the best of both worlds. Nor am I dissatisfied with my 6900XT's Superposition 4K score at almost 19K now, and TS Graphics at well over 24K. I just like the ability to tune _my own_ hardware I paid for; after all, that's what I get to do with the 3090 and all the other GPUs around here...


----------



## CS9K

J7SC said:


> As I had already indicated, I realize that vendors have to protect themselves re. RMAs by 'Joe D G', but that doesn't really apply in this instance, as we're talking about the same vendors and same VRAM SKUs now miraculously going beyond 2150. IMO, it is artificial market segmentation, which is a waste in macroeconomic terms.
> 
> As to 'gotta pay to play,' I think I'm no stranger to that - though I opted for a multi-track approach for the best of both worlds. Nor am I dissatisfied with my 6900XT's Superposition 4K score at almost 19K now, and TS Graphics at well over 24K. I just like the ability to tune _my own_ hardware I paid for; after all, that's what I get to do with the 3090 and all the other GPUs around here...


I feel you. Things are what they are for now, I suppose.


----------



## Henrik9979

I don't know if it has already been discussed, but if someone wants to try to get more speed out of the memory, you can enable "fast timing 2", which is an even tighter timing set.

But you have to lower the memory clock to 1900 MHz.

I can get around the same score with 2150 MHz fast timing as with 1900 MHz fast timing 2.

So if someone wants to test which parameters help stability, here you go.

The highest memclock I could get stable was 1930 MHz with 1500 mV on MVDD.
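For anyone who wants to sanity-check the tradeoff, here's a quick back-of-envelope script. Assumptions on my part: the usual GDDR6 clock-to-data-rate multiplier of 8 (2000 MHz reported = 16 Gbps effective) and the 6900 XT's 256-bit bus.

```python
# Rough GDDR6 bandwidth comparison for the two settings above.
# Assumption: reported memory clock (MHz) x 8 = effective per-pin
# data rate (e.g. 2000 MHz -> 16 Gbps); 256-bit bus on the 6900 XT.

BUS_WIDTH_BITS = 256

def bandwidth_gbps(mem_clock_mhz: float) -> float:
    """Effective per-pin data rate in Gbps."""
    return mem_clock_mhz * 8 / 1000

def bandwidth_gbs(mem_clock_mhz: float) -> float:
    """Total bandwidth in GB/s across the whole bus."""
    return bandwidth_gbps(mem_clock_mhz) * BUS_WIDTH_BITS / 8

for clock, label in [(2150, "fast timing"), (1900, "fast timing 2")]:
    print(f"{clock} MHz ({label}): {bandwidth_gbs(clock):.1f} GB/s")
# 2150 MHz -> 550.4 GB/s, 1900 MHz -> 486.4 GB/s: roughly 12% less
# raw bandwidth, which the tighter timings evidently claw back.
```

So scoring the same at 1900 fast timing 2 as at 2150 fast timing means the tighter timings are making up a ~64 GB/s raw bandwidth deficit.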


----------



## J7SC

CS9K said:


> I feel you. Things are what they are for now, I suppose.


...true, unfortunately, but not the end of the world. Every GPU I have goes through a series of tests w/ various tools re. best GPU and VRAM speed, meaning 'most efficient' / highest fps, rather than highest clock. I raised VRAM speed in 20 MHz steps all the way to 2150 with 'normal timings' and average fps of the test suite was still increasing. Then I repeated it w/ 'fast timings'...2150 w/ fast timings improved fps further and had the best overall results with the Samsung VRAM and the extra cooling. Other older tools showed 2260 MHz f/t for VRAM to be the ideal setting, albeit with GPU in safe mode, given the artificial limits past 2150. Anyway, that is that.

I did load my age-old fav - NFS: Most Wanted - onto the 6900XT system tonight, rather than the 3090. Even with all the eye candy on at 4096 x 2160 on the C1, and the latest graphics and content updates for the game, it barely raised GPU temps. While it doesn't have the visual wow factor of the latest crop of games, NFS Most Wanted is still an absolute hoot re. engaging action.


----------



## PG705

rodac said:


> If you know what you are doing (have the competency) and are willing to take the risk and void your warranty, then yes, repaste. It worked for me.


It's very tempting to be able to push 450W through it without overheating, giving me stable clockspeeds and slightly quieter fans. However, I'm not sure I have the competency. Previous card I repasted was a 2080 Ti Seahawk X, which was really easy to repaste. Not sure on this AiO Strix card.


----------



## PG705

Henrik9979 said:


> I don't know if it has already been discussed, but if someone wants to try to get more speed out of the memory, you can enable "fast timing 2", which is an even tighter timing set.
> 
> But you have to lower the memory clock to 1900 MHz.
> 
> I can get around the same score with 2150 MHz fast timing as with 1900 MHz fast timing 2.
> 
> So if someone wants to test which parameters help stability, here you go.
> 
> The highest memclock I could get stable was 1930 MHz with 1500 mV on MVDD.


Isn't it better to have slightly more bandwidth?


----------



## Godhand007

PG705 said:


> One thing I noticed with my Strix LC card is the high hotspot temp. As it's an XTXH card, the temp limit is 95 degrees before it begins to throttle. However, HWInfo64 shows that the max hotspot temp measured is often a little above 95 degrees, like 98. In MPT you can disable this hotspot temp limit. Do you think that's safe? The standard XTX limit is 110 degrees. The max I've seen it hit was 102 degrees with this limit disabled.


Can you share an MPT screenshot of the _throttling limit_ setting?


----------



## GTANY

PG705 said:


> Guys, do you think I should repaste my card? It's the Strix LC TOP version and especially the hotspot temps are quite high when overclocked. When running ~400W+ the junction hits 95 degrees and higher. Highest I've seen it was 104 with the hotspot limit removed in MPT (450W PL).
> 
> I have repasted a GPU once before, but that didn't have all sorts of thermal pads on the GPU that could easily tear like the Strix has, plus it was a used card that was relatively cheap and didn't have warranty anymore. My Strix still has warranty till late 2024.
> 
> What would you do?


For a power limit of 450 W (Time Spy), my hotspot temperature is around 90-95°C: the higher the power limit, the bigger the difference between GPU and hotspot temperatures.

95°C hotspot at 400 W seems normal for the Strix LC: the difference between GPU and hotspot temperature should be around 35-40°C. Don't forget: the Strix LC is an entry-level watercooled card with only a 240 mm radiator.

In games you never reach 450 W; the worst case is 400 W. Generally speaking, 370 W at 1.2 V GPU. Consequently, you don't need to repaste the card if you only game.


----------



## llDevilDriverll

Hi,
I bought a laptop with a 6800M, and the 6800M runs 2000 MHz on memory. Does anyone know how to increase the frequency to 2100 MHz using MPT? There's no way to increase it in the drivers.


----------



## RichieRich25

Trying to increase my benchmark numbers, but I think I've hit my max. Not sure if there is anything I can work on besides increasing the power limit.
Some stats:
5950X, 6900 XT Red Devil, 32 GB of RAM at CL15 3800, 1000 W EVGA G6 PSU, ASUS Crosshair VIII Hero X570.


----------



## Godhand007

Hey Guys,

*Need urgent input regarding a PSU issue I am facing.* I just bought an Antec SP 1300 PSU and I am getting shutdowns while trying MPT with these settings (refer to the screenshots below). This happens during TS GT2 and also while using the OCCT power and OCCT 3D tests. It obviously seems like a defective power supply, but I am not sure. I had the same issue with my RM1000i, which I sent for RMA.

The odd thing is that in both cases the PSU worked fine for a few runs, and then it started shutting down. If I reduce the power consumption of the GPU, everything works as expected. But I suspect the issue might be the card, i.e. is my card damaging multiple power supplies at higher PL/voltage (TempDepVmin)? I also have a Corsair CX750F, which works fine with normal PL limits, which is expected given its lower wattage.

I have the Antec SP with me for a few more days. What should I try to confirm the culprit here?


----------



## Henrik9979

Godhand007 said:


> Hey Guys,
> 
> *Need urgent input regarding a PSU issue I am facing.* I just bought an Antec SP 1300 PSU and I am getting shutdowns while trying MPT with these settings (refer to the screenshots below). This happens during TS GT2 and also while using the OCCT power and OCCT 3D tests. It obviously seems like a defective power supply, but I am not sure. I had the same issue with my RM1000i, which I sent for RMA.
> 
> The odd thing is that in both cases the PSU worked fine for a few runs, and then it started shutting down. If I reduce the power consumption of the GPU, everything works as expected. But I suspect the issue might be the card, i.e. is my card damaging multiple power supplies at higher PL/voltage (TempDepVmin)? I also have a Corsair CX750F, which works fine with normal PL limits, which is expected given its lower wattage.
> 
> I have the Antec SP with me for a few more days. What should I try to confirm the culprit here?
> 
> View attachment 2560038


Are you using a single power cable from the PSU to the GPU and splitting it across the GPU's power connectors?
If so, you should try using a separate cable for each power connector.
One cable is only rated for 150 W.


----------



## Godhand007

Henrik9979 said:


> Are you using a single power cable from the PSU to the GPU and splitting it across the GPU's power connectors?
> If so, you should try using a separate cable for each power connector.
> One cable is only rated for 150 W.


No, Individual PCI 8/6 pin for each connector.


----------



## Henrik9979

Godhand007 said:


> No, Individual PCI 8/6 pin for each connector.


I don't know if it could be the issue, but do you know if you are running multi-rail or single-rail on your PSU?
My Corsair HX1200 has a switch on the back to choose between single and multi rail.

Single rail allows all 1200 W to be drawn through a single connection. Multi rail splits the 1200 W between about 5 rails and limits each rail to 240 W.


----------



## J7SC

Godhand007 said:


> No, Individual PCI 8/6 pin for each connector.


Antec (and other) PSUs are sensitive to which 12 V PCIe plug at the back of the PSU is used. Try posting a power-hungry run with HWInfo open, showing both the regular and the GPU 12 V rails in 'current, min, max' HWInfo mode. Also, is your GPU a 2x or 3x 8-pin? Sensitive PSUs are programmed not to allow much beyond spec, i.e. ~150 W (with a bit of play room) per 8-pin PCIe connection.


----------



## PG705

Godhand007 said:


> Can you share the MPT screenshot of the _throttling limit _setting?


My PC is currently apart, waiting for a CPU upgrade. So I can't share a screenshot.

However, the throttling limit setting is under the menu 'more' which is the one on the far right. There is a button called 'throttler' if I recall correctly and then you will see hotspot temperature (something like that). Disable that and your clocks won't decrease when you are at the hotspot temp limit.


----------



## PG705

GTANY said:


> For a power limit of 450 W (Time Spy), my hotspot temperature is around 90-95°C: the higher the power limit, the bigger the difference between GPU and hotspot temperatures.
> 
> 95°C hotspot at 400 W seems normal for the Strix LC: the difference between GPU and hotspot temperature should be around 35-40°C. Don't forget: the Strix LC is an entry-level watercooled card with only a 240 mm radiator.
> 
> In games you never reach 450 W; the worst case is 400 W. Generally speaking, 370 W at 1.2 V GPU. Consequently, you don't need to repaste the card if you only game.


Okay, thanks for your insight. I will leave it the way it is then.


----------



## RichieRich25

I'm sure it's been covered, but it's hard to dig through 348 pages. In a nutshell, I've been using Kryonaut on the GPU. Amazing results, but after about a month I notice that the temps just creep back up. Hotspot temps are 35+°C above the GPU temp, when a month and a half ago it was 20+. Is it just Kryonaut, or am I doing something wrong? Is there another paste I should use?


----------



## J7SC

RichieRich25 said:


> I'm sure it's been covered, but it's hard to dig through 348 pages. In a nutshell, I've been using Kryonaut on the GPU. Amazing results, but after about a month I notice that the temps just creep back up. Hotspot temps are 35+°C above the GPU temp, when a month and a half ago it was 20+. Is it just Kryonaut, or am I doing something wrong? Is there another paste I should use?


...yes, multiple discussions on this in this thread. I used Kryonaut myself before but for the 6900XT, the thicker Gelid GC Extreme is working better (for me, anyways)...no change in temps over the past 9 months or so since I applied it, along with thermal putty for the VRAM.


----------



## RichieRich25

J7SC said:


> ...yes, multiple discussions on this in this thread. I used Kryonaut myself before but for the 6900XT, the thicker Gelid GC Extreme is working better (for me, anyways)...no change in temps over the past 9 months or so since I applied it, along with thermal putty for the VRAM.


I figured it was the Kryonaut. The last time I replaced it on the 6900 XT, I noticed the paste was so hard and crusty that I was able to remove it in one swipe. I thought maybe I got a bad batch, but now my hotspot is high again. Going to try the Gelid GC Extreme; I've heard good things.


----------



## Scorpion667

Scorpion667 said:


> I have TFX thermal paste arriving shortly and wish to do some A/B testing against my currently applied GC Extreme on 6900XT. Should I just use TimeSpy for comparison?


Sorry for the delay, folks. Turns out I lost my initial Gelid numbers during a recent reimage (2021 LTSC!), so I only had bench/temp data from 1 month after the initial application. I waited 1 month after applying TFX for an even comparison, and here are the numbers in a temp-controlled environment comparing the two head to head.

Notes:
- 6900XT Toxic EE (360 mm AIO)
- 2650 min / 2750 max core, 2150 vMEM, 385 W PL, 1080 mV
- This card has high HS temps, but after 20+ repastes I can confidently say this is a constant, not a variable, with my sample
- 100% fan
- TIM application was spread + a small line on top (pic)
- Ambient 24°C, temp-controlled environment (actual measured was 23.8°C/23.9°C per the temp probe at the rad fan intake)
- Unfortunately, for the Gelid testing I ran the 22.1.2 driver, whereas for TFX I ran 21.10.2, which explains the PR score variance. Didn't realize till after. I have screenshots of TFX with the same driver taken right after TIM application, and temps are identical between the two drivers. I can provide them upon request.

The scores are actual hyperlinks to screenshots of each (incl. temps and GPU-Z)

Google Sheets Link


| Benchmark | Gelid Score | Gelid Edge Temp | Gelid HS Temp | TFX Score | TFX Edge Temp | TFX HS Temp |
|---|---|---|---|---|---|---|
| Superposition 4K 1 | 18559 | 51 | 88 | 18602 | 51 | 88 |
| Superposition 4K 2 | 18555 | 51 | 90 | 18579 | 51 | 88 |
| Superposition 4K 3 | 18493 | 51 | 88 | 18588 | 51 | 87 |
| TimeSpy 1 | 25153 | 51 | 91 | 25279 | 51 | 87 |
| TimeSpy 2 | 25144 | 51 | 90 | 25264 | 51 | 87 |
| TimeSpy 3 | 25162 | 51 | 90 | 25291 | 51 | 88 |
| Port Royal 1 | 12742 | 52 | 90 | 11828 | 50 | 87 |
| Port Royal 2 | 12725 | 52 | 90 | 11797 | 52 | 88 |
| Port Royal 3 | 12745 | 51 | 90 | 11802 | 51 | 87 |


TL;DR - I think TFX is slightly better for my use case. It's thicker, performs slightly better (1-2°C better HS), and holds good temps longer. Gelid started off really good and slowly got worse by a few degrees (~2°C HS) over one month, whereas TFX remained exactly the same.
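If anyone wants to crunch my runs themselves, here's a small script with the scores and hotspot temps transcribed from the post (remember the Port Royal score delta is mostly the driver difference, not the paste):

```python
# Per-run (score, hotspot temp) pairs transcribed from the post above.
gelid = {
    "Superposition 4K": [(18559, 88), (18555, 90), (18493, 88)],
    "TimeSpy":          [(25153, 91), (25144, 90), (25162, 90)],
    "Port Royal":       [(12742, 90), (12725, 90), (12745, 90)],
}
tfx = {
    "Superposition 4K": [(18602, 88), (18579, 88), (18588, 87)],
    "TimeSpy":          [(25279, 87), (25264, 87), (25291, 88)],
    "Port Royal":       [(11828, 87), (11797, 88), (11802, 87)],
}

def avg_hotspot(runs):
    """Mean hotspot temp across a benchmark's three runs."""
    return sum(temp for _, temp in runs) / len(runs)

for bench in gelid:
    delta = avg_hotspot(gelid[bench]) - avg_hotspot(tfx[bench])
    print(f"{bench}: TFX hotspot {delta:.1f} C cooler on average")
```

Across the three benchmarks, TFX averages about 1-3°C cooler on the hotspot, which matches my eyeball read of the raw runs.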


----------



## Godhand007

Henrik9979 said:


> I don't know if it could be the issue, but do you know if you are running multi-rail or single-rail on your PSU?
> My Corsair HX1200 has a switch on the back to choose between single and multi rail.
> 
> Single rail allows all 1200 W to be drawn through a single connection. Multi rail splits the 1200 W between about 5 rails and limits each rail to 240 W.


I think the Antec SP 1300 is a single-rail PSU, but others can confirm.


----------



## Godhand007

J7SC said:


> Antec (and other) are sensitive to which 12v PCIe plug at the back of the PSU is used. Try posting a power-hungry run with HWInfo open re. regular as well as GPU 12v in 'current, min, max' HWInfo mode. Also, is your GPU a 2x or 3x 8 pin ? Sensitive PSUs are programmed not to allow much beyond spec >> 150W +- (w/ a bit of play room) per 8 pin PCIe connection.


_GPU 12v in 'current, min, max'_ - I didn't get this? Could you elaborate?
My GPU has two _8-pins_ and one _6-pin_. And like I said, this did work fine for a while (maybe days) on my RM1000i, and also for a few runs on the Antec, but it won't even work for a few minutes now.


----------



## J7SC

Godhand007 said:


> _GPU 12v in 'current, min, max'_ - I didn't get this? Could you elaborate?
> My GPU has two _8-pins_ and one _6-pin_. And like I said, this did work fine for a while (maybe days) on my RM1000i, and also for a few runs on the Antec, but it won't even work for a few minutes now.


HWInfo has 4 columns ('current', 'minimum', etc.) and it is worth checking the voltage swings with no load, light load, and full load. As to max watts per PCIe cable, an 8-pin cable has a max rating of ~150 W. While most PSUs can easily go above that, that is the 'official' spec - so 2x 8-pin PCIe means 300 W (plus the PCIe slot, up to ~75 W), while 3x 8-pin is 450 W (plus the slot). Again, those are the official specs, which doesn't mean a PSU cannot go beyond them, but manufacturers can implement OCP somewhere above that value. It also comes down to the gauge ('thickness') of the PCIe cables and exactly where you plug them in on the back of the PSU, which will often have multiple options. Finally, the GPU itself can trigger OCP.
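Put another way, here's a quick sketch of the nominal spec budget per connector layout (assuming the commonly cited limits: ~150 W per 8-pin, ~75 W per 6-pin, up to ~75 W from the slot; real PSUs often tolerate more before OCP trips, so treat these as floors, not hard caps):

```python
# Back-of-envelope spec power budget for a GPU's power inputs.
# Assumed nominal limits: 150 W per 8-pin, 75 W per 6-pin, ~75 W slot.
SPEC_W = {"8pin": 150, "6pin": 75, "slot": 75}

def spec_budget(n_8pin: int, n_6pin: int = 0, include_slot: bool = True) -> int:
    """Nominal spec power budget in watts for a given connector layout."""
    total = n_8pin * SPEC_W["8pin"] + n_6pin * SPEC_W["6pin"]
    if include_slot:
        total += SPEC_W["slot"]
    return total

# A plain 2x 8-pin card vs a 2x 8-pin + 1x 6-pin card (like the Toxic):
print(spec_budget(2))     # 375
print(spec_budget(2, 1))  # 450
```

So a 500 W+ MPT power limit on a 2x 8-pin + 6-pin card is already well past the nominal budget, and it's down to how generous the PSU's OCP happens to be.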


----------



## Godhand007

J7SC said:


> HWInfo has 4 columns ('current', 'minimum' etc) and it is worth checking the voltage swings with no load, light load, and full load.


This is what I see with OCCT running in power mode and the GPU PL at 367 W in Wattman. As soon as I increase the PL value, the PC shuts down.

> As to max watts per PCIe cable, an 8 pin cable has a max rating of ~ 150w. While most PSUs can easily go above that, that is the 'official' spec - so a 2x 8 PCIe means 300W (plus PCIe, up to ~ 75W) while a 3x 8 pin is at 450W (plus PCIe). Again, those are the official specs which doesn't mean a PSU cannot go beyond it, but manufacturers can implement OCP somewhere above that value.


Agreed, modern PSUs are able to go over 150 W per cable easily. I can vouch for that, with my old 6900XT pulling around the same without issues on just two 8-pins. The Toxic has two 8-pins and one 6-pin.



> It also comes down to the gauge ('thickness') of the PCIe cables and exactly where you plug them in on the back of the PSUs which will often have multiple options.


The ANTEC SP 1300 and RM1000i both have good reviews in terms of cable quality and they look premium to me as well. I have tried changing slots on PSUs for PCI-E cables but have had no success there as well. 



> Finally, the GPU itself can trigger OCP.


This should have happened from day one, but it did not. I remember running the GPU close to 550 W without issues on the RM1000i. The same was true for the Antec until just 1-2 days ago; I was pulling 1000 W without issue, and this is a brand-new power supply. Now it won't go near 900 W. It also crashes with just 3D testing in OCCT at a high PL of around 500-plus watts.


----------



## alceryes

Godhand007 said:


> _GPU 12v in 'current, min, max'. _I didn't get this? Could you elaborate?
> My GPU has two _3 pins _and one_ 6 pin. _And like I said, this did work fine for a while (maybe days) on my RM1000i and also for few runs on ANTEC but won't even work for few minutes now.


Just for a sanity check, can you reset all GPU MPT/driver OC settings to stock and run the card hard for at least an hour. I'm wondering if you've got some degradation creeping in.

Unfortunately, if it works fine, you've probably got a day's worth of testing ahead of you. Start with no changes to MPT, just OC within the drivers. Test hard for at least an hour. Then make a VERY slight MPT bump. Test hard for at least an hour. Etc., etc.

Do you still have your old PSU? Could your old PSU actually be okay and the GPU have issues?


----------



## Godhand007

alceryes said:


> Just for a sanity check, can you reset all GPU MPT/driver OC settings to stock and run the card hard for at least an hour. I'm wondering if you've got some degradation creeping in.


The card is just two months old. Doubt degradation would kick in this early and this is Toxic EE version which is supposed to be one of the best built 6900XTs out there.



> Unfortunately, if it works fine, you've probably got a day's worth of testing ahead of you. Start with no changes to MPT, just OC within the drivers. Test hard for at least an hour. Then make a VERY slight MPT bump. Test hard for at least an hour. Etc., etc.


It works fine at default and with max power limit in Wattman without issues. Played hours on it just 1-2 days ago.



> Do you still have your old PSU? Could your old PSU actually be okay and the GPU have issues?


That's what I am trying to find out. I have a CX750F right now along with the Antec SP 1300. Both work fine without using MPT. I have done a clean install of the drivers as well. The RM1000i is in RMA. I think there is a possibility that the card is damaging PSUs, but then again they work fine at default, including the max PL limit in Wattman.


----------



## deadfelllow

Who is gonna buy this card and try MPT?

TOXIC AMD Radeon™ RX 6950 XT Limited Edition
OC BIOS up to 2565 MHz, 16GB/256-bit GDDR6, 18 Gbps effective
www.sapphiretech.com


----------



## PG705

The 6950 XT seems to be an overclocking beast as far as I can tell from the early reviews… but who will buy this card when OC'ed 6900 XTs beat it and RDNA 3 is coming at the end of the year?


----------



## LtMatt

PG705 said:


> The 6950 XT seems to be an overclocking beast as far as I can tell from the early reviews… but who will buy this card when OC'ed 6900 XTs beat it and RDNA 3 is coming at the end of the year?


Yep, it does seem to overclock well, especially considering they're all air-cooled cards.

Who is going to buy one and compare it to a XTXH at 2.8Ghz?

I nominate @jonRock1992.


----------



## No-one-no1

The 6950 XT will probably not clock any higher than the 6900 XT if AMD won't allow more volts to the core.
Does anyone have info on the max core voltage?
(I also saw some rumors that MPT might finally be able to allow more voltage, but could not find any confirmation yet.)


----------



## EastCoast

No-one-no1 said:


> 6950xt will probably not clock any higher than the 6900xt, if amd won't allow more volts to the core.
> Does anyone have info on the max core V?
> (also saw some rumors that mpt might finally be able to allow more V, but could not find any confirmation yet)











[Various] 6950/6750xt benchmark Review
6950xt https://www.thefpsreview.com/2022/05/10/msi-radeon-rx-6650-xt-gaming-x-8g-video-card-review/ https://www.techpowerup.com/review/msi-radeon-rx-6650-xt-gaming-x/ https://www.techpowerup.com/review/msi-radeon-rx-6950-xt-gaming-x-trio/...
www.overclock.net

6950 XT voltage is 1.2 V. Clocks are the same from what I've seen so far between the 6900 XTXH and the 6950 XT.
VRAM is a lot higher: it can bench at 2400 MHz, game around 2350 MHz, and defaults to 2248 MHz. I provided pics in the link above.


----------



## Enzarch

Comparison images taken from the igorsLAB review, And I have added my 6900 XT LC for further comparison.
The Vclk, Dclk, and the enabling of PPT1 are the most interesting to me, as is the fact that the memory runs at a lower voltage than on the LC.

6900XT Gaming X Trio, Vs 6950XT Gaming X Trio, Vs 6900XT LC


----------



## zwer54

The real question is can we flash 6900 XT into 6950 XT?


----------



## Enzarch

zwer54 said:


> The real question is can we flash 6900 XT into 6950 XT?


Very doubtful, since they are using a different chip designation (KXTX), and we could not flash XTX into XTXH.

I will, however, be trying these 6950 clocks on my LC today


----------



## zwer54

Enzarch said:


> Very doubtful since they are using another chip designation; KXTX, and we could not flash XTX into XTXH.
> 
> I will, however, be trying these 6950 clocks on my LC today


I just tried to match these 6950 details in MPT. Most of them were the same already (I have the Toxic EE), but once I did, the lowest I could set the memory was 2248 MHz, and Time Spy crashed on start... So I guess they are not using the same memory on the 50 models...


----------



## ZealotKi11er

Interesting, the increased fclk, vclk, and dclk. Would be neat to see if VCN is improved for H.264.


----------



## Enzarch

zwer54 said:


> I just tried to match these 6950 details in MPT, most of them were the same already (I have toxic EE) but once I did that, lowest I could set the memory was 2248 mhz and timespy crashed on start... So I guess, they are not using the same memory on 50 models...


No, you will not be able to copy the memory settings; the 6950 uses 18 Gbps memory, not the 16 Gbps found on all the others (except the LC).


----------



## Scorpion667

Godhand007 said:


> The card is just two months old. Doubt degradation would kick in this early and this is Toxic EE version which is supposed to be one of the best built 6900XTs out there.
> 
> 
> It works fine at default and with max power limit in Wattman without issues. Played hours on it just 1-2 days ago.
> 
> 
> That's what I am trying to find out. I have an CX750F right now along with an ANTEC SP 1300. Both work fine without using MPT. I have done a clean install of drivers as well. RM1000i is in RMA. I think there is a possibility that the card is damaging PSUs but then again they work fine at default including max PL limit in Wattman.


I have the exact same card as you, and I experienced shutdowns as well in TS GT2 after a few months of ownership. I replaced the PSU; no dice. Basically it was thermal shutdowns, and a repaste fixed it. If your HS temp is in the low-to-mid 90s, that's likely what's happening.


----------



## RichieRich25

Redid the thermal paste, switching from Kryonaut to Gelid. Same performance, with around a 30°C hotspot delta. I'm running my max power limit at 330 and the TDC at 330. Max core 2750, min 2635, memory at 2150. In Time Spy I average around 56-60°C, but the hotspot will reach 95°C. Is this normal for a Red Devil (air cooled)? I see wattage hit 380. Can't get any GPU scores over 22,800 in Time Spy.


----------



## jonRock1992

LtMatt said:


> Yep it does seem to overclock well, especially considering its all air cooled cards.
> 
> Who is going to buy one and compare it to a XTXH at 2.8Ghz?
> 
> I nominate @jonRock1992.


Lol. I thought about it. They have the 6950 XT Red Devil in stock at my local micro center for $1099. My current water block would probably transfer right over. They even have a Liquid Devil 6950 XT in stock there for $1299.


----------



## J7SC

jonRock1992 said:


> Lol. I thought about it. They have the 6950 XT Red Devil in stock at my local micro center for $1099. My current water block would probably transfer right over. They even have a Liquid Devil 6950 XT in stock there for $1299.


Good prices, for those who don't already have a 6900XT


----------



## RichieRich25

jonRock1992 said:


> Lol. I thought about it. They have the 6950 XT Red Devil in stock at my local micro center for $1099. My current water block would probably transfer right over. They even have a Liquid Devil 6950 XT in stock there for $1299.
> View attachment 2560174


*** in Long Island NY the Red Devil 6950 is 1299 and the Liquid is 1499.

Edit: it looks like they dropped the prices. It definitely was MSRP this AM.


----------



## jonRock1992

If I had a buyer lined up for my Red Devil Ultimate, I'd get the 6950 XT Red Devil and put my water block on it.


----------



## J7SC

jonRock1992 said:


> If I had a buyer lined up for my Red Devil Ultimate, I'd get the 6950 XT Red Devil and put my water block on it.


...too close to RDNA3, IMO. Also, apart from the 'often-referenced-by-me' 🥴 2150 VRAM limit, I don't want anything else from my w-cooled 6900XT (for now)...


----------



## PG705

Enzarch said:


> Comparison images taken from the igorsLAB review, And I have added my 6900 XT LC for further comparison.
> The Vclk, Dclk, and enabling of PPT1, are the most interesting to me, also the fact the memory runs at lower voltage than the LC.
> 
> 6900XT Gaming X Trio, Vs 6950XT Gaming X Trio, Vs 6900XT LC
> 
> View attachment 2560151
> View attachment 2560161
> 
> 
> 
> View attachment 2560153
> View attachment 2560154
> 
> 
> 
> View attachment 2560155
> View attachment 2560156
> 
> 
> 
> View attachment 2560157
> View attachment 2560158
> 
> 
> View attachment 2560159
> View attachment 2560160


Do you think those Vclk and Dclk values will also work on normal 6900 XT’s?


----------



## jonRock1992

J7SC said:


> ...too close to RDNA3, IMO. Also, apart from the 'often-referenced-by-me' 🥴 2150 VRAM limit, I don't want anything else from my w-cooled 6900XT (for now)...


Yeah it would probably be a waste of time and effort.


----------



## PanZwu

jonRock1992 said:


> Lol. I thought about it. They have the 6950 XT Red Devil in stock at my local micro center for $1099. My current water block would probably transfer right over. They even have a Liquid Devil 6950 XT in stock there for $1299.
> View attachment 2560174


oh boi , paid 1450€ 3 months ago for my red devil ultimate :/


----------



## wermad

1399 for the aio 6950 xt, wow 😬


----------



## alceryes

Godhand007 said:


> The card is just two months old. Doubt degradation would kick in this early and this is Toxic EE version which is supposed to be one of the best built 6900XTs out there.


Degradation can definitely happen over just two months, especially if you've been pushing the voltages on it.
Maybe it was a poor bin, and after heating it up for a few weeks its performance is suffering.


----------



## sniperpowa

I toasted a 6900xt in a week…


----------



## wermad

I got my replacement 🥳









Edit: wire management is left


----------



## Speed Potato

So does anyone have confirmation that the reference 6950 XT has the same PCB layout as the vanilla reference 6900 XT?
For water block compatibility. The Liquid Devil looks great, but a Bykski block is probably good enough for me.


----------



## Enzarch

PG705 said:


> Do you think those Vclk and Dclk values will also work on normal 6900 XT’s?


I tried them on my LC, did not work.


----------



## J7SC

@LtMatt... the prosecution rests


----------



## LtMatt

J7SC said:


> @LtMatt... the prosecution rests


I won't be switching from my Toxic that's for sure, unless one day there is a Toxic Extreme, then... maybe. 

My 6750 XT Red Devil arrives today so interested to see what that can do.


----------



## EastCoast

J7SC said:


> @LtMatt... the prosecution rests


I have 2 issues with that portion of the video.
Issue 1:



As you can see, not all 6900 XTs are limited to just 2150MHz on the VRAM, which leads to issue 2.

Issue 2:
He didn't distinguish between the XT and the XTXH when discussing that. Personally, I never saw the Gigabyte variants as "top quality."


----------



## J7SC

I have the Gigabyte 6900 XT Gaming OC, albeit heavily w-cooled and MPTized. Even when new and in stock form, it could hit 2700 MHz effective (now over 2800 MHz), though there's always variance from chip to chip, and in stock air-cooled form it was incredibly loud. What I really like about this model is the solid 3x8-pin PCB; what I really hate is the artificial VRAM limit first-gen 6900s had to contend with, apart from the 6900 LC ( = 6950XT ?).

In general, I'm fond of Gigabyte (I still use their 2x 2080 Ti XRT WF WB), though in a pinch, I probably would opt for the Asus Strix line as my first choice when talking in general terms (AMD and NVidia)...


----------



## Godhand007

Scorpion667 said:


> I have the exact card as you and experienced shutdowns as well in TS GT2 after a few months of ownership. I replaced the PSU and no dice. It turned out to be thermal shutdowns, and a repaste fixed it. If your HS temp is in the low-to-mid 90s, that's likely what's happening.


I remember seeing yours or a similar post earlier, but in my case the card works fine if the PL limit is kept in check. I saw 97 C on the GPU hotspot and the card was working fine. Also, in the OCCT 3D test with increased PL limits, the PC shuts down immediately without temps reaching above 50 C.


----------



## Godhand007

alceryes said:


> Degradation can definitely happen over just two months, especially if you've been pushing the voltages on it.
> Maybe it was a poor bin and after heating it up for a few weeks it's performance is suffering.


I don't think you got a clear picture of the issue. I was not pushing voltages for two months; everything was mostly at default. I did a few bench runs when I purchased the card with a voltage increase, but that was it. The card does not suffer from any performance issue at default or even at max PL limits in Wattman.


----------



## J7SC

Godhand007 said:


> I don't think you got a clear picture of the issue. I was not pushing voltages for two months; everything was mostly at default. I did a few bench runs when I purchased the card with a voltage increase, but that was it. The card does not suffer from any performance issue at default or even at max PL limits in Wattman.


...when you open MPT and load your bios, do you see one or two 6900XTs in the drop-down menu ? If more than one, then use DDU to uninstall the driver and also uninstall the GPU in device manager.

...if all else fails, you might have to RMA the card since you have switched PSUs etc...it could relate to a component somewhere in the VRM.


----------



## Godhand007

J7SC said:


> ...when you open MPT and load your bios, do you see one or two 6900XTs in the drop-down menu ? If more than one, then use DDU to uninstall the driver and also uninstall the GPU in device manager.


I have tried this but will try it again today.


> ...if all else fails, you might have to RMA the card since you have switched PSUs etc...it could relate to a component somewhere in the VRM.


But what would be the reason for that, as in one that Sapphire would accept? Everything works fine at default and even with max PL limits in Wattman.


----------



## alceryes

Godhand007 said:


> Everything works fine at default and even with max PL limits in Wattman.


Where do the actual failures start to occur? What specific change?


----------



## Godhand007

alceryes said:


> Where do the actual failures start to occur? What specific change?


Refer to this post.


----------



## bloot

Massive DX11 performance improvement due to AMD Adrenalin preview driver.


----------



## LtMatt

Does More Power Tool need an update to recognise the new GPUs? I have a 6750 XT installed but MPT still only recognises my 6900 XT. @hellm 

I am using 1.3.8 Beta 2.


----------



## deadfelllow

bloot said:


> Massive DX11 performance improvement due to AMD Adrenalin preview driver.


dafuq

I thought the preview driver improvement was only for the 6x50 series.


----------



## Godhand007

Godhand007 said:


> Hey Guys,
> 
> *Need urgent inputs regarding a PSU issue that I am facing*. I just bought an ANTEC SP 1300 PSU and I am getting shutdowns while trying MPT with these settings (refer to screenshots below). This happens during TS GT2 and also while using the OCCT power and OCCT 3D tests. Now it obviously seems like an issue related to a defective power supply, but I am not sure. I had the same issue with my RM1000i, which I sent for RMA.
> 
> The unique thing is that in both cases the PSU worked fine for a few runs and then it started having shutdowns. If I reduce the power consumption of the GPU, everything works as expected. But I have a doubt that the issue might be due to the card, i.e. is my card damaging multiple power supplies at higher PL/voltage (TempDepVmin)? I also have a Corsair CX750F which works fine with normal PL limits, which is expected given its lower wattage.
> 
> I have ANTEC SP with me for few more days. What should I try to confirm the culprit here?
> 
> View attachment 2560038


Quick update on this. It seems there is some issue with the card (or something else), as I just checked my system with a Cooler Master MWE 1250 V2 PSU and found the same issues. The weird thing is that the card works perfectly fine with even a lowly CX750F (including the PL limit in Wattman). It just won't take 360/65 TDC limits. I will try an RMA but I doubt it will go anywhere, as it works fine at stock limits. I updated and rolled back my mobo BIOS as well, but that didn't resolve the issue either. Don't know what's wrong with the card.


----------



## mastertrixter

Godhand007 said:


> Quick update on this. It seems that there is some issue with the card (or something else) as I just checked my system with a Cooler Master MWE 1250 V2 PSU and found the same issues. The weird thing is that card works perfectly fine with even a lowly CX750F (including PL limit in Wattman). It just won't take 360/65 TDC limits. I will try an RMA but I doubt it will go anywhere as it works fine at stock limits. I updated/rollbacked my Mobo bios as well but that didn't resolve my issue either. Don't know what's wrong with the card.


have you tried pushing higher than the 360/65?

my card (red devil 6900xt) doesn’t like anything in the 350-400 range but does fine at 400+.


----------



## RichieRich25

Scorpion667 said:


> Sorry for the delay folks. Turns out I lost my initial Gelid numbers during a recent reimage (2021 LTSC!) so I only had bench/temp data 1 month after initial application. I waited 1 month after applying TFX for an even comparison and here are the numbers in temp controlled environment comparing the two head to head.
> 
> Notes:
> -6900XT Toxic EE (360mm AIO)
> -2650min/2750max core, 2150 vMEM, 385w PL, 1080mv
> -This card has high HS temps but after 20+ repastes I can confidently say this is a constant not a variable with my sample
> -100% FAN
> -TIM application was spread + small line on top (pic)
> -Ambient 24c temp controlled env (actual measured was 23.8c/23.9c per temp probe by the rad fan intake)
> -Unfortunately for Gelid testing I ran 22.1.2 driver where as for TFX I ran 21.10.2 which explains the PR score variance. Didn't realize till after. I have screenshots of the TFX with same driver taken right after TIM application and temps are identical between the 2 drivers. I can provide upon request
> 
> The scores are actual hyperlinks to screenshots of each (incl. temps and GPU-Z)
> 
> Google Sheets Link
> 
> 
> | Benchmark | Gelid Score | Gelid Edge Temp | Gelid HS Temp | TFX Score | TFX Edge Temp | TFX HS Temp |
> | --- | --- | --- | --- | --- | --- | --- |
> | Superposition 4k 1 | 18559 | 51 | 88 | 18602 | 51 | 88 |
> | Superposition 4k 2 | 18555 | 51 | 90 | 18579 | 51 | 88 |
> | Superposition 4k 3 | 18493 | 51 | 88 | 18588 | 51 | 87 |
> | TimeSpy 1 | 25153 | 51 | 91 | 25279 | 51 | 87 |
> | TimeSpy 2 | 25144 | 51 | 90 | 25264 | 51 | 87 |
> | TimeSpy 3 | 25162 | 51 | 90 | 25291 | 51 | 88 |
> | Port Royal 1 | 12742 | 52 | 90 | 11828 | 50 | 87 |
> | Port Royal 2 | 12725 | 52 | 90 | 11797 | 52 | 88 |
> | Port Royal 3 | 12745 | 51 | 90 | 11802 | 51 | 87 |
> 
> 
> TL : DR - I think TFX is slightly better for my use case. It's thicker, performs slightly better (1-2c better HS) and holds good temps longer. Gelid started off really good and slowly got worse by a few degrees (~2c HS) over one month where as TFX remained the exact same.


While I'm not water cooled, I was having an issue with hotspot temps and thermal paste. I spent the last 3 days trying to figure it out on my 6900 XT Red Devil. Tried Gelid and Kryonaut, but got the same 30 to 40C differential from GPU temp. After my 30th repaste I ran out of paste, so I had to resort to this SYY paste a friend gave me, which proved great in past builds. Temps improved a bit but were still higher than I wanted. I decided to try the washer mod and boy, what a difference: it dropped hotspot temps by 20C. Running 335 power limit, 335 TDC, 55 SOC, 2650 min, 2750 max, 2150 memory, and GPU temp is 55 max, 85 hot spot. 23107 Time Spy GPU score. Finally the hotspot is no more than a 20C difference.


----------



## EastCoast

deadfelllow said:


> dafuq
> 
> I thought the preview driver improvement was only for the 6x50 series












https://twitter.com/i/web/status/1524406217476648962


----------



## J7SC

bloot said:


> Massive DX11 performance improvement due to AMD Adrenalin preview driver.


...nice! I still have to try out some benching later, but this driver eliminates some stuttering in 4K YouTube playback that the previous driver had introduced (and which wasn't there before).
Also, this driver coming out a day after the 6x50 XT 'refresh' is 'interesting'... hopefully the gains for the older RDNA2 models remain...


----------



## deadfelllow

I just installed the preview driver and it works very well in GoW. The previous driver was at around 60-70% GPU usage and my fps was like 80-90ish in the same area. Now with the preview driver my fps is around 135-140.

1440p Max settings fsr off


----------



## LtMatt

2.8Ghz core clock, love it. Temps looking okay too.


----------



## deadfelllow

LtMatt said:


> 2.8Ghz core clock, love it. Temps looking okay too.


Hahaha, it's a long story. I repasted again today. Temps are not good but not bad either, so I'll take it.


----------



## RichieRich25

Is the preview driver identical to 22.5.1? Nevermind, I see it is not after reading the release notes.


----------



## jonRock1992

deadfelllow said:


> I just installed the preview driver and it works very well in GoW. The previous driver was at around 60-70% GPU usage and my fps was like 80-90ish in the same area. Now with the preview driver my fps is around 135-140.
> 
> 1440p Max settings fsr off
> 
> View attachment 2560254


That's great news! I stopped playing that game because the performance was off-putting. Gonna jump back in it now.


----------



## hellm

LtMatt said:


> Does More Power Tool need an update to recognise the new GPUs? I have a 6750 XT installed but MPT still only recognises my 6900 XT? @hellm
> 
> I am using Beta 1.3.8 Beta 2.


1.3.10 final is the latest version, support for all the refresh cards.

MPT scans the registry and doesn't ask for the active card. So, you can find every key that is found in the registry in the drop down menu. You can delete the old key or use DDU to clean the registry for you.

1.3.8b2 should read a 6750 BIOS, but wouldn't recognize the GPU name. The 6750 has the same Device ID as the 6700, so MPT would think it is a 6700 BIOS. Since the GPU name is unknown, there would be no entry in the dropdown for the 6750.
But you could still use the workaround with saving a .reg file.


----------



## Scorpion667

Godhand007 said:


> I remember seeing yours or a similar post earlier but in my case, the card works fine if the PL limit is kept in check. I saw 97 C on GPU hotspot and card was working fine. Also, in OCCT 3d test with increase PL limits, the PC shuts down immediately without temps reaching above 50 C.


It shuts down at 98c, but hardware monitoring doesn't poll fast enough to catch it actually hitting that temp. In my case it was reading 93-94c HS, then poof, PC shutdown. A trick I used to detect it: configure HWinfo to log temps to file, reproduce the issue (shutdown), then check what values it recorded in the csv file. These specific cards seem to suffer from rapid pump-out of the TIM. Below is what my factory thermal paste looked like when I took it apart for the first time. I think a good test would be running TS GT2 with the PC close to the AC output, feed it cold air, and see if the shutdowns stop. If you want to replace the TIM yourself, use TFX or Gelid GC Extreme and the 1mm nylon washer mod, but keep in mind Sapphire claims replacing TIM voids your warranty, per the screenshot below.
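The HWinfo logging trick above can be automated. Here's a minimal Python sketch of the idea (the CSV column name is an assumption; match it to whatever header your HWinfo log actually writes) that pulls the last few hotspot samples recorded before the log cut off:

```python
import csv
import io

def last_hotspot_readings(csv_text, column="GPU Hot Spot Temperature [°C]", n=5):
    """Return the final n hotspot samples written before the log cut off."""
    rows = csv.DictReader(io.StringIO(csv_text))
    temps = [float(r[column]) for r in rows if r.get(column) not in (None, "")]
    return temps[-n:]

# Toy log: temps climb, then the file simply stops at the moment of shutdown.
log = "Time,GPU Hot Spot Temperature [°C]\n" + "\n".join(
    f"00:00:{i:02d},{t}" for i, t in enumerate([88.0, 91.0, 93.5, 94.0, 96.5])
)

print(last_hotspot_readings(log, n=3))  # → [93.5, 94.0, 96.5]
```

Because the log stops the instant the PC powers off, the tail of the file is the closest you'll get to the real trip temperature.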


----------



## RichieRich25

For what it's worth, here's the preview driver vs 22.5.1 vs 22.4.1. Same MPT settings and overclock settings used. 4.1 in Time Spy.


----------



## Scorpion667

RichieRich25 said:


> While I'm not water cooled, I was having an issue with hotspot temps and thermal paste. I spent the last 3 days trying to figure it out on my 6900 XT Red Devil. Tried Gelid and Kryonaut, but got the same 30 to 40C differential from GPU temp. After my 30th repaste I ran out of paste, so I had to resort to this SYY paste a friend gave me, which proved great in past builds. Temps improved a bit but were still higher than I wanted. I decided to try the washer mod and boy, what a difference: it dropped hotspot temps by 20C. Running 335 power limit, 335 TDC, 55 SOC, 2650 min, 2750 max, 2150 memory, and GPU temp is 55 max, 85 hot spot. 23107 Time Spy GPU score. Finally the hotspot is no more than a 20C difference.


Nice!!! Yeah, I'm also using 1mm nylon washers. Glad you're up and running. Next time you replace TIM, also check the black spacers on the X-plate which push down in the center. I noticed those spacers tend to get pushed outward over time, although they're held with a very viscous but malleable glue. I was able to push them back to center by hand. Try TFX next time if possible, the stuff is THICK.


----------



## Iarwa1N

RichieRich25 said:


> While I'm not water cooled, I was having an issue with hotspot temps and thermal paste. I spent the last 3 days trying to figure it out on my 6900 XT Red Devil. Tried Gelid and Kryonaut, but got the same 30 to 40C differential from GPU temp. After my 30th repaste I ran out of paste, so I had to resort to this SYY paste a friend gave me, which proved great in past builds. Temps improved a bit but were still higher than I wanted. I decided to try the washer mod and boy, what a difference: it dropped hotspot temps by 20C. Running 335 power limit, 335 TDC, 55 SOC, 2650 min, 2750 max, 2150 memory, and GPU temp is 55 max, 85 hot spot. 23107 Time Spy GPU score. Finally the hotspot is no more than a 20C difference.


what is the washer mod?


----------



## wermad

Done 🥵


----------



## Godhand007

mastertrixter said:


> have you tried pushing higher than the 360/65?
> 
> my card (red devil 6900xt) doesn’t like anything in the 350-400 range but does fine at 400+.


No, but what does "anything in the 350-400 range" mean? Do you also get shutdowns at 365 TDC for GFX? What is your SOC TDC?
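For anyone puzzling over how the watt and amp limits relate: roughly, GFX power ≈ core voltage × GFX current, so a TDC cap (amps) and a PL cap (watts) are two views of the same draw. A toy Python sketch with illustrative numbers only (not anyone's actual card):

```python
def implied_tdc(power_w, vcore_v):
    """Rough GFX rail current (A) implied by a power draw (W) at a core voltage (V): I = P / V."""
    return power_w / vcore_v

def implied_power(tdc_a, vcore_v):
    """Rough power (W) a TDC cap (A) allows at a given core voltage (V): P = V * I."""
    return tdc_a * vcore_v

# Illustrative: ~360 W of GFX draw at ~1.1 V core is roughly 327 A,
# which is why a TDC cap set in the mid-300s can end up the binding limit.
print(round(implied_tdc(360, 1.1)))  # → 327
```

Real cards split power across GFX and SOC rails and the core voltage sags under load, so treat this strictly as a rule of thumb.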


----------



## Godhand007

Scorpion667 said:


> It shuts down at 98c but hardware monitoring doesn't poll fast enough to detect it actually hitting said temp. In my case it was reading 93-94c HS then poof PC shutdown. A trick I used to detect it is configure your HWinfo to log temp to file, reproduce the issue (shutdown) then check what values it recorded in the csv file. These specific cards seem to suffer from rapid pumpout effect on TIM. Below is what my factory thermal paste looked like when I took it apart for the first time. I think a good test would be running TS GT2 with the PC close to the AC output feed it cold air and see if the shutdown stops. If you want to try replacing TIM yourself use TFX or Gelid GC Extreme and 1mm nylon washer mod but keep in mind Sapphire claims replacing TIM voids your warranty per below screenshot
> 
> View attachment 2560266
> 
> 
> View attachment 2560264
> View attachment 2560265


I knew I saw your post earlier somewhere. I have already tried logging with GPU-Z at the smallest time resolution possible. Didn't see any issues. And like I said, it works fine with normal PL limits; played hours on it and ran OCCT for 30 minutes, no issues. It just won't take any higher TDC values.


----------



## alceryes

Godhand007 said:


> Refer this post.


There are 27 adjustable settings in that screenshot. I think you should drill down through the ones you changed and find the EXACT culprit. The problem is that you'll need to test hard, probably for a couple of hours, to confirm stability between each change.

Nothing I've seen so far definitively points to the PSUs being the culprit. I still think it's the card.


----------



## Godhand007

alceryes said:


> There are 27 adjustable settings in that screenshot. I think you should drill down through the ones you changed and find the EXACT culprit. The problem is that you'll need to test hard, probably for a couple of hours, to confirm stability between each change.
> 
> Nothing I've seen so far definitively points to the PSUs being the culprit. I still think it's the card.


The only things changed are power limits and TDC values. I think it's already pretty much confirmed the card is the culprit. Just read some related previous posts.


----------



## RichieRich25

Iarwa1N said:


> what is the washer mod?


It's a plastic or nylon washer that helps you apply more pressure so the heatsink contacts the GPU better. You can order these on eBay or Amazon for cheap. You don't even need to take the GPU apart; just remove the main GPU screws one by one, put the washers on, and screw back down. I just did my son's Red Devil 6600 XT without repasting, and the hotspot temps dropped by around 8c.


----------



## LtMatt

hellm said:


> 1.3.10 final is the latest version, support for all the refresh cards.
> 
> MPT scans the registry and doesn't ask for the active card. So, you can find every key that is found in the registry in the drop down menu. You can delete the old key or use DDU to clean the registry for you.
> 
> 1.3.8b2 should read a 6750 BIOS, but wouldn't recognize the GPU name. The 6750 has the same Device ID as the 6700, so MPT would think it is a 6700 BIOS. Since the GPU name is unknown, there would be no entry in the dropdown for the 6750.
> But you could still use the workaround with saving a .reg file.


Many thanks, downloading now.


----------



## J7SC

FYI, the new Radeon driver (22.10.01.03) works great and is 'speedy'.

...and for Sapphire fans:










Sapphire Radeon RX 6950 XT Nitro+ Pure Review


----------



## LtMatt

So the 6750 XT seems to overclock really well. 2950Mhz set in AMD Software and it appears to be stable. Stock clock was 2666Mhz. Stock memory clock is 2258Mhz; it only goes up to 2312Mhz. Stock power limit is around 40W higher than the 6700 XT Red Devil's. You almost don't need MPT with +15% unless clocking to 2.9Ghz.

Actual core clock in game is 2900-2925Mhz. Less clock drop than on the previous 6700 XT I had, which dropped by 75Mhz or more. Ignore the overlay; I've not updated it to say 6750 XT.

Days Gone 1080P max settings, 156FPS. Stock 3090 FE with max power limit gets 220 FPS in the same spot.
















EDIT - I noticed some small artifacts after extended testing, so I dropped the core clock to 2937Mhz max and 2837Mhz min. That is 2900Mhz in game, with a few Mhz of fluctuation above and below. It does about 125-150Mhz more than my 6700 XT Red Devil did.
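The clock figures above boil down to simple arithmetic; here's a quick sketch (the function name is made up for illustration) comparing stock clock, set clock, and the observed in-game clock:

```python
def oc_stats(stock_mhz, set_max_mhz, observed_mhz):
    """Percent overclock versus stock, and how far the clock drops under load."""
    return {
        "oc_over_stock_pct": round((set_max_mhz - stock_mhz) / stock_mhz * 100, 1),
        "clock_drop_mhz": set_max_mhz - observed_mhz,
    }

# Numbers from the post: 2666 MHz stock, 2937 MHz set, ~2900 MHz observed in game
print(oc_stats(2666, 2937, 2900))  # → {'oc_over_stock_pct': 10.2, 'clock_drop_mhz': 37}
```

So the settled overclock is roughly 10% over stock with only ~37 MHz of droop, which is why it compares well against the older 6700 XT's 75 MHz+ drop.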


----------



## alceryes

Godhand007 said:


> The only things changed are power limits and TDC values. I think it's already pretty much confirmed the card is the culprit. Just read some related pervious posts.


Got it. Sorry, missed that one.


----------



## Godhand007

alceryes said:


> Got it. Sorry, missed that one.


NP. I won the shi*t lottery with this card. It *doesn't OC well *and *won't even take relatively tame MPT values*. And the card *doesn't even have the decency to die properly so I could RMA it*. It works without issues on default.


----------



## RichieRich25

Godhand007 said:


> NP. I won the shi*t lottery with this card. It *doesn't OC well *and *won't even take relatively tame MPT values*. And the card *doesn't even have the decency to die properly so I could RMA it*. It works without issues on default.


I'll try your settings on my Red Devil and see if it works properly. What are you running that causes shutdowns? My specs are a 5950X, an EVGA 1000 G6 PSU, and 32GB T-Force XTREEM at 3800MHz. What were your exact settings and what is your overclock at?


----------



## J7SC

...I told my little XTX that if it does not hit at least 19.1K in Superposition 4K, I'll just get the Sapphire Nitro+ Pure 6950XT instead...seems to have worked 😁

MPT 450W/400A, down-clocking and down-volting enabled, DF C-State enabled.


----------



## RichieRich25

J7SC said:


> ...I told my little XTX that if it does not hit at least 19.1K in Superposition 4K, I'll just get the Sapphire Nitro+ Pure 6950XT instead...seems to have worked 😁
> 
> MPT 450W/400A, down-clocking and down-volting enabled, DF C-State enabled.
> View attachment 2560323


Nice! Quick question: my Superposition score is always super low and inconsistent. Are there any settings you change in software for Superposition? The highest I get is 17500, but I used to get 18000 consistently on Windows 10 and older drivers. I forgot what I did to get those numbers.


----------



## J7SC

RichieRich25 said:


> Nice! Quick question, my superposition score is always super low and inconsistent are there any settings you change in and software for superposition. Highest I get is 17500 but I use to get 18000 consistently when I was on windows 10 and older drivers. I forgot what I did to get this numbers


...nothing special, stock settings and Windows 11 Pro for this run with the 3950X workhorse (incidentally, Windows 10 Pro is better; I switched my 5950X / 3090 gamer combo back to Win 10 Pro this week). My XTX card is low ASIC with 1200mm x 62mm dual-pump cooling, which explains the temps above, even at 400W+.

To get consistent scores, you need to make sure there's ample PL available *if you can control temps* (i.e. hotspot).


----------



## LtMatt

Now I've got a nice variety of GPUs, I thought I'd test Warzone, 1080P, lowest possible settings, but particle effects high, bullet impacts enabled, cache sun/shadows on.

5800X3D
3800CL14
AMD Software Preview Driver May 2022
Nvidia 512.59
Warzone training area, spawn and don't move, wait 15 seconds and take screenshot (if anyone wants to compare performance)

6750XT Red Devil (overlay is wrong, it is a 6750 XT)









6900 XTXH Toxic Extreme









3090 Founders Edition









Thanks to the 5800X3D, no GPU is majorly bottlenecked here and GPU utilisation is 95% or so across the board. My 3090 overclock can perhaps go further as COD does not draw much power, still dialling it in. 1440P and 2160P results to follow.


----------



## alceryes

J7SC said:


> ...I told my little XTX that if it does not hit at least 19.1K in Superposition 4K, I'll just get the Sapphire Nitro+ Pure 6950XT instead...seems to have worked 😁
> 
> MPT 450W/400A, down-clocking and down-volting enabled, DF C-State enabled.
> View attachment 2560323


Excellent scores!
I usually don't test with Superposition 4K. Just ran through and got 17090. This is with stock cooler and Radeon software adjustments only (no MPT).
It's good to know that I potentially have another 13%+ performance increase if I slap on a good cooler and pump up the juice!

So, I seem to get a single hitch right as test #10 starts. That tanks my min FPS (and thus my score). Anyone else see this?
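That's the expected behaviour of min-FPS scoring: a single long frame barely moves the average but defines the minimum outright. A tiny Python illustration of why one hitch tanks the min (and any score weighted by it):

```python
def fps_stats(frame_times_ms):
    """Average and minimum of the per-frame instantaneous FPS values."""
    fps = [1000.0 / t for t in frame_times_ms]
    return round(sum(fps) / len(fps), 1), round(min(fps), 1)

smooth = [10.0] * 100           # 100 frames at a steady 100 FPS
hitchy = [10.0] * 99 + [100.0]  # same run, but with one 100 ms hitch

print(fps_stats(smooth))  # → (100.0, 100.0)
print(fps_stats(hitchy))  # → (99.1, 10.0): average barely moves, min collapses
```

One hitch in ~100 frames costs under 1% of average FPS but drops the minimum by 90%, which is exactly the pattern of a single stutter at the start of scene 10.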


----------



## J7SC

alceryes said:


> Excellent scores!
> I usually don't test with Superposition 4K. Just ran through and got 17090. This is with stock cooler and Radeon software adjustments only (no MPT).
> It's good to know that I potentially have another 13%+ performance increase if I slap on a good cooler and pump up the juice!
> 
> So, I seem to get a single hitch right as test #10 starts. That tanks my min FPS (and thus my score). Anyone else see this?


Yup, *that hitch* happens to me, too... I checked the Superposition 4K results on Unigine's site a while back, and the hitch happens either at scene 10 or at 13/14 (my 6900XT 'likes' the former, while the 3090 'prefers' the latter  ).


----------



## alceryes

J7SC said:


> Yup,_ that hitch _- happens to me, too...I checked the Superposition 4K results at Unigine's site a while back, and the hitch either happens at scene 10, or 13/14 (my 6900Xt 'likes' the former, while the 3090 'prefers' the latter  ).


Okay, so it's normal. Thanks!


----------



## RichieRich25

Godhand007 said:


> NP. I won the shi*t lottery with this card. It *doesn't OC well *and *won't even take relatively tame MPT values*. And the card *doesn't even have the decency to die properly so I could RMA it*. It works without issues on default.


Tried the TDC and the SOC values you're having issues with: no shutdowns or temp issues. Not sure what power limit you had, though.


----------



## J7SC

RichieRich25 said:


> Tried the TDC and the SOC values you're having issues with: no shutdowns or temp issues. Not sure what power limit you had, though.





Godhand007 said:


> NP. I won the shi*t lottery with this card. It *doesn't OC well *and *won't even take relatively tame MPT values*. And the card *doesn't even have the decency to die properly so I could RMA it*. It works without issues on default.


...It's a tough call re. warranty rules in your jurisdiction, but sometimes cards aren't screwed together perfectly to begin with, though they function well for a while. I would arm up with good, thick thermal paste and decent, correctly-sized thermal pads (or thermal putty for that matter) and take the card apart for a repaste job.

What might have happened (pure speculation on my part) is that during earlier MPT-PL runs the paste dried out, or an incorrectly seated thermal pad exposed a component to more heat (or indeed a component on the PCB is deteriorating, such as an input filter). These cards are full of sensors feeding the boost algorithms, and it is obviously getting a signal from somewhere that beyond a certain point there are issues... PL is expressed in watts, and virtually all of that power ends up as heat...


----------



## Enzarch

J7SC said:


> ...I told my little XTX that if it does not hit at least 19.1K in Superposition 4K, I'll just get the Sapphire Nitro+ Pure 6950XT instead...seems to have worked 😁
> MPT 450W/400A, down-clocking and down-volting enabled, DF C-State enabled.


BTW, the preview driver gives a significant boost to Superposition, I went from ~18500 before to about 19300 on the preview driver.


----------



## J7SC

Enzarch said:


> BTW, the preview driver gives a significant boost to Superposition, I went from ~18500 before to about 19300 on the preview driver.


Definitely a good driver. As posted before, I was already at ~19k with the old driver, but this one scored higher with 70 MHz lower clock and full down-clock / down-volt. Hopefully, the final WHQL version of this driver is similar.


----------



## Godhand007

RichieRich25 said:


> Tried the tdp and the soc your having issues with and no shut downs or temp issues. not sure what power limit you had though.


It's an issue with the card, but refer to this post if you want to try a few things.


----------



## Godhand007

J7SC said:


> ...It's a tough call re. warranty rules in your jurisdiction, but sometimes cards aren't screwed together perfectly to begin with, though they function well for a while. I would arm up with good, thick thermal paste and decent, correctly-sized thermal pads (or thermal putty for that matter) and take the card apart for a repaste job.


This is out of the question. The warranty would be completely void.



> What might have happened (pure speculation on my part) is that during earlier MPT-PL runs, the paste might have dried out, or an incorrectly seated thermal pad exposed a component to more heat (or indeed a deteriorating component on the PCB, such as an input filter). These cards are full of sensors for the boost algorithms, and it is obviously getting a signal from somewhere that beyond a certain point there are issues... PL is expressed in watts, which is a measure of power dissipated as heat...


I think you might be right. I will still try an RMA. I have proof of this card shutting down on 4 different PSUs, I might be able to force them to accept RMA based on that.


----------



## DvL Ax3l

Hi guys, I've been following this thread for a while but didn't find the answers I'm looking for. Anyway, you're a nice community here. I have a reference 6900XT (XTX), no MPT, and I score around 21500 in Time Spy with 2000/2500 MHz core, 2150 MHz FT memory, +15% PL and 1100 mV in Wattman. If I use MPT with the stock cooler, how much can I improve, and which values should I modify? No custom loop for me until September/October, so I have to wait.


----------



## RichieRich25

Godhand007 said:


> It's an issue with the card but refer this post if you want to try few things; Refer this post.


Yeah, my card would blow up at 460 PL. I'm air-cooled. Not sure what I can push my card to, but if I ever go water-cooled I'll bump up the PL. Waiting for the 7000 series though.


----------



## alceryes

DvL Ax3l said:


> Hi guys, I've been following this thread for a while but didn't find the answers I'm looking for. Anyway, you're a nice community here. I have a reference 6900XT (XTX), no MPT, and I score around 21500 in Time Spy with 2000/2500 MHz core, 2150 MHz FT memory, +15% PL and 1100 mV in Wattman. If I use MPT with the stock cooler, how much can I improve, and which values should I modify? No custom loop for me until September/October, so I have to wait.


I'm not sure about the MPT settings but I have a couple suggestions for reference 6900 XTs.
Try lowering your memory to 2100 MHz when using fast timings to see if you get better scores. I found that mine ran better (higher benchmark scores) at 2100 MHz + FT than at 2150 MHz + FT. It was also unstable in some benchmarks at 2150 MHz + FT.

I would also do the easy pcb back thermal pad mod - [Official] AMD Radeon RX 6900 XT Owner's Club
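To put numbers on why 2100 + FT can beat 2150 + FT: the raw bandwidth gap between the two settings is tiny, so any error-retry overhead at the edge of stability easily eats it. A rough sketch (the helper name is mine; it assumes the reference card's 256-bit GDDR6 bus):

```python
# Theoretical GDDR6 bandwidth for a 256-bit card like the reference 6900 XT.
# GDDR6 moves 8 bits per memory-clock cycle per pin (double data rate x QDR WCK).
def bandwidth_gbps(mclk_mhz: float, bus_bits: int = 256) -> float:
    """Return theoretical bandwidth in GB/s for a given memory clock."""
    effective_gbps_per_pin = mclk_mhz * 8 / 1000   # e.g. 2000 MHz -> 16 Gbps
    return effective_gbps_per_pin * bus_bits / 8   # divide by 8: bits -> bytes

print(bandwidth_gbps(2000))  # stock 16 Gbps -> 512.0 GB/s
print(bandwidth_gbps(2100))  # -> 537.6 GB/s
print(bandwidth_gbps(2150))  # -> 550.4 GB/s, only ~2.4% over 2100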


----------



## DvL Ax3l

alceryes said:


> I'm not sure about the MPT settings but I have a couple suggestions for reference 6900 XTs.
> Try lowering your memory to 2100 MHz when using fast timings to see if you get better scores. I found that mine ran better (higher benchmark scores) at 2100 MHz + FT than at 2150 MHz + FT. It was also unstable in some benchmarks at 2150 MHz + FT.
> 
> I would also do the easy pcb back thermal pad mod - [Official] AMD Radeon RX 6900 XT Owner's Club


I'm using 2150 MHz FT because I didn't see any degradation in Time Spy scores and a small boost in Superposition, around 50-100 points. About the thermal pad mods: my VRAM runs around 85-95 °C during long heavy game sessions. Anyway, the Gelid pads are on my Amazon list 😏


----------



## Godhand007

Open question to all: before I submit my card for RMA, I want to try one more thing. The shutdown temp for the TOXIC EE and LC OEM is 103 °C. It could be that this temp is being reached for a brief instant that the sensors are not able to pick up. Does anyone know of an XTXH BIOS with a shutdown temp higher than 103 °C? I remember the shutdown temp for the reference card being 118 °C.


----------



## Godhand007

Godhand007 said:


> Open question to all: before I submit my card for RMA, I want to try one more thing. The shutdown temp for the TOXIC EE and LC OEM is 103 °C. It could be that this temp is being reached for a brief instant that the sensors are not able to pick up. Does anyone know of an XTXH BIOS with a shutdown temp higher than 103 °C? I remember the shutdown temp for the reference card being 118 °C.


Holy s***, I think we know our culprit exactly now. Just flashed a PowerColor BIOS with a 118 °C shutdown temp. @Scorpion667 was right all along. Just need to do a little more testing to remove any remaining doubts.


----------



## Scorpion667

Godhand007 said:


> Holy s***, I think we know our culprit exactly now. Just flashed a PowerColor BIOS with a 118 °C shutdown temp. @Scorpion667 was right all along. Just need to do a little more testing to remove any remaining doubts.
> 
> View attachment 2560386


I wasn't aware the PowerColor XTXH vBIOS works on this card, great find. I thought that since the PowerColor has a different PCIe power cable setup (3x8pin) it would brick a Toxic EE. You have the Toxic Extreme Edition, right?

Sorry to hear about the issue, but yeah, something is up with this model and hotspot temps. You're the 4th person I've seen experience this with the Toxic EE, myself included. Probably a mix of high thermal density, bad factory thermal paste and bad mounting. My other theory is that both the die and cold plate are concave, thus exacerbating TIM pump-out.


----------



## Godhand007

Scorpion667 said:


> I wasn't aware the PowerColor XTXH vBIOS works on this card, great find. I thought that since the PowerColor has a different PCIe power cable setup (3x8pin) it would brick a Toxic EE.


I took a chance since the card has dual bios.


> You have the Toxic Extreme edition right?


Yes


> Sorry to hear regarding the issue but yeah something is up with this model and HS temps. You're the 4th person I've seen experience this with toxic EE, myself included. Probably a mix of high thermal density, bad factory thermal paste and bad mounting. My other theory is that both the die and cold plate are concave thus exacerbating TIM pump out


Could be. Now I don't know whether to RMA it or not. Someone from Sapphire's local service center told me that it could be two and a half months before I get a replacement, or they could send me back the same card again.
The card won't even do 2800 MHz, even with 1.287 V. Truly a shi*ty situation.


----------



## zwer54

So I flashed my 6900XT Toxic EE with 6950XT Toxic LE and then I had to load my original stock bios into MPT and pretty much everything works as before but now I can overclock memory to almost 2500 MHz. However, benchmarks did not improve by much. I guess 2100 with tight latency still scores almost the same.









www.3dmark.com


----------



## RichieRich25

Just out of curiosity what pl are you air cooled guys using in MPT. I know it depends on how well your particular card cools and silicon lottery but just wondering what's a the typical limit you guys are using and what is your max core clocks in something like timespy


----------



## Godhand007

zwer54 said:


> So I flashed my 6900XT Toxic EE with 6950XT Toxic LE and then I had to load my original stock bios into MPT and pretty much everything works as before but now I can overclock memory to almost 2500 MHz. However, benchmarks did not improve by much. I guess 2100 with tight latency still scores almost the same.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2560400


Can you please provide a link to the 6950LE bios?


----------



## zwer54

Godhand007 said:


> Can you please provide a link to the 6950LE bios.


VGA Bios Collection: Sapphire RX 6950 XT 16 GB | TechPowerUp


----------



## Godhand007

zwer54 said:


> VGA Bios Collection: Sapphire RX 6950 XT 16 GB | TechPowerUp


Thanks. Did you face any issues with the BIOS update, or did it go smoothly?


----------



## zwer54

Godhand007 said:


> Thanks. Did you face any issue with bios update or did it go smoothly?


It's actually the Extreme card, but I'm not sure if Air Extreme or Liquid... Had no issues at all; I just booted into a USB-bootable Ubuntu and ran a force flash. Went smooth as butter. But once you go back to Windows, the GPU stays in "safe mode" at 500 MHz only. Some of the SoC settings cause this. So I just loaded the stock BIOS from my GPU into MPT and rebooted, and then it was all fine; you can bench with VRAM up to 2480 MHz, but of course without fast timings.

Also, I set the switch to the silent BIOS and flashed that one, then tested it; switching back to position 1 boots back into the stock BIOS as it was.
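For reference, the live-USB force-flash session boils down to a handful of amdvbflash commands. Sketched below as a dry-run outline that only prints the steps; the adapter index, file names and even the exact flags are from memory and can differ between amdvbflash builds, so check your own tool's help text before running anything for real:

```python
# Dry-run outline of a USB-Ubuntu force-flash session. Nothing here touches
# hardware: the commands are printed, not executed. Adapter index ("0") and
# file names are placeholders; verify flags against your amdvbflash build.
FLASH_PLAN = [
    "sudo ./amdvbflash -i",                   # list adapters, note the index
    "sudo ./amdvbflash -s 0 backup.rom",      # save the current vBIOS first
    "sudo ./amdvbflash -p 0 new6950.rom -f",  # force-program the new image
]

for step in FLASH_PLAN:
    print(step)
```

Flashing the silent/secondary BIOS position first, as described above, keeps the primary position as a known-good fallback.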


----------



## J7SC

zwer54 said:


> It is actually extreme card but not sure if air extreme or liquid... Had no issues at all, just went into usb bootable ubuntu and run force flash. Went smooth as butter. But then once you go back to windows, gpu stays in "safe mode" at 500 mhz only. Some of the settings for SoC makes this happening. So I just loaded stock bios from my gpu and rebooted and then it was all fine and you can bench with ram up to 2480 mhz. But ofc without fast timing.
> 
> Also I set the switch to silent bios and flash that one, tested it, switching back to pos 1 boots back on stock gpu as it was.


Thanks - that sounds extremely interesting! I'll have to find the Gigabyte 6950 Gaming OC BIOS (checked yesterday at TPU, both verified and unverified) and try that out... my 6900XT also has dual BIOS and 3x8-pin... maybe I can stop complaining about the 2150 VRAM lock...


----------



## zwer54

J7SC said:


> Thanks - that sounds extremely interesting  ! I' have to find the Gigabyte 6950 Gaming OC bios (checked yesterday at TPU for both verified, unverified) and try that out...the 6900XT also has dual bios and 3x8 pin...may be I get to stop complaining about the 2150 VRAM lock...


Honestly, not worth the hassle. It looks like they just loosen the timings to get higher clocks, which ends up being the same thing in the end. It probably gains somewhere and loses somewhere, but overall there's not much improvement. I am more mad about my Time Spy result, as I can't get the graphics score to 25k, and I guess temperature is my issue, but there's not much more I can do...


----------



## J7SC

zwer54 said:


> Honestly, not worth the hassle. It looks like they just loosen the timings to get higher clocks which ends up the same thing in the end. Probably benefits somewhere and loses somewhere but in the end, there's no much improvement. I am more mad about timespy result as I can't get it to 25k graphic score and I guess temperature is my issue but there's not much more I can do...


...I would think that the 18 Gbps VRAM has slower base timings; it will probably come down to the individual VRAM on a specific card, and also cooling. With dual vBIOS, it's worth a shot anyway.


----------



## alceryes

zwer54 said:


> So I flashed my 6900XT Toxic EE with 6950XT Toxic LE and then I had to load my original stock bios into MPT and pretty much everything works as before but now I can overclock memory to almost 2500 MHz. However, benchmarks did not improve by much. I guess 2100 with tight latency still scores almost the same.


Maybe there's a sweet spot with the VRAM.
Mine is at 2100 MHz and FT. If I try to go to 2150 MHz and FT I actually score lower on some benchmarks.


----------



## Godhand007

zwer54 said:


> It is actually extreme card but not sure if air extreme or liquid... Had no issues at all, just went into usb bootable ubuntu and run force flash. Went smooth as butter. But then once you go back to windows, gpu stays in "safe mode" at 500 mhz only. Some of the settings for SoC makes this happening. So I just loaded stock bios from my gpu and rebooted and then it was all fine and you can bench with ram up to 2480 mhz. But ofc without fast timing.
> 
> Also I set the switch to silent bios and flash that one, tested it, switching back to pos 1 boots back on stock gpu as it was.


Nice. I tried that as well but switched back to the silent BIOS when I saw the GPU exhibiting the 500 MHz issue.


----------



## zwer54

Godhand007 said:


> Nice. I tried that as well but switched back to silent bios when I saw GPU causing the 500MHz issue.


This actually happens because of a few SoC-related parameters in MPT, the ones from 58 up to 62. But literally just load your old BIOS into MPT and then adjust to whatever suits you. I'd say then the only difference is probably in the memory timings...


----------



## J7SC

...it really is just about having at least _*the opportunity to check *_higher VRAM frequencies > 2150. As mentioned, I figure the timings are slower w/ the 6950XT/18 G VRAM vbios; still, in my tests on my 6900XT card, 2150 + FT (~ 2140 per HWInfo) consistently produced the best scores, indicating that there's some _potential_ headroom.


----------



## alceryes

J7SC said:


> ...it really is just about having at least _*the opportunity to check *_higher VRAM frequencies > 2150. As mentioned, I figure the timings are slower w/ the 6950XT/18 G VRAM vbios; still, in my tests on my 6900XT card, 2150 + FT (~ 2140 per HWInfo) consistently produced the best scores, indicating that there's some _potential_ headroom.


I'm sure I'd need to dive into MPT to exercise my card more but I'm happy with what I've got, for now.


----------



## jonRock1992

LtMatt said:


> Now I've got a nice variety of GPUs, I thought I'd test Warzone, 1080P, lowest possible settings, but particle effects high, bullet impacts enabled, cache sun/shadows on.
> 
> 5800X3D
> 3800CL14
> AMD Software Preview Driver May 2022
> Nvidia 512.59
> Warzone training area, spawn and don't move, wait 15 seconds and take screenshot (if anyone wants to compare performance)
> 
> 6750XT Red Devil (overlay is wrong, it is a 6750 XT)
> View attachment 2560324
> 
> 
> 6900 XTXH Toxic Extreme
> View attachment 2560325
> 
> 
> 3090 Founders Edition
> View attachment 2560326
> 
> 
> Thanks to the 5800X3D, no GPU is majorly bottlenecked here and GPU utilisation is 95% or so across the board. My 3090 overclock can perhaps go further as COD does not draw much power, still dialling it in. 1440P and 2160P results to follow.


Nice! I'm picking up a 5800X3D in a few days. I'm sick of being CPU bottlenecked in Death Stranding DC.


----------



## jonRock1992

I tried flashing the 6950 XT vBIOS to my 6900 XTXH Red Devil. It boots up, but I'm having weird issues. It probably just comes down to this freaking Dark Hero motherboard. I can get the full memory bandwidth of the 6950 XT, BUT Smart Access Memory doesn't work. I believe this is because my motherboard forces CSM to be enabled when this vBIOS is active. I only get a video signal in Windows. I can't access my UEFI settings because I get error code D6, so I can't just disable CSM and enable SAM.

@zwer54
Were you able to get smart access memory working?


----------



## Scorpion667

Godhand007 said:


> I took a chance since the card has dual bios.
> 
> Yes
> 
> Could be. Now I don't know whether to RMA it or not. Someone from Sapphire's local service center told me that it could be 2 and half months before I get a replacement, or they could send me back the same card again.
> The card won't even do 2800Mhz even with 1.287 volts. Truly a shi*ty situation.


Show Sapphire a screenshot of a 109 °C hotspot temp; that should be enough to get their attention. Ask what the quickest option for resolution is and evaluate from there. Maybe you can use the card at a lower PL until they get a replacement, then swap it out same day or something. Or sell it.

I just replaced the thermal paste on mine, but those pricks apparently void your warranty for it. Yeah, some places have laws against warranty-void stickers, although I think you still need to spend your own money to sue them over it, which...


----------



## Godhand007

Scorpion667 said:


> Show Sapphire a screenshot of 109c hotspot temp should be enough to get their attention.


But that was with the PowerColor BIOS. They are asking for GPU-Z logs under 3D load. If I provide them with logs for that, they will instantly know something is wrong, since the shutdown temp for the Sapphire Toxic EE BIOS is 103 °C.



> Ask what the quickest option is for resolution and evaluate from there. Maybe you can use the card on lower PL until they get a replacement then you can swap it out same day or something. Or sell it.


1. I have sent them a reply with GPU-Z logs showing 96 °C within a few seconds at max fan speed. I told them that I shut down the PC to prevent damage. Haven't heard a response from them.
2. About the same-day replacement: not happening.
3. Selling it: I have already posted a sales listing, but there is not much demand for AMD cards where I live. If it does sell, it would be at a ~30% loss. That's a lot of money for ~2 months of usage.



> I just replaced the thermal paste on mine but those pricks apparently void your warranty for it. Yeah some places have laws against void warranty stickers although I think you still need to spend your money to sue them for it which...
> View attachment 2560494


Exactly. I already have a petition against MSI which I have to file in court.
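In case it helps anyone else preparing an RMA ticket: here's a rough way to pull the peak hotspot out of a GPU-Z "Log to file" dump. The function name is mine, and it assumes a delimited text log with a header row whose hotspot column contains the words "Hot Spot" (the exact header varies by GPU-Z version):

```python
import csv

def max_hotspot(log_path: str) -> float:
    """Scan a GPU-Z sensor log and return the highest hotspot reading.

    Assumes a comma-separated log with a header row; the hotspot column is
    located by substring match since GPU-Z header names vary by version.
    """
    peak = float("-inf")
    with open(log_path, newline="", encoding="utf-8", errors="replace") as f:
        reader = csv.reader(f)
        header = next(reader)
        col = next(i for i, name in enumerate(header) if "Hot Spot" in name)
        for row in reader:
            try:
                peak = max(peak, float(row[col].strip()))
            except (IndexError, ValueError):
                continue  # skip blank or malformed lines
    return peak
```

A single peak number from a long gaming session is a lot more persuasive in a support ticket than a screenshot taken at a random moment.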


----------



## LtMatt

jonRock1992 said:


> Nice! I'm picking up a 5800X3D in a few days. I'm sick of being CPU bottlenecked in Death Stranding DC.


I’m sure you won’t regret it Jon.


----------



## LtMatt

Another performance comparison in Warzone, Rebirth, 1080P. Smart Access Memory really helps RDNA2 push extra frames in this game and the 5800X3D gave all GPUs more headroom before becoming CPU limited.

6900 XTXH





6750 XT





3090 FE


----------



## JosiahBradley

Wow, this is a long thread. I finally upgraded GPUs to the 6900 XT (XTXH), of course days before the 6950 XT launch, because why not, similar price. Trying to secure my old place on the Superposition leaderboards, for water at least; I doubt I'll ever get into LN2. Currently at 18346 in 4K Optimized, but at relatively "low" clocks: 2700/2100 at 1175 mV. Also, my RAM has always been weird; even though it's sold as 3800 CL14 I could never really get it stable, so I'm running 1700 FCLK and horrible timings like 24, lol. The 5800X3D is newly installed, so I'll hopefully fix it up soon.

On to Time Spy again to see if I can finally break 30k.









UNIGINE Superposition benchmark score (benchmark.unigine.com)

I scored 20 007 in Time Spy (AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11) - www.3dmark.com


----------



## RichieRich25

RichieRich25 said:


> I figured it was the kryonaut. The last time I replaced it on the 6900xt. I noticed the paste was so hard and crusty that I was able to remove the paste with one swipe but thought maybe I got a bad batch but now my hotspot is high again . Going to try the gelid extreme. Heard good things


You're right below me in Superposition for 6900s. I can't get any higher with my 5950X, which leads me to believe that all-core clocks give a better score than single-core boost. The 5800X was running 4825 all-core with boost going to 4950. My 5950X all-core is 4.7 with boost at 5.1, and I can't even come close to my old score.


----------



## deadfelllow

Scorpion667 said:


> Show Sapphire a screenshot of 109c hotspot temp should be enough to get their attention. Ask what the quickest option is for resolution and evaluate from there. Maybe you can use the card on lower PL until they get a replacement then you can swap it out same day or something. Or sell it.
> 
> I just replaced the thermal paste on mine but those pricks apparently void your warranty for it. Yeah some places have laws against void warranty stickers although I think you still need to spend your money to sue them for it which...
> View attachment 2560494


I did all the things you said a month ago. And they didn't even agree to change my thermal paste. They said a 95 °C hotspot is typical. And when I asked why I shouldn't just repaste it myself, they replied that they would void the warranty XD

sapphire sux


----------



## RichieRich25

deadfelllow said:


> I did all the things you said 1 month ago. And they didnt even attend to change my thermal paste . They said 95c hotspot is typical. And when i say Why wouldnt i paste myself they replied with we are gonna block the warranty XDDDDDDDD
> 
> View attachment 2560530


That's crazy. This is why I purchase my cards from microcenter. They don't punish you for taking your card apart.


----------



## Godhand007

deadfelllow said:


> I did all the things you said 1 month ago. And they didnt even attend to change my thermal paste . They said 95c hotspot is typical. And when i say Why wouldnt i paste myself they replied with we are gonna block the warranty XDDDDDDDD
> 
> sapphire sux
> 
> View attachment 2560530


Well, that sucks. I can bypass them and submit the RMA directly to the local service center, but I don't know whether they would just send me the same card back after two or more months of waiting, or a different one. Never going with Sapphire again unless there is no other option. Crap product and crap service.


----------



## DvL Ax3l

Godhand007 said:


> Well, that sucks. I can bypass them and submit the RMA directly to the local service center, but I don't know whether they would just send me the same card back after two or more months of waiting, or a different one. Never going with Sapphire again unless there is no other option. Crap product and crap service.


Theoretically the warranty labels are illegal and you can repaste your card, but you know nobody will sue them... the costs are too high to sustain.
XFX, for example, does not invalidate the warranty if you change the cooling solution or repaste the GPU, at least in the USA.
In Italy, I really don't know how to proceed.


----------



## Godhand007

DvL Ax3l said:


> Theoretically the warranty labels are illegal and you can repaste your card, but nobody will sue them... The costs are too high to sustain.
> XFX, for example, does not invalidate the warranty if you change the cooling solution or repaste the GPU, at least in the USA.
> In Italy, I really don't know how to proceed.


Too big of a risk of voiding the warranty in case something goes wrong in the future. I am thinking of taking a loss of ~30-40% and just selling it.


----------



## Godhand007

Question: has anyone been able to achieve higher core clocks using different BIOSes?


----------



## deadfelllow

Has anyone polished gpu Die and Coldplate? 

Any improvement?


----------



## jonRock1992

Godhand007 said:


> Question: Has anyone been able to achieve higher core clocks using different bioses/biosii?


I think people that have flashed the XTLC vBIOS saw gains. So far, for me, the 6950 XT vBIOS gave me worse performance. I haven't messed around with it too much, but without S.A.M. I won't be doing well in Timespy. So it's not worth it to use the 6950 XT vBIOS. I'm going to sell my ASUS Dark Hero so I can get a motherboard that's compatible with the XTLC vBIOS and Red Devil Ultimate GPU.


----------



## Godhand007

jonRock1992 said:


> ... XTLC vBIOS and Red Devil Ultimate GPU.


I have tried both of these, and one from ASRock; saw zero gains.


----------



## jonRock1992

Godhand007 said:


> I have tried both of these, and one from ASRock; saw zero gains.


Really? Hmmm. Might not change out the mobo then.


----------



## Godhand007

jonRock1992 said:


> Really? Hmmm. Might not change out the mobo then.


Do wait for responses from others though. My card is a sh*te overclocker; you might see different results based on your card's capability. The LC BIOS does seem to give points in TS because of the higher memory frequencies.


----------



## J7SC

jonRock1992 said:


> Really? Hmmm. Might *not change out the mobo* then.





RichieRich25 said:


> Ur right below me in superposition for 6900s. I can't get any higher* with my 5950* which leads me to believe that all core clocks give better score than single. 5800 was running all core 4825 with boost going to 4950. My 5950 all core is 47 with boost at 5.1 and I can't even come close to my old score


I don't think it is the mobo or CPU... At 4K (or 8K) the CPU itself plays less of a role, though decent system RAM certainly will. On the left, an Asus CH8 Hero WiFi w/ _3950X_ with 16c/32t enabled; on the right, an older screenie for the Asus CH8 Dark Hero and _5950X_ with 16c/32t enabled. All that said, you can even disable CCX1 to emulate your prior 5800X tests... more likely, Windows update patches are to blame - or Win 11 itself.


----------



## RichieRich25

J7SC said:


> I don't think it is the mobo or CPU...At 4K (or 8K), the CPU itself will play less of a role, though decent system RAM certainly will. On the left, Asus CH8 Hero Wifi w/ _3950X_ with 16c/32t enabled, on the right an older screenie for Asus CH8 Dark Hero and _5950X_ with 16c/32t enabled. All that said, you can even disable CCX1 to emulate your prior 5800 tests...more likely, Windows update patches are to blame - or Win 11 itself.
> View attachment 2560572


Figured it out. Super strange, but running AMD's stats overlay in Superposition decreased my score by around 1000 points. Turned off the overlay and scores improved by a lot, and I was finally able to beat my previous high score with the 5950X. I still decided to up my TDC and curve optimizer a bit. Here's my new highest score. Wonder what the highest air-cooled score out there is.








UNIGINE Superposition benchmark score (benchmark.unigine.com)


----------



## RichieRich25

Was able to get another 200 points by increasing the all-core clock on the 5950X. I won't leave it like this though, because even though the Superposition scores are higher, Time Spy scores dropped a bit.
5950X all-core set to 4775 and the 6900 XT set to min 2715 / max 2815; anything more than that and I crash, most likely due to temps.
UNIGINE Superposition benchmark score


----------



## EastCoast

Is anyone getting the exact mem core clock that you set it to while gaming? 
Ie: 2150MHz


----------



## RichieRich25

EastCoast said:


> Is anyone getting the exact mem core clock that you set it to while gaming?
> Ie: 2150MHz


No, I get stuck at 2138, and it will fluctuate to 2140 once in a blue moon, but never higher than 2140.


----------



## EastCoast

RichieRich25 said:


> No, I get stuck at 2138, and it will fluctuate to 2140 once in a blue moon, but never higher than 2140.


Thanks, I thought it was just me. I haven't looked into memory clocks in a while, so I'm just noticing it. If I recall correctly, it actually read what you set it to months ago.
I wonder what changed?


----------



## J7SC

...typically 2138, 2140, 2150 and sometimes 2160 (I wish).

On a related note, when setting 2150, Superposition 4K scores about 200-300 pts higher with fast timings compared to default timings on my setup. I'm looking forward to getting hold of the 6950 XT vBIOS for my board once it shows up on TPU.


----------



## zwer54

This doesn't look good, does it? Looks like there's too much pressure in the middle, pushing all the paste out of the centre.


----------



## RichieRich25

zwer54 said:


> This doesn't look good, does it? Looks like there's too much pressure in the middle, pushing all the paste out of the centre.


Doesn't look like enough paste to me. In my experience GPUs usually need a liberal amount of paste. I always spread the paste evenly first and cover the whole chip, then I do the dot method; that way, if there is pump-out, I'm still covered.


----------



## zwer54

I just can't get past a 25k GPU score... I don't get it. Now I've applied liquid metal, the hotspot is absolutely amazing, clocks are high, power is consumed, but the result is not there...


----------



## Godhand007

zwer54 said:


> I just can't get into 25k gpu score... I don't get it. Now I applied liquid metal, hotspot is absolutely amazing now, clocks are high, power is consumed, but the result is not there...


Card is starved of power.


----------



## zwer54

Godhand007 said:


> Card is starved of power.


Well, then not much more I can do since the limit was set to 600W in MPT


----------



## Godhand007

zwer54 said:


> Well, then not much more I can do since the limit was set to 600W in MPT


Oh! I didn't know that. I presume TDC values were set accordingly as well, right?


----------



## zwer54

Godhand007 said:


> Oh! I didn't know that. I presume TDC values were set accordingly as well, right?


yup, 600/600. If I add more voltage it will show more power draw but result doesn't improve...


----------



## Godhand007

zwer54 said:


> yup, 600/600. If I add more voltage it will show more power draw but result doesn't improve...


That is strange. Can you share your adrenaline settings?


----------



## zwer54

Godhand007 said:


> That is strange. Can you share your adrenaline settings?


Ah, I didn't save them. Anyway, I use just MPT and adrenaline, set min clock to 2720, max 2880, in MPT just power limits and set voltage to 1250 min and max (via temp dependent). 
The max I've managed to pull was 507 W from the GPU, and the display on the PSU hit a record high of 845 W, but the result didn't improve. Actually, I can achieve that score with a lot less power and OC, so I'm not sure why, but I guess it's down to this specific GPU itself.

I was thinking temperature was the issue since hot spot was hitting 90+ but now it stays under 75 and everything is the same...


----------



## Godhand007

zwer54 said:


> Ah, I didn't save them. Anyway, I use just MPT and adrenaline, set min clock to 2720, max 2880, in MPT just power limits and set voltage to 1250 min and max (via temp dependent).
> Max I've managed to pull was 507W from gpu and the display on psu hit record high 845W but result didn't improve and actually I can achieve that score with a lot less power and oc so I'm not sure why but I guess it's down to this specific gpu itself.
> 
> I was thinking temperature was the issue since hot spot was hitting 90+ but now it stays under 75 and everything is the same...


_Min 2720, max 2880_: it should be 2780-2880. Every bit counts when you are going for benchmark scores. You also need to disable all the deep-sleep options in MPT. Other than that, make sure that nothing is running in the background, etc.


----------



## RichieRich25

What would cause me to crash in Time Spy? I can't push past 2750. GPU temps average 54 °C and the hotspot stays under 83 °C. Is it due to voltage? GPU PL is set to 340 W + 15%. I doubt it's temps, but I'm not sure. The PC doesn't shut down; just Time Spy crashes. In Superposition I can go 2815 max.
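(For anyone comparing PL numbers across posts: as I understand it, the Adrenalin slider scales the base limit that MPT sets, so "340 + 15%" works out to about 391 W. A trivial sketch, helper name mine, just to make the arithmetic explicit:)

```python
def effective_power_limit(base_watts: float, slider_percent: float) -> float:
    """MPT sets the base GPU power limit; the Adrenalin slider scales it."""
    # Integer-friendly form: base * (100 + pct) / 100 avoids float rounding.
    return base_watts * (100 + slider_percent) / 100

print(effective_power_limit(340, 15))  # -> 391.0 W
```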


----------



## zwer54

RichieRich25 said:


> What would cause me to crash in timespy. I can't push past 2750 . GPU temps avg 54 and hot spot under 83c. Is it due to voltage. GPU pl set to 340 + 15%. I doubt it's temp but not sure. I don't shut down just timespy crashes. Superposition I can go 2815 max


Probably voltage. What voltage do you use? For most cards the max is 1175 mV, while for higher clocks you will need more than that.


----------



## RichieRich25

zwer54 said:


> Probably voltage. What voltage you use? For most cards max is 1175mV while for higher clocks you will need more than that.


Running 1175. Not sure how to raise that, in MPT maybe. I'm still on the original BIOS on my 6900 XT. Had a feeling it was voltage.


----------



## RichieRich25

I guess I would have to flash a different BIOS to add more voltage on the Red Devil.


----------



## zwer54

RichieRich25 said:


> I'm guess I would have to flash a different bios to add more voltage on the red devil


You should be able to do it via MPT, under the Power tab, the same place where you set the power limit; above that is voltage. SoC you don't need to touch.

Set it to 1200 and see what happens. Mine goes to 1200 by default, but for anything over 2800 I need more...


----------



## mastertrixter

Godhand007 said:


> No but what does " anything in the 350-400 range " mean? Do you also get shutdowns at 365 TDC for GFX? What is your SOC TDC?


I'm out of town and not 100% sure what SoC is at. If I set the GFX TDC anywhere between 350 and 400 in MPT, it will crash. I've tried most values in that range with the same result. I'm currently set at 450 TDC and it runs like a champ, however. My card is a crap clocker anyway, but at least it's stable at 2650 with 450 TDC.


----------



## RichieRich25

zwer54 said:


> You should be able to do it via MPT, under power Tab same where you set the power limit, above that is voltage, soc you don't need to touch.
> 
> Set it to 1200 and see what happens. Mine goes to 1200 by default but for anything over 2800 I need more...


I feel like I tried it before, but I forgot what issue I was having. Trying it now.


----------



## RichieRich25

zwer54 said:


> You should be able to do it via MPT, under power Tab same where you set the power limit, above that is voltage, soc you don't need to touch.
> 
> Set it to 1200 and see what happens. Mine goes to 1200 by default but for anything over 2800 I need more...


As soon as I run a benchmark I get artifacts and a black screen at 1200.


----------



## deadfelllow

zwer54 said:


> You should be able to do it via MPT, under power Tab same where you set the power limit, above that is voltage, soc you don't need to touch.
> 
> Set it to 1200 and see what happens. Mine goes to 1200 by default but for anything over 2800 I need more...


Can you share a 3DMark Time Spy link?

Also, I achieved this score with 2760-2860 core clock, 2170 fast timings, 600 W TDP, 400 EDC and 65 SOC. Voltage was 1225.









I scored 20 110 in Time Spy
AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
www.3dmark.com


----------



## zwer54

deadfelllow said:


> can you share 3dtimespy link?
> 
> Also i achieved this score with 2760-2860 coreclock 2170 fast timings. 600W tdp 400 EDC and 65 soc. Voltage was 1225
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 20 110 in Time Spy
> AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
> www.3dmark.com


This is the best I achieved. 









I scored 23 645 in Time Spy
Intel Core i7-12700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11
www.3dmark.com


----------



## jonRock1992

J7SC said:


> ...typically 2138, 2140, 2150 and sometimes 2160 (I wish).
> 
> On a related note, when setting 2150, Superpos4k scores about 200 - 300 pts higher with fast timing compared to default timing on my setup. I'm looking forward to get a hold of the 6950XT vbios for my board once they show up on TPU.


The 6950XT vBIOS is fun to play around with, but it just sucks because Smart Access Memory stopped working after flashing it. But I was playing Death Stranding with it using the 6950 XT powerplay tables, 2875MHz max core clock, and 2400 MHz fast-timings. Seemed stable.


----------



## J7SC

jonRock1992 said:


> The 6950XT vBIOS is fun to play around with, but it just sucks because Smart Access Memory stopped working after flashing it. But I was playing Death Stranding with it using the 6950 XT powerplay tables, 2875MHz max core clock, and *2400 MHz fast-timings. Seemed stable*.


...so far, I only found the corresponding 6950 XT vBIOS on Gigabyte's site, but it is one of those auto-installer .exe ones... it says 'flash 100%, success', but on reboot it's back to the old one. I need to get the actual .rom and then try the Ubuntu USB stick method. Then I should be able to tell whether Smart Access Memory is affected as well. 2400 MHz f/t VRAM _with_ Smart Access sounds nice, especially since my card is very well cooled, including the VRAM.


----------



## Godhand007

mastertrixter said:


> I’m out of town and not 100% sure what soc is at. If I set the tdc anywhere between 350-400 desired in mpt it will crash. I’ve tried most values in that range with same result. Im currently set at 450 tdc and it runs like a champ however. My card is a crap clocker anyways but it’s at least stable for 2650 at 450 tdc.


This issue was resolved, or at least we found the culprit: the card was shutting down due to temps. When I flashed a BIOS with higher shutdown temps, I was able to see the higher temps that were causing the shutdowns. Also, don't compete with me on having a crap overclocker. I gave my card *1.325 volts* but it would not pass Time Spy at even ~2750 MHz.


----------



## J7SC

Godhand007 said:


> This issue was resolved, or at least we found the culprit: the card was shutting down due to temps. When I flashed a BIOS with higher shutdown temps, I was able to see the higher temps that were causing the shutdowns. Also, don't compete with me on having a crap overclocker. I gave my card *1.325 volts* but it would not pass Time Spy at even ~2750 MHz.


...


----------



## zwer54

Godhand007 said:


> This issue was resolved, or at least we found the culprit: the card was shutting down due to temps. When I flashed a BIOS with higher shutdown temps, I was able to see the higher temps that were causing the shutdowns. Also, don't compete with me on having a crap overclocker. I gave my card *1.325 volts* but it would not pass Time Spy at even ~2750 MHz.


I would suggest you download this bios:
VGA Bios Collection: Sapphire RX 6900 XT 16 GB | TechPowerUp 

and load that one with MPT and just boost the power limit a bit.
Btw is your gpu 2x8pin or 3x8pin?


----------



## supergt99

jonRock1992 said:


> The 6950XT vBIOS is fun to play around with, but it just sucks because Smart Access Memory stopped working after flashing it. But I was playing Death Stranding with it using the 6950 XT powerplay tables, 2875MHz max core clock, and 2400 MHz fast-timings. Seemed stable.


Did you flash the 6950 XT BIOS on a 6900 XT? I tried it on my Red Devil and the core stays at 500 MHz. Is there a workaround, or does it just not work?


----------



## jonRock1992

supergt99 said:


> did you flash 6950xt bios on 6900xt? i tried it on my red devil and the core stays on 500mhz. is there a workaround? or it just doesnt work?


Yeah, on my Red Devil Ultimate. You just gotta load up the BIOS in MPT and then it will work like normal. Except I couldn't get SAM to work, because my mobo decides to boot with the Compatibility Support Module enabled with the 6950 XT BIOS. Does SAM still work for you?


----------



## supergt99

OK, I think I figured it out: mine is not the Ultimate, so I have 1.175 voltage, and the BIOS is for 1.200. And no, SAM isn't working for me either.


----------



## mastertrixter

Godhand007 said:


> This issue was resolved, or at least we found the culprit: the card was shutting down due to temps. When I flashed a BIOS with higher shutdown temps, I was able to see the higher temps that were causing the shutdowns. Also, don't compete with me on having a crap overclocker. I gave my card *1.325 volts* but it would not pass Time Spy at even ~2750 MHz.


Oh man, that's horrible... sorry, bud. My card seems to be middle of the road compared to most Red Devil 6900 XTs. Temps are gnarly good though: full loop and it never goes above a 60°C hotspot! It just won't clock any higher. I tried up to 550 W TDC and 1.325 V, and it doesn't get any better than 450 W TDC at stock volts.


----------



## RichieRich25

supergt99 said:


> ok i think i figured it out, mine is not the ultimate so i have 1.175 voltage. the bios is for 1.200. no sam isnt working for either.


If you figure out how to go past 1.2 please post it


----------



## supergt99

Sorry, I haven't completely figured it out yet. Right now the only way I know to raise the volts is TEMP_DEPENDENT_VMIN in MPT.


----------



## RichieRich25

supergt99 said:


> sorry i haven't completely figured it out yet. right not the way i know to raise the volts is temp depend vmin in mpt.


Can you post your MPT settings for that?


----------



## supergt99

These are not my settings right now, but I was able to pull this pic from another board. Enable TEMP_DEPENDENT_VMIN, set the voltage in the red box for GFX Vmin low and high to the same value, and reboot. That's all I do for my Red Devil 6900 XT.


----------



## RichieRich25

supergt99 said:


> View attachment 2560677
> 
> 
> This is not my settings right now, but i was able to pull this pic from another board. enable TEMP_DEPENDENT_VMIN
> set your voltage in the red box for GFX vmin low and high the same. and reboot. thats all i do for my red devil 6900xt.


Thank you, I'll try this out, but only for benchmarking. It seems like going this route locks you at whatever voltage you set, which is probably not good for daily use, unless I'm mistaken.


----------



## J7SC

RichieRich25 said:


> Thank you, I'll try this out but only for benchmarking. Seems like going this route locks you at whatever voltage you set which probably not good for daily use unless I'm mistaken


Here is the original source for the above, plus some extra related MPT info.


----------



## Godhand007

J7SC said:


> ...
> View attachment 2560662


Oh, don't be so overdramatic. It was just for 1-2 minutes.


----------



## Godhand007

zwer54 said:


> I would suggest you download this bios:
> VGA Bios Collection: Sapphire RX 6900 XT 16 GB | TechPowerUp


That is my card.



> and load that one with MPT and just boost the power limit a bit.
> Btw is your gpu 2x8pin or 3x8pin?


gpu 2x8pin and 6pin


----------



## zwer54

Godhand007 said:


> That is my card.
> 
> 
> gpu 2x8pin and 6pin


Oh. Well, then it's really strange. Mine goes to 2800 without even touching MPT.
What PSU do you have?


----------



## 8800GT

Off-topic, but with MPT and new dx11 drivers, which are actually pretty great, I feel like some outlets need to re-review the 6900 xt. If DLSS and RT are selling points, being able to easily overclock to 2700mhz should be as well.


----------



## RichieRich25

supergt99 said:


> View attachment 2560677
> 
> 
> This is not my settings right now, but i was able to pull this pic from another board. enable TEMP_DEPENDENT_VMIN
> set your voltage in the red box for GFX vmin low and high the same. and reboot. thats all i do for my red devil 6900xt.


So I gave it **** this morning. I set both min and max to 1200 and was able to go to 2800, but my scores decreased, probably due to the heat increase; maybe a 100 point drop. I initially put 1200, but in HWMonitor over 3 runs I only ever needed 1179, so I decided to go with 1185. Hotspot increased by 5°C and GPU temp by around 5°C, so not too shabby. I'll have to play around some more. Do the min and max need to be at 1185, or can the min stay at the original 800?


----------



## Godhand007

zwer54 said:


> Oh. Well, then is really strange. Mine goes to 2800 without even touching MPT.
> What PSU you have?


Not an issue with the PSU. The card is crap. Refer to this post.


----------



## supergt99

RichieRich25 said:


> So I gave it **** this morning. Set both min and max 1200 and was able to go to 2800 but my scores decreased probably due to heat increase . Maybe like 100 point decrease. I initially put 1200 but in hw monitor doing 3 runs I only needed 1179 each time. So I decided to go 1185. Heat increased by 5c in hotspot and GPU temp increase by around 5c so not too shabby. I'll have to play around so more. Does the min and max need to be at 1185 or can the min stay at the original 800


I believe the min and max have to be the same for it to work.


----------



## RichieRich25

supergt99 said:


> I believe the min and max have to be the same for it to work.


Yeah, the min and max need to be the same. Scores are lower with this method though, because it looks like it messes with the memory clock now: the average memory clock dropped a bit, from 2135 to 2110.


----------



## J7SC

supergt99 said:


> I believe the min and max have to be the same for it to work.





RichieRich25 said:


> Yea the min and max need to be the same. Scores are lower going with this method though because it looks like it messes with the memory clock now. Avg memory clock dropped a bit to 2110 avg from 2135.


...min / max are not the same on my profiles, w/ good results, though min is raised from stock.


----------



## supergt99

J7SC said:


> ...min / max are not the same on my profiles, w/ good results, though min is raised from stock.


I’ll have to give that a try. I’ve been keeping them the same.


----------



## ZealotKi11er

Finally, some benchmarks from HUB with an OCed 6900 XT beating a 3090.


----------



## Luggage

In Sweden atm the Red Devil Ultimate is about 20% cheaper than any other 6900 XT or 3080 Ti. Not gonna lie, it's very tempting, despite promising myself I'd hang on to my 2080 Ti until next gen…


----------



## RichieRich25

ZealotKi11er said:


> Finally, some benchmarks from HUB with a OCed 6900 XT beating 3090.


I have a friend who is a hardcore Nvidia/Intel fan. He currently has a 3090 Ti with an unlimited-power BIOS, and he still loses to me in every benchmark (besides ray tracing) and game we play, and it eats him alive. He's always referring to these reviews.


----------



## Luggage

ZealotKi11er said:


> Finally, some benchmarks from HUB with a OCed 6900 XT beating 3090.


Found any good reviews with OC 6900 XT vs OC 6900 XTXH vs OC 6950XT?


----------



## RichieRich25

J7SC said:


> ...min / max are not the same on my profiles, w/ good results, though min is raised from stock.


You're a lifesaver. This resulted in my highest score. I put the min at 875 and the max at 1190. Better temps, and my Time Spy GPU score shot up to 25700 with just a 2765 max and the min at 2665.


----------



## jonRock1992

I finally got rid of this problematic Asus Dark Hero and swapped it for an MSI X570S Carbon Max. A much better board, in my opinion. It also lets me use the LC vBIOS with my Red Devil Ultimate! No more stupid overcurrent-protection error. I also swapped my 5800X for a 5800X3D.


----------



## J7SC

jonRock1992 said:


> I finally got rid of this problematic Asus Dark Hero and swapped it out for an MSI X570S Carbon Max. Much better board in my opinion. It also lets me use the LC vBIOS with my Red Devil Ultimate! No more stupid overcurrent protection error. I also swapped my 5800X with a 5800X3D.
> View attachment 2560760


Nice looking board & setup ! But please don't talk bad about my fav X570 board, the DarkHero with its lovely DynamicOC feature (spoiler)  ...that said, I do have a couple of MSI boards for other CPUs which are great and have been problem-free for years.


Spoiler: DarthV


----------



## zwer54

Godhand007 said:


> Not an issue with the PSU. Card is crap. Refer this post.


Your hotspot temperature is sky-high, and I'd say that's the reason you can't clock higher. Re-paste the cooler. Also, I never needed to raise SOC TDC; I keep it at 55, as when I tried 60 there was no difference.


----------



## ptt1982

Luggage said:


> Found any good reviews with OC 6900 XT vs OC 6900 XTXH vs OC 6950XT?


This is what I'm also interested in. I'm curious to see how my 6900XT OC'd to 2.67ghz and unleashed by MPT would perform against 6950XT. So far, when OC'd, my card performs slightly better than a stock 6950XT.


----------



## zwer54

RichieRich25 said:


> Your a life saver. This resulted in my highest score. I put the min at 875 and the max at 1190. Better temps and my timespy gpu score shot up to 25700 with just 2765 max and min at 2665


What driver version are you using?


----------



## RichieRich25

zwer54 said:


> What driver version are you using?


Using the preview drivers so unfortunately it's not a verified driver


----------



## RichieRich25

Here's my highest overall score, but not my highest GPU score. I'd have to go back to my PC's 3DMark folder to find that one.








I scored 22 099 in Time Spy
AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11
www.3dmark.com


----------



## zwer54

RichieRich25 said:


> Using the preview drivers so unfortunately it's not a verified driver











I scored 23 415 in Time Spy
Intel Core i7-12700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11
www.3dmark.com





I've tried the settings you mentioned, I've tried much more, I've tried everything, but I cannot get to a 25k GPU score... The funny thing is that I can achieve the same GPU score with 100 W less and a lot less MHz and voltage, so I don't understand... If I add voltage, MHz and power consumption, how does the score not improve? (Hotspot max was 86°C in this test.)


----------



## RichieRich25

zwer54 said:


> I scored 23 415 in Time Spy
> Intel Core i7-12700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11
> www.3dmark.com
> 
> 
> 
> 
> 
> I've tried settings you mentioned, I've tried much more, I've tried everything but I cannot get into 25k gpu score... Funny thing is that I can achieve the same gpu score with 100W less and a lot less mhz and voltage so I don't understand... If I add voltage, mhz and power consumption, how the score doesn't improve? (hotspot max was 86 in this test)


Here's a picture of the settings I used. I messed around with other settings, but these turned out to work in my favor. I also tightened my RAM settings; I can tighten them more, but I haven't gotten around to it yet. I know RAM affects the Time Spy score.


----------



## ZealotKi11er

ptt1982 said:


> This is what I'm also interested in. I'm curious to see how my 6900XT OC'd to 2.67ghz and unleashed by MPT would perform against 6950XT. So far, when OC'd, my card performs slightly better than a stock 6950XT.


That's not really the point. Yes, if you OC to 2.67 GHz you will easily beat a 6950 XT, since that runs 335 W at 24xx MHz.


----------



## alceryes

RichieRich25 said:


> I have a friend who is a hardcore Nvidia/Intel fan. He currently has 3090ti with a unlimited power bios, and he still loses to me in every benchmark(besides ray tracing) and game we play and it eats him alive. He's always referring to these reviews.


People love to hate on AMD.
I truly believe that NVIDIA has a much better (read: more underhanded) PR machine than AMD. One look at the continual banter about AMD's 'bad drivers' (a relic from the early/mid 2000's) showcases this. Both GPU mfgs. have had bad (even horrible) drivers. But AMD has to permanently wear that badge, it seems.

I've switched back and forth between NVIDIA and ATI/AMD so many times that there's no way I can hate on one more than the other. But, if I feel that either of them did a disservice to the consumer, I'll definitely let my feelings be known.


----------



## DvL Ax3l

DvL Ax3l said:


> Hi guys, I've been following this thread for a while but didn't find the answers I'm looking for. Anyway, you're a nice community here. So, I have a reference 6900 XT (XTX), no MPT, and I score around 21500 in Time Spy with 2000/2500 MHz core, 2150 MHz FT memory and +15% PL with 1100 mV in Wattman. If I want to use MPT with the stock cooler, how much can I improve? Which values do I have to modify? Anyway, no custom loop for me until September/October, so I have to wait.


Can nobody give me some advice on that?


----------



## RichieRich25

alceryes said:


> People love to hate on AMD.
> I truly believe that NVIDIA has a much better (read: more underhanded) PR machine than AMD. One look at the continual banter about AMD's 'bad drivers' (a relic from the early/mid 2000's) showcases this. Both GPU mfgs. have had bad (even horrible) drivers. But AMD has to permanently wear that badge, it seems.


Yeah, I agree. According to UserBenchmark, AMD's marketing is top notch, lol. I do agree that bad drivers will be forever synonymous with AMD, but I can't even begin to tell my story about bad drivers and a 2080 Ti Kingpin. The 5700 XT was a nightmare, I'll agree with that, but now it's an amazing card.


----------



## alceryes

DvL Ax3l said:


> Nobody can give me some advice for that?


Increase your voltage to 1125 mV (1.125 V). Try going down to 2100 MHz with FT, or stay at 2150 MHz and turn off FT. Make an aggressive (maybe a bit loud) fan curve. Watch your temps to make sure they don't get too high. Post some Time Spy links with the new settings.

Edit: And turn off zero RPM fan setting. Your fans need to always be on to better handle the immediate power (TDP) load when benchmarking.


----------



## alceryes

RichieRich25 said:


> Yea I agree. According to user benchmark amds marketing is top notch lol. I do agree that bad drivers will be forever synonymous with amd but I can't even begin to tell my story about bad drivers and 2080ti kingpin. The 5700xt was a nightmare though I'll agree with that but now its an amazing card


Ooooo, that card.
Yeah, the 5700 XT wasn't actually a driver issue. It was a _hardware_ issue. AMD was too aggressive with their default power settings for the thermal solution they used (and recommended to AIBs). The end result was tons and tons of CTDs AT STOCK SETTINGS, if your case airflow wasn't up to snuff or the ambient got a little hot.


----------



## Counterassy14

DvL Ax3l said:


> Nobody can give me some advice for that?


Hi, first check which limits you are running into: temp, wattage, clock/voltage.
If the temp and wattage limits are fine and you have already maxed out your clocks, try increasing the voltage a bit.
If you run into the wattage limit but not the temp limit, try increasing that.
If you have maxed out your temps, then you have most likely achieved the most there is.

After this there are very few tweaks you can do: increase FCLK, increase memory clock, decrease SOC voltage.

edit: obviously, keep the clocks as high as they will go at any given voltage
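For anyone following along, that checklist boils down to something like the sketch below. The thresholds and margins here are made-up illustrative numbers, not anything AMD-defined; real readings come from HWiNFO or the Radeon overlay.

```python
# Rough sketch of the limit-checking order described above.
# All thresholds are illustrative assumptions; read real values from HWiNFO.

def binding_limit(hotspot_c, board_power_w, avg_clock_mhz,
                  temp_limit_c=110, power_limit_w=300, set_clock_mhz=2650):
    """Return which limit most likely caps the card, checked in the order above."""
    if hotspot_c >= temp_limit_c - 5:          # within ~5C of the throttle point
        return "temperature"
    if board_power_w >= power_limit_w * 0.98:  # sitting on the power limit
        return "power"
    if avg_clock_mhz < set_clock_mhz * 0.97:   # clocks sag well below target
        return "clock/voltage"
    return "none"                              # headroom everywhere: raise clocks

print(binding_limit(88, 299, 2640))   # a power-limited run
print(binding_limit(72, 250, 2500))   # clocks sagging: needs more voltage
```

Whichever limit comes back first is the knob to raise next, or the sign to stop if it's temperature on stock cooling.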


----------



## Godhand007

zwer54 said:


> Your hotspot temperature is in the sky and I'd say this is the reason for not being able to clock more.


No, that's not it. I have seen this card and a reference one work fine up to 110°C.



> Re-paste the cooler. Also I never needed to add soc TDC, I keep it on 55 as when I tried with 60, there was no difference.


This would result in a complete loss of warranty. Not worth it in my opinion.


----------



## RichieRich25

alceryes said:


> Ooooo, that card.
> Yeah, the 5700 XT wasn't actually a driver issue. It was a _hardware_ issue. AMD was too aggressive with their default power settings for the thermal solution they used (and recommended to AIBs). The end result was tons and tons of CTDs AT STOCK SETTINGS, if your case airflow wasn't up to snuff or the ambient got a little hot.


I had a Red Devil 5700 XT and it did run hotter than expected, but never past 70°C. But the amount of black screens and dev errors made me take it back and get a 2080 Ti, and boy, the 2080 Ti wasn't any better until about a month or two later.


----------



## DvL Ax3l

alceryes said:


> Increase your voltage to 1125 mV (1.125 V). Try going down to 2100 MHz with FT, or stay at 2150 MHz and turn off FT. Make an aggressive (maybe a bit loud) fan curve. Watch your temps to make sure they don't get too high. Post some Time Spy links with the new settings.
> 
> Edit: And turn off zero RPM fan setting. Your fans need to always be on to better handle the immediate power (TDP) load when benchmarking.





Counterassy14 said:


> Hi, first check which limits you are running into: temp, wattage, clock/voltage.
> If temps and wattage limits are fine and you already maxed out your clocks try increasing voltage a bit.
> If you run into wattage limit but not temp limit try increasing that.
> If you maxed out your temps than you most likely achieved the most their is.
> 
> After this their are very little tweaks you could do: increase flck, memory clock, decrease soc voltage.
> 
> edit: obviously keep the clocks as high as they get at any given voltage


So here are some results: Time Spy score with these settings and HWiNFO open.


----------



## alceryes

DvL Ax3l said:


> So here are some results, TimeSpy score with these settings and hwinfo opened


Your graphics score is within spitting distance of mine. There's nothing wrong here.

You'll need better cooling (with more voltage) to go higher. Just because your hotspot temp is only 88°C doesn't mean it's not impacting your card's performance. Newer GPUs are smart: they keep a stranglehold on the card's performance even when your temps are below the reported max limit. If you could get your temps below 65°C you'd see your GPU core ramp up to higher frequencies, but that would require better cooling AND more juice.

It's a very common misconception that any temp below the max limit means your GPU (or even CPU) is free to use as much voltage and run as fast as you want it to. This is far from the truth.
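To illustrate, here's a toy model of that behavior. The 65°C knee and the 5 MHz/°C slope are invented numbers for illustration only, not AMD's actual boost algorithm; the point is just that effective clocks start sliding long before the hard limit.

```python
# Toy model of temperature-dependent boost: the card starts shaving clocks
# well before the hard temperature limit. The 65C knee and 5 MHz/C slope
# are illustrative assumptions, not AMD's real algorithm.

def effective_clock(set_clock_mhz, hotspot_c, knee_c=65, mhz_per_c=5):
    if hotspot_c <= knee_c:
        return set_clock_mhz                      # full boost below the knee
    return set_clock_mhz - (hotspot_c - knee_c) * mhz_per_c

print(effective_clock(2800, 60))   # cool card: holds the set clock
print(effective_clock(2800, 88))   # 88C hotspot: boost already being shaved
```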


----------



## RichieRich25

DvL Ax3l said:


> So here are some results, TimeSpy score with these settings and hwinfo opened
> View attachment 2560775
> View attachment 2560774


If you want to overclock more you have to get a little more aggressive with your fan curve and make sure your case cooling is sufficient but otherwise everything looks good


----------



## DvL Ax3l

alceryes said:


> Your graphics score is within spitting distance of mine. There's nothing wrong here.
> 
> You'll need better cooling (with more voltage) to go higher. Just because your HS temp is only 88C doesn't mean that it's not impacting your card's performance. Newer GPUs are smart. They keep a strangle hold on the card's performance even if your temps are below the reported max limit. If you could get your temps below 65C you'd see your GPU core ramp up to higher frequency but that would require better cooling AND more juice.
> 
> It's a very common misconception that any temp below the max limit means your GPU (or even CPU) is free to use as much voltage and run as fast as you want it to. This is far from the truth.





RichieRich25 said:


> If you want to overclock more you have to get a little more aggressive with your fan curve and make sure your case cooling is sufficient but otherwise everything looks good


So practically MPT is useless for me; I can't OC more with stock cooling, got it. I was thinking of raising the power limit to 330 W, 35 W more than stock, because if I set 2700 MHz it will crash when it reaches 2680 MHz.


----------



## RichieRich25

DvL Ax3l said:


> So pratically MPT it's useless for me, can't OC more with stock cooling, got it, I was thinking raise the power limit to 330W, 35W more than stock, because if I set 2700Mhz it will crash when reach 2680Mhz


Have you tried getting more aggressive with your fan curve? And is the airflow in your case good? This is how my case is set up, and I take the side glass off in the summer because the room does reach high temps. When gaming I don't need much airflow because I don't apply a heavy OC; I only go crazy with airflow for benchmarking. For gaming I run 2675 max and 2575 min.


----------



## DvL Ax3l

RichieRich25 said:


> have you tried getting more aggressive with your fan curve? and is the airflow in your case good. This how my case is setup and i take the side glass off in the summer because the room does reach high temps. When gaming i dont need much airflow because i dont apply heavy oc, i only go crazy in airflow for becnhmarking. For gaming i run 2675 max and 2575min


I've tried 2350/2650/2112 MHz FT @ 1.125 V with all fans at max (GPU/CPU AIO/case) and the case open; my Time Spy score went up to about 21650, 100 points more, so I've already reached the maximum. At this point it's better to stay at 2000/2500 @ 1.095 V (game stable; for Time Spy I need 1.125 V): lower temps, about the same score.


----------



## RichieRich25

DvL Ax3l said:


> I've tried 2350/2650/2112Mhz FT @1.125V with all fans @ max (GPU/CPU AIO/case) with opened case, my timespy score went up about 21650, 100 points more so I've already reached the maximum, at this point better stay with 2000/2500 @ 1.095V (game stable, for timespy I need 1.125V) lower temps, about same score.


I see. If you're trying to improve Time Spy scores, common practice with these cards is to set the min clock no more than 100 below the max, so 2550 min and 2650 max. I notice in games, mainly battle royale games, that setting the min too far from the max sometimes causes stutter, so I go no more than a 200 difference for gaming.


----------



## zwer54

RichieRich25 said:


> Heres a picture of the settings i used. I messed around with other settings but its turned out to work in my favor. I also tighten my ram settings and i can tighten more but havent gotten around to it yet. I know ram affects timespy score.


The RAM is fine, I guess... Tried all your settings but the result is still the same... I can run the same timings in Gear 1 at 4000 MHz, but the Time Spy score doesn't change...


----------



## RichieRich25

zwer54 said:


> Ram is fine I guess... Tried all your settings but result still the same... I can run the same timings in gear 1 but 4000 mhz, but time spy doesn't change the score...


Yeah, I'm at the same point, stuck at 23500-23700. I would love to break 24000 on air, but I need more cooling. Might have to move the PC in front of the window AC and see what I can do. I really only went crazy with Time Spy to mess with my friend who has a 3090 Ti; his best score is 22300, so I'm way beyond his reach unless he goes water, which he's not. I'm for sure getting the next-gen 7900 XT, and I'm going water-cooled for sure.


----------



## T[]RK

Hello!

I'm still unable to buy a new GPU (either RX 6000 or RTX 3000), but for obvious reasons prices on RX 6900 XTs with waterblocks are going down (~$1500 now), though only on cards with waterblocks from two brands: GIGABYTE AORUS Xtreme & XFX Speedster ZERO. I DON'T like water in my system, so maybe I can get one of these cards, remove the waterblock and find a proper (GIGABYTE or XFX) air cooler? My question is simple: do the Xtreme and ZERO share a PCB with other series? I think I saw the Xtreme PCB, but I haven't found the XFX ZERO's yet.


----------



## zwer54

RichieRich25 said:


> yea im at the same point where im stuck at 23500-23700. I would love to break 2400 on air but I need more cooling. Might have to move the pc in front of the window ac and see what i can do. I really only went crazy with timespy to mess with my friend who has a 3090ti. his best score is 22300. So im way beyond his reach unless he goes water which hes not. Im for sure getting the next get 7900xt and im going water cooled for sure.


Well, I am on water and I manage to keep the hotspot under 70°C, but no matter what, the graphics score doesn't improve.
I really want to figure out what is capping it: I can get 24600-24700 with 1.2 V and 2800 MHz, and then when I set it to 1.287 V and 2900 MHz, the hotspot jumps to 86°C (which is still low) and GPU max power goes up from 430 W to 500 W, but the score doesn't improve at all... I just don't get how and why.
Meanwhile, with that overclock, Superposition 8K does improve...
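For what it's worth, first-order physics says part of this is expected: dynamic power scales roughly with V² × f, so small clock gains get very expensive in watts. A quick sketch with the numbers from this post (a rough approximation, not a measurement):

```python
# Rough first-order CMOS dynamic power model: P is proportional to V^2 * f.
# Numbers from the post above: 1.200 V / 2800 MHz vs 1.287 V / 2900 MHz.

def power_ratio(v_old, f_old, v_new, f_new):
    """Predicted ratio of new dynamic power to old, from P ~ V^2 * f."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

ratio = power_ratio(1.200, 2800, 1.287, 2900)
print(f"~{(ratio - 1) * 100:.0f}% more power "
      f"for {(2900 / 2800 - 1) * 100:.1f}% more clock")
```

That predicts roughly 19% more power for a 3.6% clock bump, in the same ballpark as the observed 430 W to 500 W jump, while the score ceiling (set by average clock, memory and CPU) barely moves.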


----------



## RichieRich25

zwer54 said:


> Well, I am on water and I manage to keep hotspot under 70C but no matter what, graphic score doesn't improve.
> I want to figure out really hard what is capping it as I can get 24600-24700 with 1.2V and 2800 mhz and then I set it to 1.287V and 2900 mhz, hotspot jumps to 86C which is still low and gpu max power go up from 430W to 500W but the score doesn't improve at all... I just don't get it how and why.
> While with that overclock, superposition 8K does improve...


When you up your clocks and voltage, do you notice an increase in average clock speed when you look at all the stats in 3DMark? Does increasing your clock speeds increase your average, or are you temperature-bound to the same average no matter how high you go? That's where I'm stuck: I can go 2815, but my average clock speed actually goes down due to heat.


----------



## zwer54

RichieRich25 said:


> When you up your clocks and voltage do you notice an increase in avg clock speed when you look at all the stats in 3dmark?does increasing your clock speeds increase your avg or are you temperature bound to the same avg no matter how high you go. That's where I'm stuck. I can go 2815 but my avg clock speed actually goes down due to heat


Well, yeah, the avg clock goes up and temperatures are still fine.

AMD Radeon RX 6900 XT video card benchmark result - Intel Core i7-12700K Processor,Gigabyte Technology Co., Ltd. Z690 AORUS ELITE DDR4 (3dmark.com)

AMD Radeon RX 6900 XT video card benchmark result - Intel Core i7-12700K Processor,Gigabyte Technology Co., Ltd. Z690 AORUS ELITE DDR4 (3dmark.com)

Have a look at these two results. OK, the memory wasn't the same, but I don't think a 30 MHz difference in average memory clock can make that much of a difference...


----------



## RichieRich25

The average memory clock dropped on me before, and it resulted in something like 300 points less in GPU score. I also use the Razer Cortex app to clean up background apps.


----------



## RichieRich25

zwer54 said:


> Well, yeah, the avg clock goes up and temperatures are still fine.
> 
> AMD Radeon RX 6900 XT video card benchmark result - Intel Core i7-12700K Processor,Gigabyte Technology Co., Ltd. Z690 AORUS ELITE DDR4 (3dmark.com)
> 
> AMD Radeon RX 6900 XT video card benchmark result - Intel Core i7-12700K Processor,Gigabyte Technology Co., Ltd. Z690 AORUS ELITE DDR4 (3dmark.com)
> 
> Have a look at these two results. Ok, memory was not the same but I don't think 30 mhz avg in memory can do so much of a difference...


I also know that in 3DMark, disabling system monitoring in the settings will increase your scores, but I don't think you can post the result on the leaderboard.


----------



## J7SC

RichieRich25 said:


> I also know that in 3dmark if you disable system monitoring in the settings it will increase your scores but I don't think you can post it on the leader board


...I can get past 24.1k easily enough in time spy with a very well-cooled card, but it is all down to not only plenty of watts, but the 'right relationship' between PL watts and amps - the latter's limits also depend on your specific card's PCB design.
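The PL-watts/amps relationship J7SC describes comes down to power = voltage × current: the board throttles at whichever of PPT (watts) or TDC (amps) it reaches first. A minimal sketch, with hypothetical numbers rather than values from any specific card or BIOS:

```python
# Sketch: which limit (PPT in watts vs. TDC in amps) binds first.
# All numbers are hypothetical examples, not values for any specific card.

def binding_limit(ppt_w: float, tdc_a: float, vcore_v: float) -> str:
    """Return which limit caps the core at a given voltage.

    Core power is P = V * I, so at voltage vcore_v the TDC (current)
    limit corresponds to an effective power ceiling of tdc_a * vcore_v.
    """
    tdc_as_watts = tdc_a * vcore_v          # power ceiling implied by TDC
    return "TDC (amps)" if tdc_as_watts < ppt_w else "PPT (watts)"

# Raising PPT alone doesn't help once TDC binds:
print(binding_limit(ppt_w=400, tdc_a=300, vcore_v=1.15))  # TDC: 300A*1.15V = 345W < 400W
print(binding_limit(ppt_w=330, tdc_a=320, vcore_v=1.10))  # PPT: 320A*1.10V = 352W > 330W
```

This is why bumping only the wattage slider sometimes does nothing: if the current limit is the one binding at your voltage, extra PPT headroom goes unused.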


----------



## tommyd2k

I feel the need to share this. My 6900 XT Red Devil (non-XTX-U) has always hit a wall around 23,300 in Time Spy. Using MPT or the EVC2SX in all sorts of configs, my previous max score was 23,793, and that was with wattage I wouldn't use except for benchmarking.

I've been using the EVC2SX to set a 1.2V limit, with Isense duty gain at 8.0, duty nom at 0, and Isense ant at 0.67.
When I got the EVC2 I couldn't find many guides on how to use it, and with those settings the card could go to 2700 MHz and stay under 300W. Time Spy would score around 23,300, and I just left it at that for the last 6 months.

I just beat my high score with 23,935 points (so close to 24k), and I am pretty pumped because I was not expecting that.







Now I gotta figure out what caused the big-ass jump. I noticed the loop 2 settings keep going back to default, and I don't know why or when it happens.













I haven't played around with the settings for a while. I use the factory PowerPlay tables: 2710/2610 MHz high/low, 1175 mV on the voltage slider, 2150 MHz memory with fast timings, and +15% power for 323W.

OK, so I crossed my fingers and ran it again. Hell yeah!! 24,121!! Max core power 267W.
I'm seriously happy right now. I never thought I would get past 24k with my GPU.
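For reference, the Adrenalin power slider scales the base PPT from the card's PowerPlay table, so "+15 power for 323W" implies a base limit of roughly 281W. A back-of-the-envelope sketch (the 281W base is inferred from the numbers in the post, not read from any BIOS):

```python
# Sketch: how the Adrenalin power slider maps to an effective PPT.
# The base PPT comes from the card's PowerPlay table; 281W here is just
# the value implied by the "+15% -> 323W" numbers above.

def effective_ppt(base_w: float, slider_pct: float) -> float:
    """Effective board power limit after applying the +% slider."""
    return base_w * (1 + slider_pct / 100)

print(round(effective_ppt(281, 15)))   # ~323 W with the slider at +15%
print(round(effective_ppt(281, 0)))    # 281 W at the stock slider position
```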


----------



## tommyd2k

T[]RK said:


> Hello!
> 
> I'm still unable to buy any of the new GPUs (RX 6000 or RTX 3000), but for obvious reasons prices on RX 6900 XTs with waterblocks are coming down (~$1500 now), though only on waterblocked GPUs from two brands: GIGABYTE AORUS Xtreme & XFX Speedster ZERO. I DON'T like water in my system, so maybe I can get one of these cards, remove the waterblock, and find a proper (GIGABYTE or XFX) air cooler? My question is simple: do the Xtreme and ZERO share a PCB with other series? I think I saw the Xtreme PCB, but I haven't found the XFX ZERO's yet.


For under $1500 you can get an XTX-U Red Devil Ultimate. Pretty sure they have them on Amazon right now. Buying a factory water-cooled card and modding an air cooler onto it is a little ridiculous.
Here ya go:


Amazon.com


----------



## RichieRich25

tommyd2k said:


> I feel the need to share this. My 6900 XT Red Devil (non-XTX-U) has always hit a wall around 23,300 in Time Spy. Using MPT or the EVC2SX in all sorts of configs, my previous max score was 23,793, and that was with wattage I wouldn't use except for benchmarking.
> 
> I've been using the EVC2SX to set a 1.2V limit, with Isense duty gain at 8.0, duty nom at 0, and Isense ant at 0.67.
> When I got the EVC2 I couldn't find many guides on how to use it, and with those settings the card could go to 2700 MHz and stay under 300W. Time Spy would score around 23,300, and I just left it at that for the last 6 months.
> 
> I just beat my high score with 23,935 points (so close to 24k), and I am pretty pumped because I was not expecting that.
> View attachment 2560846
> 
> Now I gotta figure out what caused the big-ass jump. I noticed the loop 2 settings keep going back to default, and I don't know why or when it happens.
> View attachment 2560842
> View attachment 2560843
> 
> I haven't played around with the settings for a while. I use the factory PowerPlay tables: 2710/2610 MHz high/low, 1175 mV on the voltage slider, 2150 MHz memory with fast timings, and +15% power for 323W.
> 
> OK, so I crossed my fingers and ran it again. Hell yeah!! 24,121!! Max core power 267W.
> I'm seriously happy right now. I never thought I would get past 24k with my GPU.
> View attachment 2560850


Still trying to break 24,000, but I can't; I just keep getting stuck. I did break my Superposition and Time Spy records by like 30 points.
UNIGINE Superposition benchmark score
I scored 22 327 in Time Spy
I had to bump the power limit up to 360W to achieve these numbers.


----------



## RichieRich25

I have a friend running the Red Devil 6900 XT, and in 3DMark his core clock gets stuck at 2554 no matter what values he enters in the AMD software. His games all clock like they're supposed to, but if he sets 2600-2700, his boost is locked to 2554. Has anybody ever had an issue like this with 3DMark?


----------



## deadfelllow

RichieRich25 said:


> I have a friend running the Red Devil 6900 XT, and in 3DMark his core clock gets stuck at 2554 no matter what values he enters in the AMD software. His games all clock like they're supposed to, but if he sets 2600-2700, his boost is locked to 2554. Has anybody ever had an issue like this with 3DMark?


Probably power limited


----------



## RichieRich25

deadfelllow said:


> Probably power limited


Definitely not power limited; he's running a 325W PL plus 15%. It's almost like he's capped or stuck at that number. It never moves from there during either test.


----------



## zwer54

RichieRich25 said:


> Definitely not power limited, he's running 325 pl plus 15%. It's almost like he's capped or stuck at that number. It never moves from there during both tests


What are his temperatures?


----------



## deadfelllow

RichieRich25 said:


> Definitely not power limited, he's running 325 pl plus 15%. It's almost like he's capped or stuck at that number. It never moves from there during both tests


It's power limited, because you said it's only 2554 in 3DMark. With my 6900 XT EE at stock settings, 332W PPT + 15% PL (2750-2850 clocks), my effective clock doesn't even reach 2750 MHz in 3DMark. But when I set my PL to 400W + 15%, it can reach up to 2840 MHz in certain scenes.


----------



## RichieRich25

zwer54 said:


> What are his temperatures?


Temps are 58 average, with the hotspot not going past 88. But it's locked at the same number no matter what values he enters. If he sets 2400 and 2500, it still goes to 2550 and stays there.


----------



## jonRock1992

Upgraded to 5800X3D and flashed the XTLC vBIOS to my 6900 XT RDU. Got a nice little boost in SOTTR benchmark.
1440p before:








1440p after:








4k before:








4k after:


----------



## RichieRich25

deadfelllow said:


> It's power limited, because you said it's only 2554 in 3DMark. With my 6900 XT EE at stock settings, 332W PPT + 15% PL (2750-2850 clocks), my effective clock doesn't even reach 2750 MHz in 3DMark. But when I set my PL to 400W + 15%, it can reach up to 2840 MHz in certain scenes.


Three of my friends and I have the exact same card. With the same settings, we each see boost go to around 2660, with an average clock around 2618. His card in Time Spy never moves from 2550; it stays there the whole time with no fluctuations. His total power is around 370W watching the stats in HWiNFO64. I'm going to have him uninstall and reinstall 3DMark.


----------



## Alex999

Guys, I'm new to this forum and a newbie at OC.
I'll post my system and results, and I hope someone can help me squeeze a little more out of it (for Time Spy purposes).
Apologies, but I'll copy from my Steam profile since that's easier for me, along with my watt/power info.

INFO:
Time Spy score: GPU 21,700-21,882, CPU 13,900-14,550 <- dunno why the CPU fluctuates so much; room temp is around 22-24C
GPU/CPU are 59-71C, never the same
It can push 4.75 GHz all-core at 1.4V (HWiNFO + AMD Adrenalin). I've never tried lowering the voltage; I just use the -useallavailablecores launch option in Steam for games which don't push the GPU past 75%, e.g. Lost Ark.
It also reaches 4.95 GHz per core (I see all cores 0-11 going to 4.95 GHz after a while in HWiNFO).
6900 XT: power limit +15%, 500 MHz min / 2725 MHz max (peaks at 2450 MHz in the Time Spy info), fan curve at 80% from 0-100C+

✔𝐌𝐁: Gigabyte Aorus pro V2
✔𝐂𝐏𝐔: AMD Ryzen 5900x 4.95ghz stock with just -30 off curve optimiser / 60C gaming (NZXT x63)
✔𝐆𝐏𝐔: XFX 6900 XT 2705mhz 355W 70C max, manual OC with fan curve low then ramping 50% around 62C.
Stock BIOS Part Number
113-69XB6SSB1-D01 289W - both bios are the same
✔𝐑𝐀𝐌: G.SKILL 3600C16Q32GTZN 4SR x 8gb IF 1800 / 34C DIMM IDLE / 39C gaming
✔𝐒𝐒𝐃: Adata XPG 1TB / 970 EVO 500gb / 2x 1TB QVO 860 / 4TB SSHD / 2TB HDD
✔𝐏𝐒𝐔: Corsair HX 1200 Platinum
✔𝐂𝐚𝐬𝐞: Thermaltake View 51 GT custom 4x200mm + 16x 120mm fans +lian li strimer plus gpu+mobo
✔𝐎𝐒: Windows 10 current
---------------------------------------------------------------------------------------------------------
✔𝐌𝐨𝐧𝐢𝐭𝐨𝐫: G7 Odyssey 27" 1440p 240hz + arzopa 15,6" 60HZ FHD IPS Freesync ON
✔𝐇𝐞𝐚𝐝𝐬𝐞𝐭: Asus Theta / S880DB
✔𝐌𝐨𝐮𝐬𝐞: Roccat Kone XP Custom grip
✔𝐊𝐞𝐲𝐛𝐨𝐚𝐫𝐝: Steelseries Apex Pro
✔G𝐚𝐦𝐞𝐩𝐚𝐝: Xbox One + PS4

I don't run any software besides Steam and AMD Adrenalin. The NZXT software causes crashes/grey screens in games, so I have to restart the GPU drivers; apart from that it never crashes in any game, and I can run any bench without issues (I've never tried Prime95).

Q: Can I safely use MPT to change the wattage from 289 to 310 and the TDC from 320 to 340? Or more, if someone has better knowledge of my BIOS version.

Errors along the way (I've been using this system since Dec 2020, plus the GPU since Dec 2021): sometimes the network disconnects and reconnects; I reconnect/switch cables and the problem solves itself.
Upgrading from Win 10 to Win 11 never caused problems, but if I install Win 11 from the media creation tool, I always get a BSOD crash while downloading GPU drivers, so I'm back on Win 10.
In just one game, Lost Ark, the GPU never goes above 75% and 50C; it basically idles, causing stutters in-game when I teleport (fps drops to 1, then slowly back to 240). So I either use my second mini monitor or Steam launch options such as -notexturestreaming + -useallavailablecores, and suddenly the fps is a steady 150-380. But it turns out only -notexturestreaming is needed, as I discovered while writing/editing this post.

I have a second build, which my gf now uses. It was built during the pandemic, so the CPU is on the rough side, but it earned its money very, very well (BFV at 80% 1080p scale, all low, 60 fps),
so you can ask me anything about it running any game, with or without the 1660 Super.
✔𝐌𝐁: MSI MPG x570
✔𝐂𝐏𝐔: 3400G 4.30 ghz all core / APU +200mhz / 49C-55C min max + Wraith Prism (from a previous 3800x)
✔𝐆𝐏𝐔: Gigabyte 1660 Super 61C
✔𝐑𝐀𝐌: Corsair Dominator 3600c18 2xDR x8gb
✔𝐒𝐒𝐃: 2x Intel 660 1TB
✔𝐏𝐒𝐔: CM MWE GOLD 750
✔𝐂𝐚𝐬𝐞: SilentiumPC Signum SG7V EVO TG
✔𝐎𝐒: Windows 10
---------------------------------------------------------------------------------------------------------
✔𝐌𝐨𝐧𝐢𝐭𝐨𝐫: AOC 24" 1080 144
✔𝐇𝐞𝐚𝐝𝐬𝐞𝐭: Razer BlackShark V2 PRO
✔𝐌𝐨𝐮𝐬𝐞: Razer Viper Ultimate
✔𝐊𝐞𝐲𝐛𝐨𝐚𝐫𝐝: BlackWidow V3
✔G𝐚𝐦𝐞𝐩𝐚𝐝: Xbox One + PS4


----------



## J7SC

jonRock1992 said:


> Upgraded to 5800X3D and flashed the XTLC vBIOS to my 6900 XT RDU. Got a nice little boost in SOTTR benchmark.
> 1440p before:
> View attachment 2560890
> 
> 1440p after:
> View attachment 2560891
> 
> 4k before:
> View attachment 2560892
> 
> 4k after:
> View attachment 2560893


Nice ! 
Quick question re. your post: is that 'AMD FidelityFX CAS' option only available in the full game? I ask because a couple of days ago I did a brief SOTTR comparison at 4096x2160 (mild OC, stock or mild vBIOS) in another thread using the SOTTR benchmark from Steam's demo version, which doesn't have that 'AMD FidelityFX CAS' option. With SOTTR bundling ray tracing and DLSS for NVIDIA, I'd like to get to something similar with AMD.


----------



## deadfelllow

Is this a good score for the 4K Superposition benchmark? 6900 XTXH.

Or what is the baseline for 6900 XTXH cards?

(Ignore the GPU min/max temps; I have a GTX 1060 in the 2nd PCIe slot xD)


----------



## J7SC

deadfelllow said:


> For 4k superposition benchmark this score is good? 6900xtxh
> 
> Or what is the baseline for 6900xtxh cards
> 
> (ignore gpu min max temp i have gtx 1060 on 2nd pci e slot xD)
> 
> View attachment 2560919


Excellent - the highest I've seen so far with a single 6900 XT ! 3090s, on the other hand...


----------



## deadfelllow

J7SC said:


> Excellent - highest I've seen before w/ single 6900XT ! 3090s on the other hand...


This is with the preview drivers. I also changed Max FCLK and Phyclock in MPT; idk if that affects my score. I didn't change the voltage; it was stock.

My freqs like this


----------



## Godhand007

deadfelllow said:


> For 4k superposition benchmark this score is good? 6900xtxh
> 
> Or what is the baseline for 6900xtxh cards
> 
> (ignore gpu min max temp i have gtx 1060 on 2nd pci e slot xD)
> 
> View attachment 2560919


You should be able to get much more than that with your core clocks settings. I was able to beat this with my ****ty card on 6950XT bios. I think you might be CPU limited, not sure though. Mind you, these core clocks are not GT2 stable (100 Mhz less).


----------



## deadfelllow

Godhand007 said:


> You should be able to get much more than that with your core clocks settings. I was able to beat this with my ****ty card on 6950XT bios. I think you might be CPU limited, not sure though. Mind you, these core clocks are not GT2 stable (100 Mhz less).
> 
> View attachment 2560929


Yes, I might be CPU limited, because whatever I do, the power draw sadly won't go higher than 420W (even with 1.25V).


----------



## CHUNKYBOWSER

Does anyone have an XFX 6900 XT Limited Black BIOS?

I just picked one up used, and it's showing as a standard XTX chip with lower boost clocks. I've heard of some people getting XFX cards with the wrong BIOS on them. I can't find the Limited Black BIOS on TechPowerUp.

Any help? 

Never mind. I had problems getting the system to boot with the other BIOS, but I finally got it to work. It turns out the second BIOS on the card is the XTX-H BIOS.

What a strange decision by XFX.


----------



## J7SC

Godhand007 said:


> You should be able to get much more than that with your core clocks settings. I was able to beat this with my ****ty card on 6950XT bios. I think you might be CPU limited, not sure though. Mind you, these core clocks are not GT2 stable (100 Mhz less).
> 
> View attachment 2560929


474 W !


----------



## deadfelllow

Godhand007 said:


> You should be able to get much more than that with your core clocks settings. I was able to beat this with my ****ty card on 6950XT bios. I think you might be CPU limited, not sure though. Mind you, these core clocks are not GT2 stable (100 Mhz less).
> 
> View attachment 2560929


With this CPU, I guess this is as far as I can go.


----------



## J7SC

Godhand007 said:


> You should be able to get much more than that with your core clocks settings. I was able to beat this with my ****ty card on 6950XT bios. I think you might be CPU limited, not sure though. Mind you, these core clocks are not GT2 stable (100 Mhz less).
> 
> View attachment 2560929





deadfelllow said:


> With this cpu this far i can go i guess.
> 
> View attachment 2560968


I'm getting some popcorn !


----------



## tommyd2k

RichieRich25 said:


> Still trying to break 24000 but I cant. i just keep getting stuck. Just broke my superposition record and timespy record by like 30 points.
> UNIGINE Superposition benchmark score
> I scored 22 327 in Time Spy
> had to bump up power limit to 360 to achieve these numbers.


I'd say grab an EVC2SX if you can. I got mine last year, but I didn't play with it much and it was sitting in a drawer. The cool thing about it (no pun intended) is that with a couple of changes the card will hit 2700 MHz without raising the wattage. The downside is that every time you power off the system (reboots are OK) the changes revert. But there's scripting, so you can make it automatic or one-click. 
I'm scoring 24k now with 24/7 settings. I'd been struggling to get above 23.5k since I got the card a year ago. 
This thread was the extent of my reading on it until just recently: https://www.elmorlabs.com/forum/topic/6900xt-power-limit-tricks 
So my tuning options were limited.


----------



## CHUNKYBOWSER

23,161 Time Spy graphics score on my air-cooled XTX-H. How is that? What are others getting on air?


----------



## cfranko

Is it possible to flash a 6950 XT BIOS onto an XTX 6900 XT?


----------



## cfranko

supergt99 said:


> OK, I think I figured it out: mine is not the Ultimate, so I have 1.175V; the BIOS is for 1.200V. And no, SAM isn't working for either.


Were you able to flash the 6950 XT BIOS onto the XTX 6900 XT?


----------



## Sufferage

CHUNKYBOWSER said:


> 23,161 Timespy graphics score on my air cooled XTX-H. How is that? What are others getting on air?
> 
> View attachment 2560974












My XTX non-H on air; I'd guess you should still have some headroom left...


----------



## supergt99

cfranko said:


> Were you able to flash 6950 XT bios on XTX 6900 xt?


Can't get it to work. The VRAM speed seems to work, but the GPU goes into lockdown at 500 MHz. SAM doesn't work either.


----------



## zwer54

Godhand007 said:


> You should be able to get much more than that with your core clocks settings. I was able to beat this with my ****ty card on 6950XT bios. I think you might be CPU limited, not sure though. Mind you, these core clocks are not GT2 stable (100 Mhz less).
> 
> View attachment 2560929


I'm continually getting lower scores than you guys; I'm not sure what it's all about, but I just can't pull more from this GPU. 
This one is with the 6950 XT BIOS, but it runs pretty much the same as the 6900 XT BIOS except for the RAM clocks.


----------



## J7SC

zwer54 said:


> Continuously I'm getting lower scores than you guys, not sure what is all about but just can't pull more from this gpu.
> This one is with 6950 XT bios. But it runs pretty much the same as 6900XT bios except ram clocks.


I take it yours is an XTXH, not an XTX, by birth, noting your comment about the 6950 XT BIOS and seeing Smart Access Memory / ReBAR enabled?


----------



## zwer54

J7SC said:


> I take it yours is a XTXH not XTX, by birth, noting your comment about the 6950XT bios and seeing smart memory / r_BAR enabled ?


This is a Toxic Extreme Edition by default; I flashed its silent BIOS with the 6950 Toxic Limited Edition BIOS. Once booted into Windows, open MPT, load your original 6900 XT BIOS, and reboot. Then you can continue overclocking like before, but you'll be able to OC the RAM much higher (it won't work with fast timings). Also, ReBAR works just like before, no issues whatsoever. But I'm flashing with almost the same GPU; there's only one letter of difference in the device ID and everything else is pretty much the same, so I guess that's why it works so well together...


----------



## cfranko

zwer54 said:


> This is Toxic Extreme by default, I flashed silent bios with 6950 toxic limited edition bios. Once booted into windows, open MPT and load your original 6900XT bios and reboot. Then you can continue overclocking like before but you will be able to OC ram much higher (it won't work with fast timings). Also, Re-Bar works just like before, no issues whatsoever. But I am flashing with almos the same gpu, there's only 1 letter difference in device ID, everything else is pretty much the same, so I guess that's why it works so well together...


Why does GPU-Z say you have a 6900 XT? Shouldn't it say 6950 XT, since you have the 6950 XT BIOS?


----------



## jonRock1992

cfranko said:


> Why does GPU-Z say you have a 6900 xt? Shouldnt it say 6950 XT because you have the 6950 XT bios


It still says 6900 XT for me as well if I use the 6950 XT vBIOS.


----------



## J7SC

zwer54 said:


> This is Toxic Extreme by default, I flashed silent bios with 6950 toxic limited edition bios. Once booted into windows, open MPT and load your original 6900XT bios and reboot. Then you can continue overclocking like before but you will be able to OC ram much higher (it won't work with fast timings). Also, Re-Bar works just like before, no issues whatsoever. But I am flashing with almos the same gpu, there's only 1 letter difference in device ID, everything else is pretty much the same, so I guess that's why it works so well together...


...thanks ! Flashing has to wait a week due to a work project (and 'rust' in my Ubuntu procedures  ), but I did find the nearly correct vBIOS at TPU last night: same vendor model and mostly the same PCB, just the 6950 XT version...

...btw, do you folks disable Smart Access Memory / ReBAR in the system BIOS before flashing, then re-enable it after?


----------



## jonRock1992

J7SC said:


> ...thanks ! Flashing has to wait a week re. work project (and 'rust' on my Ubuntu procedures  ), but I did find the nearly correct vbios at TPU last night - same vendor model and mostly same PCB, just 6950X...
> 
> ...btw, do you folks disable Smart Memory / r_BAR in system bios first before flashing, then re-engage it after ?


The problem I had was that I couldn't even enter the BIOS when I was using the 6950 XT vBIOS. Probably a Dark Hero issue lol. I'm not using that board anymore, but I haven't tested the 6950 XT vBIOS yet with my new mobo. But with my Dark Hero, the motherboard forced CSM on when the 6950 XT vBIOS was in use; therefore, SAM didn't work.


----------



## J7SC

jonRock1992 said:


> The problem I had was that I couldn't even enter the BIOS when I was using the 6950 XT vBIOS. Probably a Dark Hero issue lol. I'm not using that board anymore, but I haven't tested the 6950 XT vBIOS yet with my new mobo. But with my Dark Hero, the motherboard forced CSM on when the 6950 XT vBIOS was in use; therefore, SAM didn't work.


Tx ! I'm willing to risk it on my Dark Hero because the GPU has a dual bios (= '_don't leave home without it_').


----------



## CHUNKYBOWSER

Does anyone know if it's possible to change the hot spot throttling temperature, even if it requires a BIOS edit? Not sure why the XFX card has a limit of 95 degrees when the rest are 110.


----------



## deadfelllow

CHUNKYBOWSER said:


> Does anyone know if it's possible to change the hot spot throttling temperature, even if it requires a BIOS edit? Not sure why the XFX card has a limit of 95 degrees when the rest are 110.


Your card might be the XTXH instead of XTX. Generally XTXH chips are [email protected]

What is the exact model of your card?


----------



## supergt99

I think there's a part of the BIOS we don't have access to. When we flash the so-called "BIOS" on these cards, it's only things like the MPT settings. This is just speculation.


----------



## zwer54

supergt99 said:


> i think theres part of the bios we dont have access to. when we flash the so called "bios" on the cards, its only the things like MPT settings. this is just speculation.


It flashes everything except the device ID. My GPU's device ID is 73AF, and the 6950 XT BIOS I was flashing has device ID 73A5. The flashing process clearly showed the device ID being flashed as well, but when I flashed back it reported going from 73AF to 73AF, which means the first flash never actually changed it.
This device ID is what tells the system whether it's a 6900 XT or a 6950 XT.

And it's not just MPT settings: no MPT setting would let me overclock the RAM past 2150 MHz, and now it goes up to 2490.
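The device IDs mentioned above are what tools like GPU-Z read from PCI config space. A tiny lookup sketch using only the two IDs from the post (the table is illustrative, not an exhaustive Navi 21 list):

```python
# Sketch: mapping the PCI device IDs discussed above to marketing names.
# Only the two IDs from the post are included; this is illustrative only.

NAVI21_IDS = {
    0x73AF: "Radeon RX 6900 XT (XTXH)",
    0x73A5: "Radeon RX 6950 XT",
}

def identify(device_id: int) -> str:
    """Return the marketing name for a known device ID, else a fallback."""
    return NAVI21_IDS.get(device_id, "unknown Navi 21 variant")

print(identify(0x73AF))  # what the system still reports after the flash
print(identify(0x73A5))  # what a genuine 6950 XT reports
```

Since the flash leaves the ID at 0x73AF, the OS and GPU-Z keep reporting a 6900 XT even with the 6950 XT BIOS contents on board.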


----------



## Godhand007

zwer54 said:


> Continuously I'm getting lower scores than you guys, not sure what is all about but just can't pull more from this gpu.
> This one is with 6950 XT bios. But it runs pretty much the same as 6900XT bios except ram clocks.


What are your RAM timings?


----------



## J7SC

zwer54 said:


> It flashes everything except device ID. My gpu device ID is 73AF and 6950XT that I was flashing has a device ID 73A5. It clearly shows during flashing process that device ID has been flashed as well but when I was flashing back it showed it was flashed from 73AF to 73AF which means the first time it didn't change.
> This device ID tells the system if is 6900XT or 6950XT.
> 
> And it is not just MPT settings as no MPT settings would allow me to overclock ram over 2150 mhz and now it goes up to 2490.


I'll have to try it for myself anyway, but is the flash worth it, re. faster VRAM at perhaps slower timings?


----------



## zwer54

Godhand007 said:


> What are your RAM timings?


Changes in RAM settings mainly affect the CPU score in Time Spy, not the GPU score. I'm running 4200 16-16-16-36 1T in Gear 2. Switching to Gear 1 reduces latency a bit, but the score doesn't improve by much, so it's not worth it...

The RAM performs very well in all the benchmarks I've tried...


----------



## Alexxxx#€

Hello, good morning. Does this happen to anyone else? I keep getting "Default Radeon WattMan settings have been restored." The only thing I've modified is the graphics card fans, and it almost always resets back to default. I don't know what to do, because otherwise the PC is perfect.


----------



## Godhand007

zwer54 said:


> Changes in ram settings mainly affect cpu score in timespy but not the gpu score. Running at 4200 16 16 16 36 1T in gear 2. Changing to gear 1 I can reduce a bit of latency but the score doesn't improve by much so not worth...
> 
> Ram is performing very well in all benchmarks I tried...


I know. I was just wondering whether you forgot to turn on XMP; it has happened to me before.


----------



## zwer54

Godhand007 said:


> I know. Was just wondering whether you forgot to turn on XMP, it has happened before with me.


Sure, well, every try is a good one. It's just the graphics score that's the problem in my case. It just doesn't give scores as high as it does for others... I believe it should give a bit more, but I can't figure out the reason...


----------



## 99belle99

zwer54 said:


> sure, well every try is a good one. Just the graphic score is the problem in my case. It just doesn't give as high scores as for others... I believe it should give a bit more but can't figure out what is the reason...


Well, be grateful you can hit that score. My reference card with the stock cooler can only get a 23,300 graphics score in Time Spy.


----------



## zwer54

99belle99 said:


> Well be grateful you can hit that score. My reference with stock cooler can only get 23,300 graphics score in TimeSpy.


As you say, your reference card with the stock cooler gives 23,300, while mine is a Toxic Extreme Edition with a custom waterblock and liquid metal and I only get a 24,700 graphics score in Time Spy, so I am not grateful 

And the problem is that I can achieve very high clocks with no high temps at all, but still no results...


----------



## tommyd2k

RichieRich25 said:


> Three of my friends and I have the exact same card. With the same settings, we each see boost go to around 2660, with an average clock around 2618. His card in Time Spy never moves from 2550; it stays there the whole time with no fluctuations. His total power is around 370W watching the stats in HWiNFO64. I'm going to have him uninstall and reinstall 3DMark.


Is it happening in other benchmarks too? Try FurMark or something and see if it goes higher.


----------



## jonRock1992

J7SC said:


> I'll have to try anyways for myself, but is the flash worth it re, faster VRAM at perhaps slower timings ?


With the LC vBIOS I'm running 2362 MHz at fast timings level 2, and I see an increase in the Shadow of the Tomb Raider bench even at 1440p.


----------



## No-one-no1

This "Temp Vmin" trick is so good! 
I did some quick tests; the temperature that triggers the "Vmin Low / High" is the edge temp (not hotspot).
In my case, on the stock V curve, I start getting up to about 48C at higher frequencies (>2500 MHz), with the power limit increased of course.
So setting TVmin to 42 and Hysteresis to 3 gives me a 3-degree margin before the Vmin limit triggers (42 + 3 hysteresis = 45, which is 3 below 48).
With Vmin Low set to 800 and High to 1200, I get the magic 1.2V at high frequencies, and as soon as the load stops, it goes back to the stock V curve.

Not 100% ideal, but a very nice hack that is fully daily-drivable! <3

Now I just need to figure out how much it's worth pushing this old 6900 XT, when the 6950 XTs are going to be faster in all the benchmarks anyway :/



supergt99 said:


> View attachment 2560677
> 
> 
> This is not my settings right now, but i was able to pull this pic from another board. enable TEMP_DEPENDENT_VMIN
> set your voltage in the red box for GFX vmin low and high the same. and reboot. thats all i do for my red devil 6900xt.
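The TVmin + hysteresis arithmetic above (42 + 3 = 45, staying 3 degrees under the 48C load temp) behaves like a small hysteresis state machine. A sketch mirroring the post's numbers; the real logic lives in the card's firmware, so treat this purely as a model:

```python
# Model of the TempDependentVmin behaviour described above: once the edge
# temperature exceeds TVmin + hysteresis, the higher Vmin floor applies;
# it releases once the temperature falls back below TVmin.
# Numbers mirror the post (TVmin=42, hysteresis=3, Vmin 800/1200 mV).

TVMIN, HYST = 42, 3
VMIN_LOW_MV, VMIN_HIGH_MV = 800, 1200

def step(edge_temp_c: float, high_active: bool) -> bool:
    """Update the 'high Vmin active' state for one telemetry sample."""
    if not high_active and edge_temp_c >= TVMIN + HYST:
        return True          # edge heated past 45C -> 1.2V floor engages
    if high_active and edge_temp_c < TVMIN:
        return False         # cooled back below 42C -> stock curve again
    return high_active

active = False
for t in (40, 44, 46, 48, 43, 41):   # edge temps through a load ramp and cooldown
    active = step(t, active)
    print(t, VMIN_HIGH_MV if active else VMIN_LOW_MV)
```

Note how 43C keeps the high floor engaged (it is below the 45C trigger but above the 42C release), which is exactly the margin the hysteresis setting buys you.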


----------



## supergt99

I'll have to play with the temp portion of it. It seems like my clocks go down when I up the voltage? I'm using a 340W PL. Should I go higher with the limit?


----------



## rodac

Alexxxx#€ said:


> Hello, good morning. Does this happen to anyone else? I keep getting "Default Radeon WattMan settings have been restored." The only thing I've modified is the graphics card fans, and it almost always resets back to default. I don't know what to do, because otherwise the PC is perfect.


Yes, this happens to me all the time.


----------



## No-one-no1

If the GPU isn't staying at the max set frequency (minus about 50 MHz or so), you're hitting some limit.

Or the CPU/RAM is bugging out and not sending data to the GPU consistently. But in that case, the GPU will clock all the way down to its idle state while it waits for something to do again.



supergt99 said:


> I’ll have to play the temp portion of it. Seems like my clocks go down when I up the voltage? Using 340pl. Should I go higher with the limit?
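The rule of thumb above (an effective clock sitting more than ~50 MHz under the set maximum means some limit is engaging) is easy to script against logged clocks. A hypothetical sketch; the sample readings are made up and no real logging tool's field names are assumed:

```python
# Sketch of the rule of thumb above: if the average effective clock sits
# more than ~50 MHz under the set maximum, some limit (power, current,
# temperature) is engaging. The sample readings are invented.

SLACK_MHZ = 50

def is_limited(set_clock_mhz: float, effective_clocks_mhz: list[float]) -> bool:
    """True if the average effective clock falls outside the normal slack."""
    avg = sum(effective_clocks_mhz) / len(effective_clocks_mhz)
    return avg < set_clock_mhz - SLACK_MHZ

print(is_limited(2700, [2655, 2662, 2649]))  # within ~50 MHz of 2700 -> False
print(is_limited(2700, [2554, 2554, 2554]))  # pinned far below -> True
```

The second case matches the stuck-at-2554 symptom discussed earlier in the thread: a clock that never moves, well below the set value, points at a limit rather than normal boost behaviour.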


----------



## jonRock1992

I'm not sure if anyone else has had this experience, but fast timings level 2 is completely stable on the XTLC vBIOS and not on the 6950 XT vBIOS. So the timings must be tighter on the 6950 XT vBIOS.


----------



## criss9527

What do the parameters DcTol (mV) and DcBtcMAX mean?
My DcTol|DcBtc is 58; the documentation I read says 48.


----------



## ptt1982

Hi guys! I'm back to OC after my undervolting adventures, as I noticed that playing V-synced at 60 fps keeps the card's power very low even when OC'd.

I wanted to ask if anyone has tried increasing the SOC voltage to 1175 mV or 1200 mV on XTX cards in MPT?

My Red Devil seems more sensitive to SOC voltage than to core voltage, but before upping it, I wanted to ask the more experienced peeps around here.


----------



## Gabriel Luchina

Hi, I have a Sapphire 6900 XT SE+, I noticed that recently it started to crash in COD Warzone and I need to restart the PC.

I've formatted Windows (11), updated the BIOS and nothing works. If I put an RTX 2080Ti in its place, I play for hours with no problems.

If I put the RX 6900 XT in a setup I have with AMD, it also works without problems.

This problem I noticed happens on two setups, one with an Intel 10900k and the other with a 12900k.

If I run stress tests like furmark, it runs for hours and no problems, it's just in COD.

I do not know what else to do


----------



## supergt99

Is anyone running more than 1200 mV daily on air?


----------



## Azazil1190

Gabriel Luchina said:


> Hi, I have a Sapphire 6900 XT SE+, I noticed that recently it started to crash in COD Warzone and I need to restart the PC.
> 
> I've formatted Windows (11), updated the BIOS and nothing works. If I put an RTX 2080Ti in its place, I play for hours with no problems.
> 
> If I put the RX 6900 XT in a setup I have with AMD, it also works without problems.
> 
> This problem I noticed happens on two setups, one with an Intel 10900k and the other with a 12900k.
> 
> If I run stress tests like furmark, it runs for hours and no problems, it's just in COD.
> 
> I do not know what else to do


It's probably the game, a combination of the AMD drivers and the latest COD update.
Do you get crashes only in COD?
Try restarting the shader installation from within the game


----------



## No-one-no1

Did some more quick testing and fooling around. Could not get 2920MHz (set) to work with the temp min V hack. Load just hits the gpu at the stock 1175mV limit and crashes before the temp increases to bump up the volts. I did multiple benchmarks at 2880MHz (set) with 1250mV. Runs everything I tested without any issues.

Dialed that down to 2850MHz (set) with 1250mV, and gamed the whole day today on that.
This is a legacy 6900XT on our ElmTech custom test cooling system to air, with air temp up to 30°C. Junction temp peaks around 81°C or so.
Junction temp is super important: the higher it goes, the lower you need to keep your volts to avoid damaging things. So I would not recommend blindly running 1250mV at high temps.

Looking at the scaling, I think I should be able to daily 2950MHz (set) at 1300mV. But I would have to put the temp limit so low that it would effectively always be at 1300mV, and have unnecessarily high powerdraw at idle. :/



supergt99 said:


> Is anyone running more than 1200mV daily on air?


----------



## Godhand007

Gabriel Luchina said:


> Hi, I have a Sapphire 6900 XT SE+, I noticed that recently it started to crash in COD Warzone and I need to restart the PC.
> 
> I've formatted Windows (11), updated the BIOS and nothing works. If I put an RTX 2080Ti in its place, I play for hours with no problems.
> 
> If I put the RX 6900 XT in a setup I have with AMD, it also works without problems.
> 
> This problem I noticed happens on two setups, one with an Intel 10900k and the other with a 12900k.
> 
> If I run stress tests like furmark, it runs for hours and no problems, it's just in COD.
> 
> I do not know what else to do


Could be due to RAM or CPU instability in combination with a 6900 XT. Remember 6900XT is going to push your system much harder than an RTX 2080ti.


----------



## Counterassy14

Gabriel Luchina said:


> Hi, I have a Sapphire 6900 XT SE+, I noticed that recently it started to crash in COD Warzone and I need to restart the PC.
> 
> I've formatted Windows (11), updated the BIOS and nothing works. If I put an RTX 2080Ti in its place, I play for hours with no problems.
> 
> If I put the RX 6900 XT in a setup I have with AMD, it also works without problems.
> 
> This problem I noticed happens on two setups, one with an Intel 10900k and the other with a 12900k.
> 
> If I run stress tests like furmark, it runs for hours and no problems, it's just in COD.
> 
> I do not know what else to do


Not sure if it would cause this issue but have you tried turning off Windows 11 exploit protection? 
It really messes with your GPU sometimes, especially when you are running other benchmarks as well.
(I had it kick in when playing Destiny and it would cause weird framerates and crashes)

btw. if anybody has this issue where TimeSpy or Firestrike would suddenly no longer go past 60fps you can thank exploit protection for that. It saves you from stresstesting your PC.


----------



## RichieRich25

Gabriel Luchina said:


> Hi, I have a Sapphire 6900 XT SE+, I noticed that recently it started to crash in COD Warzone and I need to restart the PC.
> 
> I've formatted Windows (11), updated the BIOS and nothing works. If I put an RTX 2080Ti in its place, I play for hours with no problems.
> 
> If I put the RX 6900 XT in a setup I have with AMD, it also works without problems.
> 
> This problem I noticed happens on two setups, one with an Intel 10900k and the other with a 12900k.
> 
> If I run stress tests like furmark, it runs for hours and no problems, it's just in COD.
> 
> I do not know what else to do


COD has always been a frustrating experience when it comes to stability. My 2080 Ti and 5700 XT were a nightmare when it came to dev errors in that game. I have to say I've never crashed in Warzone since getting the 6900 XT, but I have a friend with damn near the same PC as me (besides having a 3900X vs my 5950X) and he can't even load Warzone. The things to check are the CPU overclock and RAM stability on the Intel PCs; make sure each power connector on the 6900 XT gets its own PCIe cable and isn't daisy-chained; and make sure the power supply is sufficient.


----------



## D1g1talEntr0py

RichieRich25 said:


> Cod has always been a frustrating experience when it comes to stability. My 2080ti and 5700xt we're a nightmare when it came to dev errors in that game. I have to say I've never crashed in warzone since the 6900xt, but I have a friend that has damn near the same PC as me besides having a 3900x vs my 5950 and he can't even load warzone. The only things you have to check is overclocked on CPU , and ram stability on the Intel PCs, make sure on the 6900xt that each power slot is occupied by its own pcie cable and not daisy chained, and that the power supply is sufficient enough on Intel's


PCs are so weird like that. For the longest time I could not even launch Horizon Zero Dawn. It would just freeze while loading the game. Not one problem with any other games. Turns out, it was this BIOS setting in the RAM security section called "Data Scramble". Obviously, I disabled it because why wouldn't I play with a setting I know nothing about. LOL. Thankfully, I figured it out because that game was great.


----------



## EastCoast

@D1g1talEntr0py @RichieRich25 @Gabriel Luchina 

There is something odd with Vanguard. Every time the match ends and I'm at the vote screen, temps start to go up. Anyone else notice this?


----------



## RichieRich25

EastCoast said:


> @D1g1talEntr0py @RichieRich25 @Gabriel Luchina
> 
> There is something odd with Vanguard. Everytime the match ends and I'm at the vote screen temps would start to go up. Anyone else notice this?


Yes, if you don't run vsync or some type of fps lock. On the vote screen I'll see a massive increase in fps, which in turn increases temps for me. I notice the same thing in Destiny while flying in orbit. I'll see something like 400 fps and my hotspot will go from 55 to like 80 instantly.


----------



## EastCoast

RichieRich25 said:


> Yes, if you don't run vsync or some type of fps lock. On the vote screen I'll see a massive increase in fps which in return increases temps for me. I notice the same thing in destiny while flying in orbit. I'll see something like 400 fps And my hotspot will go from 55 to like 80 instantly


I have the frames capped and it still heats up. I will try vsync then. Thanks.


----------



## PanZwu

jonRock1992 said:


> Upgraded to 5800X3D and flashed the XTLC vBIOS to my 6900 XT RDU. Got a nice little boost in SOTTR benchmark.
> 1440p before:
> View attachment 2560890
> 
> 1440p after:
> View attachment 2560891
> 
> 4k before:
> View attachment 2560892
> 
> 4k after:
> View attachment 2560893


how did you overcome the usb overcurrent protection thing?


----------



## jonRock1992

PanZwu said:


> how did you overcome the usb overcurrent protection thing?


I ditched the Dark Hero and got an MSI X570S Carbon Max.


----------



## RichieRich25

I never see many Port Royal scores posted, but with MorePowerTool I went up around 1000 points. I scored 12 245 in Port Royal


----------



## J7SC

RichieRich25 said:


> Never see any port royal scores but with more powertool i went up around 1000 pointsI scored 12 245 in Port Royal


Nice, but do you have one w/ validation, ie. w/o tessellation mods and the other warning item ?


----------



## RichieRich25

J7SC said:


> Nice, but do you have one w/ validation, ie. w/o tessellation mods and the other warning item ?
> View attachment 2561690


It's because I'm on the preview drivers. They are not approved by 3DMark, which sucks because the drivers are provided by AMD themselves.
Edit: I see it says tessellation mode is modified, but it's not, which is weird... guess I'll do another one


----------



## J7SC

RichieRich25 said:


> It's because I'm on the preview drivers. They are not approved by 3dmark which sucks because the drivers are provided by amd themselves
> Edit I see it says tesselation mode is modified but it's not which is weird.. guess I do another one


...yeah, the preview driver issue is likely the '+1 more' note... tessellation is something different. FYI, I'm asking because on my setup, the preview driver seems slower with ray tracing, but faster elsewhere.

Ed.: The other thing about the preview driver is that it does not turn the screen off (after 15 min of no activity, per my settings), whereas the regular driver does...


----------



## RichieRich25

J7SC said:


> ...yeah, preview driver issue is likely the ' +1 more' note ..tessellation is s.th. different. FYI, I'm asking because on my setup, the preview driver seems slower w/ ray tracing, but faster elsewhere.
> 
> Ed.: The other thing about the preview driver is that it does not turn the screen off (after 15 min of no activity, per my settings), whereby the regular driver does...


Super weird issue where it kept saying tessellation was modified when it wasn't. I had this issue a few months back, and basically I had to delete the 3DMark profile from the AMD software; now the tessellation issue is resolved. I scored 12 215 in Port Royal


----------



## Gabriel Luchina

Azazil1190 said:


> Probably is the game. Combination of amd drivers and the latest cod update.
> Do you get crashes only to cod?
> Try to restart shaders from the game


I believe it's the game too... yep, only in COD. I moved the RX 6900 XT from the 12900K to the AMD 5950X and it works perfectly in COD.

**** drivers, **** luck! haeuheuhaeuhe


----------



## bloot

22.5.2 is out



https://www.amd.com/en/support/kb/release-notes/rn-rad-win-22-5-2


----------



## RichieRich25

bloot said:


> 22.5.2 are out
> 
> 
> 
> https://www.amd.com/en/support/kb/release-notes/rn-rad-win-22-5-2


DDU, here I come. Now my scores can be verified


----------



## RichieRich25

Graphics driver not approved but hopefully they update and it will be. Scores seem the same as preview drivers so I'll take it 








I scored 22 073 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11




www.3dmark.com


----------



## J7SC

RichieRich25 said:


> Graphics driver not approved but hopefully they update and it will be. Scores seem the same as preview drivers so I'll take it
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 22 073 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11
> 
> 
> 
> 
> www.3dmark.com


...usually takes 3DM a day or so


----------



## LtMatt

EastCoast said:


> @D1g1talEntr0py @RichieRich25 @Gabriel Luchina
> 
> There is something odd with Vanguard. Everytime the match ends and I'm at the vote screen temps would start to go up. Anyone else notice this?


It's normal. The last screen in Vanguard is very demanding and GPU power draw goes up significantly for that 15-30 seconds.

I play a lot of COD and Vanguard, etc and never had any stability issues with those games. If you are having stability issues here, I would verify your overclocks/undervolts (CPU/Memory/GPU) are truly stable.


----------



## Azazil1190

LtMatt said:


> It's normal. The last screen in Vanguard is very demanding and GPU power draw goes up significantly for that 15-30 seconds.
> 
> I play a lot of COD and Vanguard, etc and never had any stability issues with those games. If you are having stability issues here, I would verify your overclocks/undervolts (CPU/Memory/GPU) are truly stable.


Agree!
COD is very sensitive to RAM OC most of the time


----------



## RichieRich25

Finally a valid result. I scored 22 125 in Time Spy
350W PL, 2636 min / 2735 max, 2150 mem, 1200 voltage, FCLK at 2000 (anything past 2000 I see artifacts)


----------



## No-one-no1

Has anyone been able to find info on what the "ULV Settings" that were added in MPT 1.3.9 do?








The new MorePowerTool 1.3.9 Final is now available for download | igor'sLAB


The new, final version 1.3.9 of the MorePowerTool offers, in addition to new functions, of course, again the thoroughly revised engine, which significantly simplifies the use of the tool.




www.igorslab.de


----------



## RichieRich25

Does anyone have a 5950X? How are people getting an 18000-19000 CPU score? I'm stuck in the 16000s. I'm using curve optimizer with PPT at 225, TDC 150, EDC 170. Are they using an all-core clock at 4.8?


----------



## alceryes

D1g1talEntr0py said:


> PCs are so weird like that. For the longest time I could not even launch Horizon Zero Dawn.


This was actually a problem with HZD not with your system.
They screwed up caching of textures on the loading screen. End result was that I could play HZD all day if I could quickly load up a game in 4 seconds. The crash happened at the 4-5 second mark. This was on a Titan Xp, BTW. They finally fixed the issue after people complained for 6 months straight.


----------



## deadfelllow

RichieRich25 said:


> Does anyone have a 5950x, how are people getting 18000-19000 CPU score. I'm stuck at the 16000s. I'm using curve optimizer with ppt at 225, tdc 150 edc 170. Are they using an all core clock at 4.8?


It's almost impossible to run 4.8 GHz all-core daily on water cooling. Maybe your PBO settings (curve optimizer) are messed up. Also, your PPT is really low for a 5950X (PBO won't even boost all-core above 4.4 or 4.5 with that TDP limit). Maybe you can try a 4.6 manual all-core OC with 1.35V


----------



## Sam64

RichieRich25 said:


> Does anyone have a 5950x, how are people getting 18000-19000 CPU score. I'm stuck at the 16000s. I'm using curve optimizer with ppt at 225, tdc 150 edc 170. Are they using an all core clock at 4.8?


Nope, try SMT off and you will get there....


----------



## RichieRich25

deadfelllow said:


> It's almost imposible allcore 4.8 GHZ with water cooling for daily. Maybe your pbo settings are messed up(co optimizer). Also your ppt is really low for 5950x( PBO wont even boost allcore above 4.4 or 4.5 with that tdp limit) . Maybe you can try 4.6 manuel oc All core with 1.35V


So I'm boosting to 4.6 all-core and around 4.9 single-core in CPU-Z. What should the PPT and TDC be at to hit better numbers? I thoroughly tested CO, and basically almost all cores can go to -30 except around 4 of them that need -20. No stability issues at all. Max temp is around 72 with around 50% fan speeds. I don't freak out about temps like other people do, but I don't want to go past 80. That's where I'll stop.


----------



## RichieRich25

AMD Ryzen 9 5950X @ 4573.93 MHz - CPU-Z VALIDATOR
this is with PPT 260, TDC 175, EDC 190. Still no difference


----------



## deadfelllow

RichieRich25 said:


> AMD Ryzen 9 5950X @ 4573.93 MHz - CPU-Z VALIDATOR
> this is with ppt 260, 175 tdc 190 edc. Still no difference


I will give you an example. I have a Ryzen 5600X, so let's compare my PBO settings.

With -30 on all cores, even though it's stable, I get worse results compared to -8 on my best core, -12 on the 2nd best core, and the others at around -15 to -18. So setting -30 on your cores doesn't mean you're going to get better results. Try changing your CO values; also, LLC is very important for PBO.


----------



## RichieRich25

deadfelllow said:


> I will give you an example. I have a ryzen 5600x. So lets compare my pbo settings.
> 
> With -30 all cores. Even tho its stable im getting bad results compare to -8 best core - 12 2nd best core and others are like -15 -18. So when you set -30 to your cores that doesnt mean that youre gonna get better results. Try to change your co values also LLC is very important for PBO.


Ok, I see. I used Igor's Hydra to run a test to see which cores were my best and then played around with the values, but I will redo it and see what I get


----------



## Sam64

Whatever you do with the 5950X and CO, it will not get you to an 18k CPU score in TS (not that it matters). Still, to get the best CPU score in TS with a 5950X you should try SMT off.


----------



## RichieRich25

Sam64 said:


> Whatever you do with 5950X and CO, it will not get you to 18k cpu score in TS (not, that it matters) Still, to get the best cpu score in TS with 5950X you should try SMT-Off.











I scored 22 491 in Time Spy


AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11




www.3dmark.com





Your suggestion worked. Thanks for the tip. Was trying to figure out how ppl got there


----------



## Luggage

RichieRich25 said:


> I scored 22 491 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Your suggestion worked. Thanks for the tip. Was trying to figure out how ppl got there


TS freaks out with 32 threads :/


----------



## J7SC

...the 6900XT and its 'neighbors' had a party  ...there probably will be another party once I flash the 6950XT bios and load the latest drivers along with a good helping of MPT goodness...


----------



## LtMatt

RichieRich25 said:


> Does anyone have a 5950x, how are people getting 18000-19000 CPU score. I'm stuck at the 16000s. I'm using curve optimizer with ppt at 225, tdc 150 edc 170. Are they using an all core clock at 4.8?


For Timespy, disabling SMT should boost the CPU score a bit.


----------



## Azazil1190

Guys, how much do you think is a safe voltage for benching an XTXH (Toxic Extreme)?
1.23v?


----------



## LtMatt

Azazil1190 said:


> Guy's how much you think is the safe voltage for benching an xtxh (toxic extreme )
> 1.23v ?


Stock is the only true safe value IMO. However, I go as high as 1.250v for short benchmarks.


----------



## Azazil1190

LtMatt said:


> Stock is the only true safe value IMO. However, I go as high as 1.250v for short benchmarks.


Thnx Matt!
Yes, only for one run of Superposition.
I have a competition vs my friend's FE 3090 Ti


----------



## LtMatt

Azazil1190 said:


> Thnx Matt!
> Yes only for one run at superposition.
> I have a competition vs fe 3090ti of my friend


Could be a tough one, that. Superposition favours green, but good luck.


----------



## deadfelllow

Azazil1190 said:


> Thnx Matt!
> Yes only for one run at superposition.
> I have a competition vs fe 3090ti of my friend


Even if you set 1.3V on your GPU, Superposition won't even use that much power during the test. It's mostly 3DMark where it matters.


----------



## Azazil1190

LtMatt said:


> Could be a tough one that, Superposition favours green but good luck.


I know... but I will try.
We are close in the Superposition 4K Optimized score.
He scored 20607, and from what he said, that's the max OC of his card.
My score is 20582, so close enough.


----------



## Azazil1190

deadfelllow said:


> Even if you set 1.3V on your gpu. Superposition wont even use that much power for testing. Mostly 3Dmark matters.


Thnx dude!
So I will try for 1.25


----------



## LtMatt

In that case you should be able to beat that easily if so far you are only using 1.2v.


----------



## deadfelllow

Azazil1190 said:


> Thnx dude!
> So I will try for 1.25





LtMatt said:


> In that case you should be able to beat that easily if so far you are only using 1.2v.


Precisely. For me (Superposition) it wasn't even using [email protected]. In 3DMark, though, it was 580W xD


----------



## LtMatt

deadfelllow said:


> Precisely. For me(Superposition) it wasnt even using [email protected] In 3dmark tho it was 580 WxD


Balls to the wall then I say in Superposition.


----------



## Azazil1190

This is at stock 1.2, 2850-2950 and 2176 memory (I will try 2150 to see if I can improve my score)
FCLK 2176; I will try 2200
450W limit and 15% PL
Let's see if the higher voltage can help me
Or I can throw it out the window and buy a 3090 Ti 
We have another round too vs my Strix 6900 Top


----------



## tcclaviger

Totally stable while running Star Citizen. ASRock 6950 OCF. Absolutely beastly GPU. All protections remain in place, no EVC2.








Cracked into the FSU top 10 yesterday with it + the X3D.
MPT is a blessing for 6950 XTs! It will pull an insane amount of power if allowed to do so...

For benching I had the TDC limit set to 380A and PPT to 600W, and I was banging into both limiters at various times. I chickened out and didn't feed it more (which it wanted).

On Valid runs:
12896 PR
71169 FS Graphics score
25933 TS Graphics score

With 1100min/1325 max voltage set in MPT I was benching with clock min at 2850 and max at 3050, narrowing the window was a loss in score, needed more power to go higher.
2350 memory using FT2 at 1510mv.
2300 FCLK.
1200 SOC speed at 1000min mv/1275max mv.
I suspect I pulled an above average sample this time.
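A rough sketch of how those two limiters interact, for anyone tuning MPT: PPT caps total board power in watts while TDC caps core-rail current in amps, so which one clips first depends on the core voltage and on how much the non-core rails are drawing. The `other_rails_w` figures below are made-up illustrations, not measurements; only the 600W PPT / 380A TDC limits come from the run above.

```python
# Simplified model: PPT covers the whole board, TDC only the core rail.
def binding_limiter(vddc: float, other_rails_w: float,
                    ppt_w: float = 600.0, tdc_a: float = 380.0) -> str:
    """Return which MPT limiter clips the core first at a given core voltage."""
    core_cap_from_tdc_w = vddc * tdc_a           # max core power TDC allows (P = V * I)
    core_cap_from_ppt_w = ppt_w - other_rails_w  # core power left under the PPT cap
    return "TDC" if core_cap_from_tdc_w < core_cap_from_ppt_w else "PPT"

print(binding_limiter(1.325, other_rails_w=60.0))   # ~503 W vs 540 W headroom -> TDC
print(binding_limiter(1.325, other_rails_w=120.0))  # ~503 W vs 480 W headroom -> PPT
```

Same voltage, same limits, but SoC/memory load shifts which wall you hit, which is why both limiters can trip over one bench run.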


----------



## No-one-no1

I daily 1.25v  (on a legacy 6900XT)
I tried bumping up to 1.3v, but the scaling just starts tapering off too much to be worth it, I think.
1.2 does 2800MHz
1.25 does 2880MHz
1.3 does 2930MHz
"Set" voltages and MHz, on custom cooling to air with a junction temp peak of ~83°C under max load.
It depends on your junction temp, though. The higher the peak temp, the more careful you need to be with the voltages.
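Put numerically, the tapering is easy to see; a quick sketch (the data points are just the "set" values quoted above):

```python
# Voltage/frequency scaling from the "set" values above (legacy 6900 XT).
# Shows why 1.3 V stops being worth it: MHz gained per extra mV keeps shrinking.
points = [(1200, 2800), (1250, 2880), (1300, 2930)]  # (mV, MHz)

for (v0, f0), (v1, f1) in zip(points, points[1:]):
    slope = (f1 - f0) / (v1 - v0)  # MHz gained per additional mV
    print(f"{v0} -> {v1} mV: {slope:.1f} MHz/mV")
# 1200 -> 1250 mV: 1.6 MHz/mV
# 1250 -> 1300 mV: 1.0 MHz/mV
```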



Azazil1190 said:


> Guy's how much you think is the safe voltage for benching an xtxh (toxic extreme )
> 1.23v ?


----------



## Azazil1190

No-one-no1 said:


> I daily 1.25v  (on a legacy 6900XT)
> I tried bumping up to 1.3v, but the scaling just starts tapering off too much to be worth it I think.
> 1.2 does 2800MHz
> 1.25 does 2880MHz
> 1.3 does 2930MHz
> "Set" voltages and MHz. On custom cooling to air with junction temp peak of ~83degrees under max load.
> Depends on your junction temp though. The higher the peak temp the more careful you need to be with the voltages.


Thnx for the infos appreciate


----------



## deadfelllow

tcclaviger said:


> While running Star Citizen was totally stable. ASRock 6950 OCF. Absolute beastly GPU. All protections remain in place, no EVC2.
> View attachment 2561845
> 
> 
> Cracked into FSU top 10 yesterday with it + X3D.
> MPT is a blessing for 6950xts! It will pull an insane amount of power if allowed to do so...
> 
> For benching I had TDC limit to 380a and PPT to 600w and was banging both limiters at various times. I chickened out and didn't feed it more (which it wanted).
> 
> On Valid runs:
> 12896 PR
> 71169 FS Graphics score
> 25933 TS Graphics score
> 
> With 1100min/1325 max voltage set in MPT I was benching with clock min at 2850 and max at 3050, narrowing the window was a loss in score, needed more power to go higher.
> 2350 memory using FT2 at 1510mv.
> 2300 FCLK.
> 1200 SOC speed at 1000min mv/1275max mv.
> I suspect I pulled an above average sample this time.


[email protected] WOW. What is your cooling setup?


----------



## RichieRich25

No-one-no1 said:


> I daily 1.25v  (on a legacy 6900XT)
> I tried bumping up to 1.3v, but the scaling just starts tapering off too much to be worth it I think.
> 1.2 does 2800MHz
> 1.25 does 2880MHz
> 1.3 does 2930MHz
> "Set" voltages and MHz. On custom cooling to air with junction temp peak of ~83degrees under max load.
> Depends on your junction temp though. The higher the peak temp the more careful you need to be with the voltages.


So what's most likely to crash Time Spy? I'm on air. If I run 2635 min / 2735 max, I sometimes fail, but it's always test number 2. Temps are 56 to 59, junction never higher than 90. Even if I put my PC next to my AC and reduce temps, I will see a failed bench
25% of the time. Is this voltage related? PL is set to 350. FCLK set to 2000. ULV temp-dependent at 1200 max. CPU and RAM are stable.


----------



## Veii

Teasing


----------



## deadfelllow

Veii said:


> Teasing
> View attachment 2561866


Are you able to boot after loading the 6900 XT stock BIOS into MPT and changing the PPT/EDC/fabric clock values?

I was able to flash the 6950 XT BIOS to my Toxic EE LC. But when I change the PPT/EDC values through MPT, it boot-loops on restart.


----------



## Veii

deadfelllow said:


> Are you able to boot after loading 6900xt stock bios to MPT and change PPT EDC / Fabric clock values?











Can you access the BIOS with it?
If not, the GOP refuses your device ID

EDIT:
Just why would you do that?
You shouldn't load powerplay tables from a different device ID onto a different firmware
ID mismatch - but yes, it still functions both ways & flashes officially
Either old MPT with the old device ID, or new MPT from the new current BIOS

EDIT2:
Flashing normally doesn't work
The SMU device ID mismatches the ROM device ID
So hence XTX to XTXH can never boot, or half-boot at absolute best
You just shouldn't do that unless your amdvbflash can change PCI_DeviceID


----------



## tcclaviger

New scores uploaded with the 5950X in place. Still need more tuning time on the 5950X; it's behaving a little differently on this MB/AGESA, so the CPU scores are coming up a bit short of expected.

18007 - FSU
33397 -FSE
52985 - FS
24749 - TS
12344 - TSE

Couldn't beat my X3D Port Royal score.

My first result showing over 3GHz on the GPU \o/








I scored 18 007 in Fire Strike Ultra


AMD Ryzen 9 5950X, AMD Radeon RX 6950 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





Having some odd issues in FS, I expect that score to come up a few thousand points when done.


deadfelllow said:


> [email protected] WOW. What is your cooling setup?


Chiller set to 4°C water
Bykski block with LM
High flow, about 2 GPM through the blocks.

Gave it "whatever" power today, as in: take what you want, no limits. Saw 980W system power draw during TSE GT2 (read from the PSU). Shocking how much a single Navi 21 is capable of pulling.

At 525W GPU power the system power was around 810W, so... best guess, the GPU was sucking down nearly 700W today.
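For what it's worth, that "nearly 700W" figure falls out of a simple constant-overhead estimate (the wattages are the readings quoted above; treating the rest of the system as a fixed overhead is the assumption):

```python
# Calibrate system overhead from one reading where GPU power is known,
# then infer GPU draw from the later wall reading.
known_system_w = 810.0  # wall draw with the GPU reporting 525 W
known_gpu_w = 525.0
overhead_w = known_system_w - known_gpu_w  # ~285 W for CPU, fans, PSU loss, etc.

peak_system_w = 980.0  # wall draw seen during TSE GT2
est_gpu_w = peak_system_w - overhead_w
print(f"estimated GPU draw: ~{est_gpu_w:.0f} W")  # ~695 W, i.e. "nearly 700 W"
```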


----------



## HyperC

Okay, so I was getting Windows 11 crashes from that update 2 weeks ago; I blocked it and whatnot, and that problem has gone away. But now I get "ryzenmaster19 driver failed to open" crashes and game sound-loop crashes. I've been on a clean install on Intel since the switch. Is this just baked into the drivers no matter what? Or SAM? Anyone know of a fix? I want to say it started with the 22.5.1 drivers, but then again I haven't been playing a lot


----------



## tcclaviger

The easiest fix is Ryzen Software Slimmer: simply uninstall the RyzenMaster component baked into the Adrenalin driver. You lose nothing by doing so; it's a useless addition to the driver, only present to provide a CPU monitoring service for Zen CPUs.


----------



## LtMatt

tcclaviger said:


> The easiest fix is Ryzen Software Slimmer and simply uninstal the RyzenMaster component baked into the Adrenaline driver. You lose nothing by doing so, it's a useless addition to the driver only present to provide CPU monitoring service for Zen CPUs.


Another option is to just disable the RyzenMasterSDK scheduled task like I do. No more CPU options in GPU Tuning menu.


----------



## deadfelllow

Veii said:


> Can you access the bios with it ?
> If no, GOP refuses your deviceID
> 
> EDIT:
> Just why would you do that ?
> You shouldn't load different DeviceID powerplay tables onto a different firmware
> ID missmatch - but yes still functions both ways & flashes officially
> Either old MPT with old deviceID or new MPT from new current bios
> 
> EDIT2:
> Flashing normally doesn't work
> SMU DeviceID missmatches with ROM deviceID
> So hence XTX to XTXH can never boot, or half boot at absolute best
> Just shouldn't do that unless your amdvbflash can change PCI_DeviceID


Because when I flash the Sapphire 6950 XT Limited Edition BIOS to my Sapphire 6900 XT Extreme Edition, I can't change my clocks / power limit. But when I load the 6900 XT BIOS through MPT, I can. The thing is, when I change the PPT/EDC values it won't turn on.

PS: The Sapphire EE has 3 BIOSes.


----------



## Veii

deadfelllow said:


> cant change my clocks / power limit. But when i load 6900xt bios through mpt i can.


Device ID incompatibility
The BIOS needs a touch-up ~ both GOPs and the section before the MPT location
and then fix the checksum and sign it


----------



## jonRock1992

Azazil1190 said:


> Guy's how much you think is the safe voltage for benching an xtxh (toxic extreme )
> 1.23v ?


Not trying to egg you on, but I run 1262mV daily lol. No degradation in performance or stability. I'm using a custom loop, though, to keep temps in check.


----------



## Azazil1190

jonRock1992 said:


> Not trying to egg you on but I run 1262mV daily lol. No degradation in performance or stability. I'm using a custom loop though to keep temps in check.


For now I'm OK 
I won this round vs my friend's 3090 Ti, so I'll enjoy 1st place  until next time.
I scored 20653 at 1.23v in Superposition 4K vs the 3090 Ti's 20607.


----------



## J7SC

Azazil1190 said:


> For now im ok
> I win this round vs 3090ti of my friend so i enjoy the 1st place  until next time.
> I score 20653 1.23v at superposition 4k vs 20607 3090ti.


Now see what you made me do🥴 ...defend NVidia's honor w/ retired 2x2080Tis ! Fyi, they used a combined 760W for that (just GPUs, before CPU & other)...ambient was 24 C. 

As soon as I've flashed and modded the 6900XT's bios, I will try to defend AMD's honor....


----------



## HyperC

tcclaviger said:


> The easiest fix is Ryzen Software Slimmer and simply uninstal the RyzenMaster component baked into the Adrenaline driver. You lose nothing by doing so, it's a useless addition to the driver only present to provide CPU monitoring service for Zen CPUs.





LtMatt said:


> Another option is to just disable the RyzenMasterSDK scheduled task like I do. No more CPU options in GPU Tuning menu.
> View attachment 2561944


Thanks guys, this is weird. I couldn't find it in Task Scheduler, but I did find it in the driver folder


----------



## tcclaviger

My first attempt with SP4K on the 6950 XT. AMD punching back at 4K 

5th overall, with gaming MPT settings (not bench settings) and a 3DMark-valid driver config, so tessellation enabled etc.

3050 max / 2800 min GPU, 2320 FT2, ~550 watts for the GPU, 48°C hotspot, 31°C core, 35°C memory.


----------



## Azazil1190

tcclaviger said:


> My first attempt with SP4K on 6950xt AMD punching back at 4k
> 
> 5th overall, gaming MPT settings not Bench settings 3dmark valid driver config, so Tess enabled etc
> 
> 3050max/2800min GPU, 2320 FT2, ~550watts for gpu, 48c hotspot, 31c core, 35c memory.
> 
> View attachment 2562072


48c hotspot  and 31c core OMG


----------



## tcclaviger

Got fed up with seasonal shifts in climate so...decided to dictate climate by force🤣🤣


----------



## deadfelllow

Guys, I finally found the Sapphire 6900 XT EE GPU retention bracket screws and bought them. But I need help with the spring types.










So for springs, i need to be sure before buying it.
















So this is the original spring for the Sapphire 6900 XT EE GPU retention bracket screws (Sapphire gave me the specs).

There aren't any springs with a 1.7mm outer diameter at the small end; the shortest is 2mm. Also, the shortest length is 5mm.

I couldn't find any conical springs shorter than this.



















Thanks for help.


----------



## 2080tiowner

Hi all, is it really impossible to flash an XTXH BIOS onto an XTX board?

I have a 6900 XT Nitro+ SE that is not an XTXH chip...

I would just like to go above 2150 MHz on memory

Thanks!


----------



## 99belle99

2080tiowner said:


> Hi all, it's really impossible to flash XTXH bios on XTX board ?
> 
> I've a 6900xt nitro+ SE that is not XTXH chip...
> 
> I would like only go up than 2150 mhz on memory
> 
> Thanks !


No, that is not possible. And I would not force-flash it unless you have a dual-BIOS card.


----------



## J7SC

...I hope to test out the 6950XT BIOS swap within a week or less (need free time, and I'm trying to remember the Ubuntu / USB method using amdvbflash that can override the vendor check)...any updated links re. the process and the respective versions?

Before the update, I thought I'd also run some 8K comparisons between my three systems. Hopefully the 6900XT-turned-6950XT can add a bit to the score via the VRAM bump; FYI, water-cooled / ambient / ~400 W for the 6900XT (~500 W for the 3090, ~760 W for the 2x 2080 Ti).


----------



## tcclaviger

aaaand 4th, ahead of an LN2+EVC2 6900xt in a very proficient OCer's hands:









3DMark Fire Strike Ultra Hall of Fame: www.3dmark.com





Don't get the wrong idea, it's not typical; don't expect an air-cooled or ambient-loop 6950 XT to do this. My EVC2 is in the mail now, so I can take a crack at a 19,000 score.

CPU settings yield this:


----------



## 2080tiowner

99belle99 said:


> No, that is not possible. And I would not force-flash it unless you have a dual-BIOS card.


Is it possible to flash the TOXIC air-cooled BIOS (not an XTXH chip, like the SE, but with a higher clock boost)?

If yes, only under Linux?

Thanks!!!


----------



## J7SC

...'coals to Newcastle' for this thread, but still...


----------



## ZealotKi11er

I used to like benching the 6900 XT when we had 1.2 V limits. Now everyone beats my 2875 MHz @ 1.2 V in TS. Just going to wait for the 7900 XT + 7950X to get back on top.


----------



## LtMatt

deadfelllow said:


> Guys, I finally found the Sapphire 6900 XT EE GPU retention bracket screws and bought them. But I need help with the spring types.
> 
> View attachment 2562086
> 
> 
> So for the springs, I need to be sure before buying.
> 
> View attachment 2562087
> View attachment 2562091
> 
> This is the original spring for the Sapphire 6900 XT EE GPU retention bracket screws (Sapphire gave me the specs).
> 
> There aren't any springs with a 1.7 mm small outer diameter; the smallest is 2 mm. The shortest length is also 5 mm.
> 
> I couldn't find any conical springs shorter than that.
> 
> View attachment 2562088
> 
> 
> View attachment 2562089
> 
> 
> Thanks for the help.


Maybe the smallest ones you can get will work?


----------



## deadfelllow

LtMatt said:


> Maybe the smallest ones you can get will work?


I'm afraid it won't make proper contact.


----------



## vegetagaru

Hello everyone. I have a Sapphire Nitro+ 6900 XT, and lately I've been experiencing hotspot temps above 110°C when gaming. I've had the card for a year now and it never reached these kinds of values; the GPU temp while gaming is normally between 70 and 75°C.
I wanted to ask if anyone has a similar "problem".

My settings at the moment are:
min: 2550
max: 2650
voltage: 1175
fast timing
max freq: 2100
power: +15

MPT:
power limit: 320
TDC (A): 320

curve:

----------



## rodac

vegetagaru said:


> Hello everyone. I have a Sapphire Nitro+ 6900 XT, and lately I've been experiencing hotspot temps above 110°C when gaming. I've had the card for a year now and it never reached these kinds of values; the GPU temp while gaming is normally between 70 and 75°C.
> I wanted to ask if anyone has a similar "problem".


Yes, after about a year. High clocks and temps accelerate thermal paste degradation; a repaste may be the solution. It worked for me, but it is quite a bit of work and it voids the warranty. See @LtMatt's excellent thread.


----------



## vegetagaru

rodac said:


> Yes, after about a year. High clocks and temps accelerate thermal paste degradation; a repaste may be the solution. It worked for me, but it is quite a bit of work and it voids the warranty. See @LtMatt's excellent thread.


What I think is strange is that it only started doing this 1 to 2 weeks ago, and the only notable thing I did was update the drivers (I don't know which version I had installed before). I tried the last two versions and the same thing happened.


----------



## Azazil1190

vegetagaru said:


> What I think is strange is that it only started doing this 1 to 2 weeks ago, and the only notable thing I did was update the drivers (I don't know which version I had installed before). I tried the last two versions and the same thing happened.


Yep, you need a repaste, as rodac told you.
It's the only solution.
I did it on my Toxic Extreme too.
Follow Matt's guide.

One question, guys:
does anyone have a Strix 6900 XT LC (XTXH)?
Yesterday I ran some tests on my other system, and after a run of 3DMark TS at 400 W via MorePowerTool I saw a lower score than I expected, of course, plus one red light (of three) blinking above the GPU PCIe cables (I have extensions).

The PSU, I think.

At 4K gaming I don't have that issue.
2800-2900 max, 400 W via MorePowerTool (+15 PL),
1.2 V.


----------



## vegetagaru

I'm a bit unhappy about voiding the warranty, to be honest. I'm going to my local retailer to see what he says about it, whether I should repaste it even if it voids the warranty or whether there's another solution (more likely an RMA, as I've seen some people claiming they sent theirs in for RMA for the same reason).
I even tried running the card in the default Rage mode, and the temps get even crazier.


----------



## Azazil1190

vegetagaru said:


> I'm a bit unhappy about voiding the warranty, to be honest. I'm going to my local retailer to see what he says about it, whether I should repaste it even if it voids the warranty or whether there's another solution (more likely an RMA, as I've seen some people claiming they sent theirs in for RMA for the same reason).
> I even tried running the card in the default Rage mode, and the temps get even crazier.


You can carefully remove the void stickers, or you can go to a print shop and have some made, like I do.
Or, if you feel uncomfortable, you can try an RMA.


----------



## deadfelllow

vegetagaru said:


> I'm a bit unhappy about voiding the warranty, to be honest. I'm going to my local retailer to see what he says about it, whether I should repaste it even if it voids the warranty or whether there's another solution (more likely an RMA, as I've seen some people claiming they sent theirs in for RMA for the same reason).
> I even tried running the card in the default Rage mode, and the temps get even crazier.


For an air-cooled card, repasting should be easy. Sapphire doesn't give a **** about repasting your card; you need to do it yourself.


----------



## tcclaviger

Just thought I'd post this here since it can affect 6900 XTXH systems as well when pushed hard. I first thought the GPU was doing some safety shutdown, but it was not the GPU; it was definitely the PSU.

Had a scenario during max-OC benchmarking today, at over 3 GHz GPU speeds, where the system would trip OVP/OCP and hard-shutdown. I tested it a few times and was able to replicate it in almost exactly the same spot every time; power draw when it occurred was only between 650 and 750 W total system power, with the GPU between 450 and 550 W. The system, however, has proven stable under 980 W system power loads...

I was informed that Seasonic-based PSUs were having some challenges with noise on the 12 V rail causing the control IC to freak out, due to a design flaw in the sense circuitry. As a result, the PSU just keeps seeing the required voltage continuously rise and triggers OVP when it shouldn't. This is a 3-month-old Thor 1200. It was only occurring in one very specific scenario, at a specific point during the FSE Combined test.

I threw in a brand new AX1200i I had sitting here today and, lo and behold, no more OCP/OVP issues. I immediately picked up additional GPU OC room. Then, to torture it, I threw much harder-hitting settings at the GPU, 1.28 min / 1.325 max vGPU at 3050 max / 2900 min, and the AXi didn't even flinch.

If you're getting weird system shutdowns and are on a Seasonic-based PSU, it might be worth taking a look.


----------



## Azazil1190

tcclaviger said:


> Just thought I'd post this here since it can affect 6900 XTXH systems as well when pushed hard. I first thought the GPU was doing some safety shutdown, but it was not the GPU; it was definitely the PSU.
> 
> Had a scenario during max-OC benchmarking today, at over 3 GHz GPU speeds, where the system would trip OVP/OCP and hard-shutdown. I tested it a few times and was able to replicate it in almost exactly the same spot every time; power draw when it occurred was only between 650 and 750 W total system power, with the GPU between 450 and 550 W. The system, however, has proven stable under 980 W system power loads...
> 
> I was informed that Seasonic-based PSUs were having some challenges with noise on the 12 V rail causing the control IC to freak out, due to a design flaw in the sense circuitry. As a result, the PSU just keeps seeing the required voltage continuously rise and triggers OVP when it shouldn't. This is a 3-month-old Thor 1200. It was only occurring in one very specific scenario, at a specific point during the FSE Combined test.
> 
> I threw in a brand new AX1200i I had sitting here today and, lo and behold, no more OCP/OVP issues. I immediately picked up additional GPU OC room. Then, to torture it, I threw much harder-hitting settings at the GPU, 1.28 min / 1.325 max vGPU at 3050 max / 2900 min, and the AXi didn't even flinch.
> 
> If you're getting weird system shutdowns and are on a Seasonic-based PSU, it might be worth taking a look.


In my case I think it's the PSU, because it's old:
a be quiet! Power Zone 1000 W, 80 Plus Bronze.

It's the only part in my system that is ancient.


----------



## tcclaviger

Azazil1190 said:


> In my case I think it's the PSU, because it's old:
> a be quiet! Power Zone 1000 W, 80 Plus Bronze.
> 
> It's the only part in my system that is ancient.


My first AX1200i lasted 6 years, most of it pushing SLI setups of various types, starting with triple 980 Matrix cards plus a 1680 v2 at 4.7 or 4.8 GHz, depending on the season. It lived between 1000 and 1150 W for most of its life, and eventually couldn't regulate 12 V anymore, sagging deep into the 11.x range under load.

6 years is pretty fair, I think, considering the tortured life it led, lol.


----------



## deadfelllow

tcclaviger said:


> My first AX1200i lasted 6 years, most of it pushing SLI setups of various types, starting with triple 980 Matrix cards plus a 1680 v2 at 4.7 or 4.8 GHz, depending on the season. It lived between 1000 and 1150 W for most of its life, and eventually couldn't regulate 12 V anymore, sagging deep into the 11.x range under load.
> 
> 6 years is pretty fair, I think, considering the tortured life it led, lol.


My Corsair RM650 is still alive after 9+ years, lmao. I'm still using it in my second system.


----------



## Azazil1190

Before the 6900 XT, on the same system with the same PSU, I ran a Strix 3090 on the 1000 W XOC BIOS for benching and the EVGA XOC 500 for daily gaming.
I never had an issue. (I've had the PSU for about 4-5 years.)
I don't know; maybe the extension cables aren't seated right on the stock PSU cables, or the PSU is starting to die.

I will check the cables again later.


----------



## tcclaviger

Navi 21 hits harder on transients in general, but it also depends on the card SKU. Input filtering can affect how big the spike delivered to the PSU is and its duration. The OCF, for example, is known to Mike Tyson PSUs that can't cut it... see my Thor 1200 issue, for example, lol.

The same PSU laughed in the face of my shunt-modded + XOC BIOS 3080 Ti and never missed a beat <shrug>.

The new PSU let me turn it up to 12/10 instead of 11/10: 3040 max / 2880 min @ 1.255 vGPU.


Spoiler: Score



A new #1 score for 1080 High, behold:









CPU limited in Medium:


----------



## Azazil1190

tcclaviger said:


> Navi 21 hits harder on transients in general, but it also depends on the card SKU. Input filtering can affect how big the spike delivered to the PSU is and its duration. The OCF, for example, is known to Mike Tyson PSUs that can't cut it... see my Thor 1200 issue, for example, lol.
> 
> The same PSU laughed in the face of my shunt-modded + XOC BIOS 3080 Ti and never missed a beat <shrug>.
> 
> The new PSU let me turn it up to 12/10 instead of 11/10: 3040 max / 2880 min @ 1.255 vGPU.
> 
> 
> Spoiler: Score
> 
> 
> 
> A new #1 score for 1080 High, behold:
> View attachment 2562392
> 
> 
> CPU limited in Medium:
> View attachment 2562391


Which PSU do you think is OK?
I already have the RM1000x with my 6900 Toxic Extreme and 10900K, and it works like a charm. That is my second system.

For the 5950X and the Strix 6900 (the top model),
I'm between the be quiet! Dark Power Pro 1200 W Platinum and the Corsair RM1200x.


----------



## tcclaviger

TBH I'm massively out of touch with what's on the market for PSUs, but the same person who informed me of the Seasonic issue fully endorses the Super Flower Leadex Titanium series.


----------



## GTANY

Azazil1190 said:


> Which PSU do you think is OK?
> I already have the RM1000x with my 6900 Toxic Extreme and 10900K, and it works like a charm. That is my second system.
> 
> For the 5950X and the Strix 6900 (the top model),
> I'm between the be quiet! Dark Power Pro 1200 W Platinum and the Corsair RM1200x.


Beware: the be quiet! Dark Power Pro 1200 W Platinum is a multi-rail power supply, to be avoided when overclocking high-end parts. The Corsair has only one 12 V rail, which is far better.

But buying a power supply now is a bad choice: ATX 3.0 power supplies, which have the 16-pin 600 W connector for graphics cards, will be available soon. For the next graphics card generation, these newer power supplies will be a must-have.


----------



## Azazil1190

GTANY said:


> Beware: the be quiet! Dark Power Pro 1200 W Platinum is a multi-rail power supply, to be avoided when overclocking high-end parts. The Corsair has only one 12 V rail, which is far better.


Yeah, I forgot about single vs. multi rail.
Thanks!
So I'm going for the Corsair as my first option, and it's available in my country.


----------



## Azazil1190

Just ordered this:



https://www.corsair.com/us/en/Categories/Products/Power-Supply-Units/hxi-series-2017-config/p/CP-9020140-NA


----------



## Azazil1190

GTANY said:


> Buying a power supply now is a bad choice: ATX 3.0 power supplies, which have the 16-pin 600 W connector for graphics cards, will be available soon. For the next graphics card generation, these newer power supplies will be a must-have.


I know, but I can't leave it like this.
It's psychological; I think I'm getting lower performance at 4K.
When the ATX 3.0 PSUs come out, we'll make the move to new hardware.


----------



## D-EJ915

I'm using a 3090 in my current rig with the $200 Maingear 1200 Platinum and haven't had any issues, though granted, it's not a 6900 XT. Hard to beat the price.


----------



## J7SC

GTANY said:


> Beware, the bequiet dark pro 1200w platinum is a multi-rail power supply. To be avoided when overclocking high-end parts. The Corsair has only 1 12 V rail which is far better.
> 
> But buying a power supply now is a bad choice : the ATX 3.0 power supplies which have the 16pins 600 W connector for graphic cards will be available soon. For the next graphic cards generation, these latest power supplies will be a must-have.


FYI, I have an older *be quiet! Dark Power Pro 1200 W*, and like the newer models, it is switchable between single and multiple 12 V rails.

Re. GPU 'warning lights' at the PCIe terminals: they can be very annoying, but are not necessarily fatal. Usually they trigger when the 12 V rail drops below something like ~11.6 V... HWiNFO is great for showing such 'droop' under load, especially as PSUs get older and have been stressed continuously.

FYI, I purchased two Seasonic Prime Platinum PX-1300 units over the last year (my first Seasonic PSUs, btw), and so far at least, they're great re. 12 V rail stability, even with GPUs at over 520 W (3090) and 450 W (6900 XT via MPT); typically 11.94-12.04 V from idle to full load. I also have some older Antec HCP-1300s which are very good.

With new ATX formats looming, it is all but confirmed that most PSU vendors will offer dongles for their current high-power PSUs; but then again, if one can wait a bit for the new factory-fresh formats to hit the shelves, a bit of patience might also be an option...
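Since 12 V 'droop' under load keeps coming up as the telltale sign of an aging or overwhelmed PSU, here is a minimal sketch of the kind of check you could run over a logged sensor trace. It assumes you have already exported 12 V rail readings (e.g. from a HWiNFO log) into a plain list of volts; the ~11.6 V threshold and the sample values are illustrative assumptions from this thread, not a specification.

```python
# Minimal sketch: scan logged 12 V rail samples for "droop" below a
# chosen threshold (~11.6 V is where warning LEDs reportedly trigger).
# The sample list stands in for a real sensor-log export; the threshold
# and readings are assumptions from the thread, not an official spec.

DROOP_THRESHOLD_V = 11.6

def find_droop(samples, threshold=DROOP_THRESHOLD_V):
    """Return (worst_voltage, indices_below_threshold) for 12 V readings."""
    below = [i for i, v in enumerate(samples) if v < threshold]
    return min(samples), below

# Illustrative readings: healthy idle, then sag under benchmark load.
readings = [12.02, 11.98, 11.95, 11.71, 11.58, 11.63, 11.55, 11.96]

worst, dips = find_droop(readings)
print(f"worst 12V reading: {worst:.2f} V, samples below threshold: {dips}")
```

If the worst reading keeps landing in the low 11.x range under load, that matches the end-of-life behaviour described above.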


----------



## tommyd2k

Treated my Powercolor 6900 XT with liquid metal. Hotspot temps dropped insanely for me. I had been having issues with high hotspot temps since I water-cooled the card almost a year ago.
I tried half a dozen pastes, and hotspot temps would always get over 100°C. Now they average 60-70°C gaming, with a high of 82°C running Time Spy.
Highest Time Spy score before: 24,121
New Time Spy record: 24,663
With some practice, 25k maybe?
Definitely worth doing in my case. I LM'd the monoblock as well, which took 3-7 degrees off CPU per-core temps, adding 25-50 MHz of boost.


----------



## tommyd2k

tcclaviger said:


> Just thought I'd post this here since it can affect 6900 XTXH systems as well when pushed hard. I first thought the GPU was doing some safety shutdown, but it was not the GPU; it was definitely the PSU.
> 
> Had a scenario during max-OC benchmarking today, at over 3 GHz GPU speeds, where the system would trip OVP/OCP and hard-shutdown. I tested it a few times and was able to replicate it in almost exactly the same spot every time; power draw when it occurred was only between 650 and 750 W total system power, with the GPU between 450 and 550 W. The system, however, has proven stable under 980 W system power loads...
> 
> I was informed that Seasonic-based PSUs were having some challenges with noise on the 12 V rail causing the control IC to freak out, due to a design flaw in the sense circuitry. As a result, the PSU just keeps seeing the required voltage continuously rise and triggers OVP when it shouldn't. This is a 3-month-old Thor 1200. It was only occurring in one very specific scenario, at a specific point during the FSE Combined test.
> 
> I threw in a brand new AX1200i I had sitting here today and, lo and behold, no more OCP/OVP issues. I immediately picked up additional GPU OC room. Then, to torture it, I threw much harder-hitting settings at the GPU, 1.28 min / 1.325 max vGPU at 3050 max / 2900 min, and the AXi didn't even flinch.
> 
> If you're getting weird system shutdowns and are on a Seasonic-based PSU, it might be worth taking a look.


I have one of these in my system, the Asus ROG 1000 W.
It powers my 6900 XT and 5950X no problem. It's also a rebranded Seasonic. I haven't run many Fire Strike tests or run my GPU over 2900 MHz; I'll work on my Fire Strike score and see how it goes.
I was looking at the Thor 1200 W but decided on the 1000 W, one of the rare times I went with the cheaper option. I always make sure I buy stuff I can return if I need to, and then I put it to the test.


----------



## Butanding1987

Old habits die hard.


----------



## tcclaviger

Hah, I love that ASRock is the only one not marking it up.



tommyd2k said:


> I have one of these in my system Asus ROG 1000w
> Powers my 6900xt and 5950x no problem. It's also a rebranded Seasonic, I haven't run many Fire Strike tests or ran my GPU over 2900Mhz.. I'll work on my Fire Strike score and see how it goes.
> I was looking at the Thor 1200w, but decided on the 1000w. One of the rare times I went with the cheaper option. I always make sure I buy stuff I can return if I need to, then I put it to the test.


It didn't become an issue until I started pushing near the limits of the card; I don't think it'll be a widespread problem with the 6900/6950 and Seasonic. It may only be the OCF, due to its input filtering design paired with its unique VRM design.

It may even be so specific, outside of Ampere use, as to require OCF + high OC + high vGPU + C8E, due to the combination of the OCF's VRM design, the C8E's unique PCIe topology, and the high voltages only achievable on KXTX or XTX/XTXH when using an EVC2.

The fundamental problem in the design isn't unique to my system, however. The source is Jon Gerow, aka Jonny Guru:

"An increasing high-frequency oscillation is introduced back to the PSU from the PCIe slot when using Ampere cards, and that causes the supervisor IC to freak out. Removing the +12V sense, or just putting a ferrite bead on the +12V sense, prevents the PSU from shutting down."


----------



## tommyd2k

The Powercolor 6950 XT is for sale on Amazon for $1,099.00 here, if anyone is price shopping. I'd have to buy my girl some diamonds if I wanted to get away with buying one right now, though. I'll wait for the 7000 series, but damn, where were these deals 6 months ago? It feels like I'm passing up a bargain.


----------



## Adel_F16

Guys, I have a problem that is driving me nuts.

I sold my Gigabyte 6900 XT to buy a Powercolor 6950 XT Red Devil. As usual, I decided to change the thermal pads and apply liquid metal (Conductonaut), still using the Red Devil air cooler.

The problem is that the heatsink does not make contact with the GPU die, and nothing is in its way. I wasted $30 on multiple thermal pads, removing, replacing, etc., and nothing makes a difference.

When shining a flashlight from the other side of the card, I can see the light when looking carefully between the GPU die and the heatsink.

So what am I doing wrong? I tried with thermal pads, without them, and with both a very thin and a thick layer of LM (the thick layer seems to help: the hotspot in FurMark instantly hits 115°C instead of the card powering off immediately from too much heat...).

The spacers the bolts go through do not touch the PCB, but I assume they have to. The spring screws are garbage, as they are not made to be used multiple times. Lol, I even tried tightening them so much that one bolt broke inside the thread. I ordered 20 new ones from Amazon and have them now. Luckily the threads are open from the other side, so removing the broken bolt was no problem at all.

Does someone have a Red Devil, and can you please send me a picture of the side where the GPU spacers are?

I even got so mad, because this GPU has been in this condition for 2 weeks, that I accidentally broke a GPU header (under MJ1005 on the PCB). Does someone know what this header is called, so I can order a new one and solder it in place?

I will post pictures soon (I'm not at home right now).


----------



## 99belle99

Jaysus, you are going through the wars. How did you manage to break the header? I don't have any advice, as I don't have a Red Devil card, but you are obviously not mounting it correctly.


----------



## RichieRich25

Adel_F16 said:


> Guys, I have a problem that is driving me nuts.
> 
> I sold my Gigabyte 6900 XT to buy a Powercolor 6950 XT Red Devil. As usual, I decided to change the thermal pads and apply liquid metal (Conductonaut), still using the Red Devil air cooler.
> 
> The problem is that the heatsink does not make contact with the GPU die, and nothing is in its way. I wasted $30 on multiple thermal pads, removing, replacing, etc., and nothing makes a difference.
> 
> When shining a flashlight from the other side of the card, I can see the light when looking carefully between the GPU die and the heatsink.
> 
> So what am I doing wrong? I tried with thermal pads, without them, and with both a very thin and a thick layer of LM (the thick layer seems to help: the hotspot in FurMark instantly hits 115°C instead of the card powering off immediately from too much heat...).
> 
> The spacers the bolts go through do not touch the PCB, but I assume they have to. The spring screws are garbage, as they are not made to be used multiple times. Lol, I even tried tightening them so much that one bolt broke inside the thread. I ordered 20 new ones from Amazon and have them now. Luckily the threads are open from the other side, so removing the broken bolt was no problem at all.
> 
> Does someone have a Red Devil, and can you please send me a picture of the side where the GPU spacers are?
> 
> I even got so mad, because this GPU has been in this condition for 2 weeks, that I accidentally broke a GPU header (under MJ1005 on the PCB). Does someone know what this header is called, so I can order a new one and solder it in place?
> 
> I will post pictures soon (I'm not at home right now).


I have a 6900 XT Red Devil and also repasted two of my friends' GPUs recently. To remount the heatsink correctly, you have to push down on the board until you feel it go snugly into place. Don't be scared to push down, but obviously don't be aggressive. You really have to press firmly, and you'll almost feel it lock into place. You can actually look at the memory chips and see whether they are touching the thermal pads; if they are touching with no gaps, then it is on firmly. The washer mod works amazingly on the Red Devil; it dropped my hotspot temps by around 15°C.


----------



## Adel_F16

RichieRich25 said:


> I have a 6900 XT Red Devil and also repasted two of my friends' GPUs recently. To remount the heatsink correctly, you have to push down on the board until you feel it go snugly into place. Don't be scared to push down, but obviously don't be aggressive. You really have to press firmly, and you'll almost feel it lock into place. You can actually look at the memory chips and see whether they are touching the thermal pads; if they are touching with no gaps, then it is on firmly. The washer mod works amazingly on the Red Devil; it dropped my hotspot temps by around 15°C.


I can feel the heatsink locking onto the PCB, but there is still no contact. It looks like my heatsink is bent a little, because there is more space between the PCB and the heatsink on the right side of the card (near the DP ports).

Do you know the size of the standard thermal pads on the memory?


----------






## RichieRich25

Adel_F16 said:


> I can feel the heatsink locking on the pcb but still there is no contact. It looks like my heatsink is bent a little bit because there is more space on the right side of the card between the pcb and heatsink (near the dp ports).
> 
> Do you know the size of those standard thermal pads on the memory?


My heatsink is also bent. Two of my friends have the same card, and their Red Devils were perfect; even when mounting the heatsink back on, it just fell into place. I noticed that on mine, one of the protruding screw mounts is slightly bent, so when I mount the card I have to wiggle it a little until it sets in, and then push down to get a proper mount.


----------



## Adel_F16

RichieRich25 said:


> My heatsink is also bent. Two of my friends have the same card, and their Red Devils were perfect; even when mounting the heatsink back on, it just fell into place. I noticed that on mine, one of the protruding screw mounts is slightly bent, so when I mount the card I have to wiggle it a little until it sets in, and then push down to get a proper mount.


I removed the protrusions, lol. I put 2 plastic washers I had left over on the outside of the PCB, then massaged my GPU by pushing it down bit by bit and tightening.

Now the heatsink does make contact, but it could be better. I've ordered new plastic washers. Thank you for your effort; I appreciate it a lot!

Better than stock temps at least, but it should be better.


----------



## RichieRich25

I'm noticing that all of a sudden my average memory clock in Time Spy is down to 2120. It used to always be 2138, and now it's lower. Has anybody ever experienced this?


----------



## RichieRich25

RichieRich25 said:


> I'm noticing that all of a sudden my average memory clock in Time Spy is down to 2120. It used to always be 2138, and now it's lower. Has anybody ever experienced this?


Never mind, figured it out. I had SignalRGB running in the background, which was taking up some performance.


----------



## RichieRich25

A friend challenged me in Time Spy Extreme with his 3090. But I'm running into an issue where sometimes my average memory clock is 2138, like it's always been, but most of the time it's lower, like 2105. When I check comparable scores from other people, and my own from the past, they say 2138. What changed? Anything I should be looking for? My scores seem to be much lower now; not sure if that's the culprit.


----------



## J7SC

RichieRich25 said:


> A friend challenged me in Time Spy Extreme with his 3090. But I'm running into an issue where sometimes my average memory clock is 2138, like it's always been, but most of the time it's lower, like 2105. When I check comparable scores from other people, and my own from the past, they say 2138. What changed? Anything I should be looking for? My scores seem to be much lower now; not sure if that's the culprit.


The only time I experienced less than 2138 FT (2150 FT set) is when I run an MPT PL > 450 W, and even then inconsistently.

Somewhat related: I ran multiple tests from 2100 ST to 2150 FT VRAM just now, and 2150 FT got the fastest fps (...again). BTW, I did flash the 6950 XT BIOS via Ubuntu a few days ago but ran into the GPU safe-mode issue... then fooled around with something else and couldn't boot anymore before I could try loading an older MPT profile (?), so for now I have flashed back (dual-vBIOS card 😀)... I will try again on the weekend, hopefully remembering what other folks here did re. the sequence in that situation!?


----------



## RichieRich25

J7SC said:


> The only time I experienced less than 2138 FT (2150 FT set) is when I run an MPT PL > 450 W, and even then inconsistently.
> 
> Somewhat related: I ran multiple tests from 2100 ST to 2150 FT VRAM just now, and 2150 FT got the fastest fps (...again). BTW, I did flash the 6950 XT BIOS via Ubuntu a few days ago but ran into the GPU safe-mode issue... then fooled around with something else and couldn't boot anymore before I could try loading an older MPT profile (?), so for now I have flashed back (dual-vBIOS card 😀)... I will try again on the weekend, hopefully remembering what other folks here did re. the sequence in that situation!?


I don't think it's a power issue. When I check the graphs, it happens at the beginning of each test; it's like the GPU is asleep for about 1 second and then kicks in for the rest of the test at 2138 to 2140. I turned off the DS settings in MPT, but I'm not sure why it's acting up like this. I did no 3DMark updates or GPU driver updates; it just started happening. I had a friend over who has the same card, and I plugged his card into my PC to run thermal checks after a repaste, so maybe it messed with some settings. I'll have to DDU and try again.
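For what it's worth, a single near-idle second at the start of a run is enough to explain averages like these. A quick sketch of the time-weighted math (the durations and clock values below are illustrative guesses, not measurements):

```python
# Sketch: time-weighted average clock, showing how one near-idle second
# at the start of a run drags the reported average down. The segment
# durations and clocks are illustrative assumptions, not measured data.

def avg_clock(segments):
    """segments: list of (seconds, mhz); returns time-weighted average MHz."""
    total_time = sum(t for t, _ in segments)
    return sum(t * mhz for t, mhz in segments) / total_time

steady = avg_clock([(60.0, 2138)])               # whole run at full clock
sleepy = avg_clock([(1.0, 100), (60.0, 2138)])   # 1 s "asleep", then full clock

print(f"steady run: {steady:.0f} MHz, with 1 s idle start: {sleepy:.0f} MHz")
```

With a ~60 s test, one second near idle pulls a steady 2138 MHz down to roughly 2105 MHz average, which matches the drop being described without any sustained clock loss.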


----------



## RichieRich25

So my score looks like this, but my average clock is so much lower than 3 days ago. Not sure what happened.


----------



## RichieRich25

My score vs. my friend's score (3090, 460 W PL). Beat him by changing the PL to 370, but has anybody got any tricks for Time Spy Extreme besides more power?


----------



## alceryes

Anyone else on an air cooler getting a hotspot temp that's 20°C+ higher than the next-highest temp after a good gaming session?

I'm getting into Assassin's Creed Odyssey. I'm playing with all graphics/eye-candy settings maxed at 1440p and am having absolutely zero issues with framerate (smooth as butter), but I'm seeing a peak hotspot temp hitting 97°C, a full 23°C higher than the next-highest temp. I'm wondering if the backside thermal pad mod I did is keeping the other temps down, explaining the larger-than-normal difference. Thoughts?

My VRAM isn't actually running that fast. Incorrect VRAM speed reporting is a known bug that's been around for over a year; it's mentioned in some past drivers' release notes.


----------



## J7SC

RichieRich25 said:


> My score vs friends score(3090 460pl). Beat him by the changing the PL to 370 but any body got any tricks for timespy extreme besides more power?


As posted before, I run a 6900 XT right next to a 3090 Strix in the same case (both water-cooled) in a work-and-play build. The 3090 can't catch the 6900 XT in regular Time Spy, but in Time Spy Extreme, much of 4K in general, and all ray tracing, it is the other way around. Per below, at 25°C ambient, my 6900 XT, even at moderate voltages and with downclocking etc., is no slouch, but RTX 3090s just go nuts at some things... all that said, both cards deliver more at the highest eye-candy settings than even a C1 OLED at 4K/120 Hz really needs.


----------



## GTANY

alceryes said:


> Anyone else on an air cooler getting a hotspot temp that's 20°C+ higher than the next-highest temp after a good gaming session?
> 
> I'm getting into Assassin's Creed Odyssey. I'm playing with all graphics/eye-candy settings maxed at 1440p and am having absolutely zero issues with framerate (smooth as butter), but I'm seeing a peak hotspot temp hitting 97°C, a full 23°C higher than the next-highest temp. I'm wondering if the backside thermal pad mod I did is keeping the other temps down, explaining the larger-than-normal difference. Thoughts?
> 
> View attachment 2562757
> 
> 
> My VRAM isn't actually running that fast. Incorrect VRAM speed reporting is a known bug that's been around for over a year; it's mentioned in some past drivers' release notes.


A hotspot temperature of 97°C for Assassin's Creed Odyssey is too high. I played this game a few weeks ago with a PowerColor 6900 XT Red Devil and the hotspot reached 85-90°C, around 20°C above the GPU edge temperature. ACO does not draw much power, only around 300 W, which is why the hotspot stays relatively low.
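For anyone eyeballing their own numbers, the rough rule of thumb discussed here (hotspot normally sits ~15-25°C above edge; a much larger gap usually points at poor die contact or dried-out paste) can be sketched as a quick check. This is purely an illustrative helper with a hypothetical threshold, not anything from AMD's tooling:

```python
def contact_check(edge_c: float, hotspot_c: float, max_delta: float = 25.0) -> str:
    """Flag a suspicious edge-to-hotspot gap (a common poor-contact symptom)."""
    delta = hotspot_c - edge_c
    if hotspot_c >= 110.0:
        # RDNA2 starts throttling around a 110°C junction temperature
        return f"throttle risk: hotspot {hotspot_c:.0f}C"
    if delta > max_delta:
        return f"check contact: delta {delta:.0f}C exceeds {max_delta:.0f}C"
    return f"ok: delta {delta:.0f}C"

# The 97°C hotspot with a 23°C gap reported above sits just inside a 25°C threshold
print(contact_check(74, 97))  # -> ok: delta 23C
```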


----------



## Cidious

Been skimming through this thread looking for some solid info on flashing. 

I own a 6900XT reference under an EKWB waterblock with LM, just as I ran my 6800XT reference before I killed it. 

Max TimeSpy is about 23300 points before hitting a hard wall. I tried fiddling with MPT further to get some more out of it, but other than power limits it won't let me change much else without dropping into the well-known 500 MHz mode. 

I am fully aware that flashing an XTXH bios won't work on XTX cards. Jayz just posted a flashing video, and there was a comment under it from a user named Stefan Halter claiming that flashing an LC bios gave him some gains.

Since I'm fully under water and temps are ridiculously low, I'd be willing to flash something onto it that gives a tad more voltage, maybe a bit higher memory OC, etc. 

My question: is it worth flashing an XTX LC bios (from PowerColor, for example, since I can't find the reference LC bios) to get a bit more daily juice out of it? (Not looking for 25,000 points in TimeSpy, more like a bit more headroom, in line with the XTXH and XTX LC Ultimate models.) 

And if it's worth it, I should do it under Linux to bypass the vendor lock, right? Or is anyone able to provide a reference LC bios? (It's not on TechPowerUp.) 

And if the answer is that it's not worth it: how do I get past all these annoying AMD protections in MPT? Whatever I change other than power limits results in the infamous 500 MHz mode. I've been trying to follow the guide on hardwareluxx.de for Navi 21, without much luck yet.
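Unrelated to the vendor lock itself, but before flashing any downloaded .rom it's worth a basic sanity check: PCI expansion ROM images (which Radeon vBIOS dumps are) begin with the 0x55 0xAA signature bytes. A minimal sketch; it only catches a corrupt or truncated download, not a BIOS that's wrong for your board:

```python
def looks_like_option_rom(data: bytes) -> bool:
    """PCI expansion ROM images (incl. Radeon vBIOS dumps) start with 0x55 0xAA."""
    return len(data) >= 2 and data[0] == 0x55 and data[1] == 0xAA

# e.g. rom = open("navi21_lc.rom", "rb").read()  # hypothetical filename
print(looks_like_option_rom(bytes([0x55, 0xAA]) + b"\x00" * 254))  # -> True
print(looks_like_option_rom(b"garbage"))  # -> False
```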


----------



## RichieRich25

J7SC said:


> As posted before, I run a 6900XT right next to a 3090 Strix in the same case (both water-cooled) in a work-play build. The 3090 can't catch the 6900XT in regular TimeSpy, but in TimeSpy Extreme & much of 4K in general (and all ray tracing), it is the other way around. Per below at 25 C ambient, my 6900XT, even at moderate voltages & w/downclocking etc, is no slouch, but RTX 3090s just go nuts in some things...all that said, both cards deliver more at the highest eye candy settings than even a C1 OLED 4K/120Hz really needs.
> View attachment 2562800


My bad, I didn't realize the picture of the scores never posted. My GPU score is 11549. One friend with a Red Devil is at 11505, and my other friend with a Red Devil (he didn't get lucky in the silicon lottery) is at 10910. My friend with the 3090's highest score is 11366, so we beat him by a small amount. I believe he's having issues with his card, though (EVGA FTW3): if he runs his power limit above 107 and his clock at +105, he gets a hard crash where the PC shuts off. He changed his PSU from a Corsair 850 to the HX1000, but it still shuts off.


----------



## J7SC

RichieRich25 said:


> My bad, I didn't realize the picture of the scores never posted. My GPU score is 11549. One friend with a Red Devil is at 11505, and my other friend with a Red Devil (he didn't get lucky in the silicon lottery) is at 10910. My friend with the 3090's highest score is 11366, so we beat him by a small amount. I believe he's having issues with his card, though (EVGA FTW3): if he runs his power limit above 107 and his clock at +105, he gets a hard crash where the PC shuts off. He changed his PSU from a Corsair 850 to the HX1000, but it still shuts off.


...depending on the version of the EVGA FTW3, there were some PCB power distribution issues. This run below w/3090 was handy (have higher ones) and is based on daily/24hr clocks as I'm not looking for a 'race'. In any case, I really like both cards, it is just that they have different strengths and weaknesses and generally speaking, 4K and ray tracing starts to tax the 6900XT relatively more.


----------



## RichieRich25

J7SC said:


> ...depending on the version of the EVGA FTW3, there were some PCB power distribution issues. This run below w/3090 was handy (have higher ones) and is based on daily/24hr clocks as I'm not looking for a 'race'. In any case, I really like both cards, it is just that they have different strengths and weaknesses and generally speaking, 4K and ray tracing starts to tax the 6900XT relatively more.
> View attachment 2562864


Damn that's great score. He has this one. We just like messing with him since hes the only Nvidia owner in the group.


----------



## RichieRich25

Quick question. I have a 6950 XT bios, and I know I can't load it without issues, but I copied some of the memory frequency values and loaded those. Now my memory says 3000 MHz instead of 2150 in TimeSpy. Is this harmful? Memory temps don't go past 60°C, and it's like 77°F in my PC room right now. Temps are usually around 55°C for memory.


----------



## Petet1990

I live near a Micro Center and the Liquid Devil 6900 XT is $949 there... cheaper than the Liquid Devil 6800 XT. And it seems buying the Liquid Devil is cheaper than buying the card and a waterblock separately. Does it make sense to do it this way, or am I missing something?


----------



## CS9K

Petet1990 said:


> I live near a Micro Center and the Liquid Devil 6900 XT is $949 there... cheaper than the Liquid Devil 6800 XT. And it seems buying the Liquid Devil is cheaper than buying the card and a waterblock separately. Does it make sense to do it this way, or am I missing something?


What you're seeing is accurate. Prices are nosediving right now with AMD, and even within the same brand, cards of various tiers are sometimes priced in ways that make _no_ sense whatsoever. The RX 6900 XT Liquid Devil Ultimate started out at $2599 at the Dallas Micro Center early last year. They sat on the shelves for over three months, as they should have. Good to see them down near reasonable prices.

Just keep a close eye on things, and if you see a deal that you think is good, and it fits your use-case, go for it.


----------



## Conenubi701

The definition of a CPU bottleneck.

My new 5800x3d Stock paired with my Liquid Devil Ultimate (Stock) vs my Old 2700x + 2.8ghz overclocked 2160mhz Memory OC LQU 6900xt on the same Motherboard / same RAM


----------



## RichieRich25

I'm pretty sure this question has been asked many times, but is it possible to flash the PowerColor Ultimate bios onto a regular 6900 XT? If so, how is it usually done? Never mind, I see that it isn't possible without hiccups or parameters getting stuck. I guess I will just wait for the 7900 XT to arrive and switch.


----------



## jonRock1992

Do you guys know if HWiNFO is accurate at reading actual GPU voltage? I get two very different values between GPU-Z and HWiNFO64. If I set 1293 mV in MPT, GPU-Z says 1.293 V under load, but HWiNFO64 says 1.23 V. It seems like the HWiNFO64 reading should be more accurate, because I doubt my GPU is actually getting the full 1.293 V.
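One plausible explanation (an assumption on my part, not confirmed for these two tools): one reading may be the set VID while the other is post-droop telemetry from the VRM. A toy loadline model shows how a ~60 mV gap can appear under load; the current and loadline figures below are purely illustrative, not measured values:

```python
def v_under_load_mv(v_set_mv: float, current_a: float, loadline_mohm: float) -> float:
    """Toy loadline model: delivered voltage = set VID minus I*R droop."""
    return v_set_mv - current_a * loadline_mohm

# Illustrative numbers only: ~250 A through a ~0.25 mOhm effective loadline
# turns a 1293 mV set point into roughly the 1.23 V telemetry reading.
print(v_under_load_mv(1293, 250, 0.25))  # -> 1230.5
```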


----------



## jonRock1992

I've been making some good progress with Superposition 4K Optimized. Jumped up to spot 25 in the leaderboards. Now that I can use the LC vbios, I can actually compete lol.


----------



## ptt1982

Adel_F16 said:


> Guys, I have a problem that is driving me nuts.
> 
> I sold my Gigabyte 6900 XT to buy a PowerColor 6950 XT Red Devil. As usual, I decided to change the thermal pads and put liquid metal (Conductonaut) on, still using the Red Devil air cooler.
> 
> The problem I get is that the heatsink does not make contact with the GPU die, and nothing is in its way. I wasted $30 on multiple thermal pads by removing, replacing, etc., and nothing makes a difference.
> 
> When putting a flashlight on the other side of the card, I can see the light when looking carefully between the GPU die and the heatsink.
> 
> So what am I doing wrong? I tried with thermal pads, without, with a very thin layer of LM and a thick layer of it (which seems to help: hotspot in Furmark instantly 115°C instead of powering off immediately from too much heat).
> 
> The spacers where the bolts go through do not touch the PCB, but I assume they have to. The spring screws are garbage, as they are not made to be reused. Lol, I even tried tightening them so much that one bolt broke inside the thread. I ordered 20 new ones from Amazon and have them now. The threads are luckily open from the other side, so removing the broken bolt was no problem at all.
> 
> Does someone have a Red Devil, and can you please send me a picture of the side with the GPU spacers?
> 
> I even got so mad, because this GPU has been in this condition for 2 weeks, that I accidentally broke the GPU header -.- (under MJ1005 on the PCB). Does someone know what this header is called so I can order a new one and solder it in place?
> 
> I will post pictures soon (I'm not at home right now).


I've gone through what you're experiencing. I think I reassembled my Red Devil card 30+ times, and always got more scratches on the air cooler/heatsink etc. It made me mad as hell, and I spent around 70€ on thermal pads and paste. It's an endless loop. I could never figure out why the heatsink wasn't getting good contact, and just like you, the temps shot up to 115°C immediately, or sometimes I could get them to stay around 90°C.

In the end, I waterblocked it and built a custom loop, with the 5600X in it too. I've been happy with the water cooling for a year. It's a decent card that OCs to 2650 MHz core and 2100 MHz memory, and temps stay well under control now: hotspot around 70-78°C depending on ambient, going up to 90°C due to how humid and hot our apartment gets here in Tokyo. That 90°C is on a 40-42°C day. The temps are fantastic when the ambient is around 22°C or so, hotspot typically 70-75°C during TimeSpy, which scored around 23600 at best, though with the latest drivers it's down to around 23300 points. That's around 5% faster than a stock 6950 XT, so I'm happy with it!


----------



## 99belle99

ptt1982 said:


> I've gone through what you're experiencing. I think I reassembled my Red Devil card 30+ times, and always got more scratches on the air cooler/heatsink etc. It made me mad as hell, and I spent around 70€ on thermal pads and paste. It's an endless loop. I could never figure out why the heatsink wasn't getting good contact, and just like you, the temps shot up to 115°C immediately, or sometimes I could get them to stay around 90°C.
> 
> In the end, I waterblocked it and built a custom loop, with the 5600X in it too. I've been happy with the water cooling for a year. It's a decent card that OCs to 2650 MHz core and 2100 MHz memory, and temps stay well under control now: hotspot around 70-78°C depending on ambient, going up to 90°C due to how humid and hot our apartment gets here in Tokyo. That 90°C is on a 40-42°C day. The temps are fantastic when the ambient is around 22°C or so, hotspot typically 70-75°C during TimeSpy, which scored around 23600 at best, though with the latest drivers it's down to around 23300 points. That's around 5% faster than a stock 6950 XT, so I'm happy with it!


Sounds like your rig would do wonders here in Ireland. It doesn't get that hot here.


----------



## alceryes

Conenubi701 said:


> The definition of a CPU bottleneck.
> 
> My new 5800x3d Stock paired with my Liquid Devil Ultimate (Stock) vs my Old 2700x + 2.8ghz overclocked 2160mhz Memory OC LQU 6900xt on the same Motherboard / same RAM
> 
> View attachment 2562927


There's about a 300MHz GPU clock difference between the old and the new. Can you lower the GPU clock a bit to match the old test and re-run?
Also, any reason why the graphics test 1 showed such a huge increase but the graphics test 2 showed none?

This is very useful if we can get a close-to-perfect apples-to-apples comparison. What were the CPU clocks set to, old vs. new? Were there any static OC settings, or did you just let the CPUs 'do their thing'?

Thanks again for posting. I'll be upgrading to an AM5 platform from my aging Z370 and am looking for a nice boost in overall performance, if not specifically graphics. Although I'm hoping the platform will allow the GPU to eke out a few more FPS on average.


----------



## Adel_F16

ptt1982 said:


> I've gone through what you're experiencing. I think I reassembled my Red Devil card 30+ times, and always got more scratches on the air cooler/heatsink etc. It made me mad as hell, and I spent around 70€ on thermal pads and paste. It's an endless loop. I could never figure out why the heatsink wasn't getting good contact, and just like you, the temps shot up to 115°C immediately, or sometimes I could get them to stay around 90°C.
> 
> In the end, I waterblocked it and built a custom loop, with the 5600X in it too. I've been happy with the water cooling for a year. It's a decent card that OCs to 2650 MHz core and 2100 MHz memory, and temps stay well under control now: hotspot around 70-78°C depending on ambient, going up to 90°C due to how humid and hot our apartment gets here in Tokyo. That 90°C is on a 40-42°C day. The temps are fantastic when the ambient is around 22°C or so, hotspot typically 70-75°C during TimeSpy, which scored around 23600 at best, though with the latest drivers it's down to around 23300 points. That's around 5% faster than a stock 6950 XT, so I'm happy with it!


This cooler is a real pain to work with. I bought thermal pads again, applied Conductonaut again, and placed 8 nylon washers on all the spring bolts, which I tightened so much that I twisted all the screws. I did this on purpose to apply as much pressure as I could.

Guess what... temps are okay. Stock was better.

My hotspot is the same as the stock hotspot, but the GPU temp is higher than stock. The delta between the hotspot and the GPU is a lot lower, but the GPU temp is unfortunately higher.

I can't reassemble the card anymore because of the twisted bolts; I'll have to drill them out in the future. I am also thinking about going full custom loop again.


----------



## alceryes

GTANY said:


> A hotspot temperature of 97°C for Assassin's Creed Odyssey is too high. I played this game a few weeks ago with a PowerColor 6900 XT Red Devil and the hotspot reached 85-90°C, around 20°C above the GPU edge temperature. ACO does not draw much power, only around 300 W, which is why the hotspot stays relatively low.


Thanks. Not sure what I can do about it right now. Probably just the limitations of the stock graphite TIM on the core.
I'll probably take the card apart to redo the TIM on both frontside and backside, but can't do it right now. Maybe in 1-2 months.


----------



## RichieRich25

Adel_F16 said:


> This cooler is a real pain to work with. I bought thermal pads again, applied Conductonaut again, and placed 8 nylon washers on all the spring bolts, which I tightened so much that I twisted all the screws. I did this on purpose to apply as much pressure as I could.
> 
> Guess what... temps are okay. Stock was better.
> 
> My hotspot is the same as the stock hotspot, but the GPU temp is higher than stock. The delta between the hotspot and the GPU is a lot lower, but the GPU temp is unfortunately higher.
> 
> I can't reassemble the card anymore because of the twisted bolts; I'll have to drill them out in the future. I am also thinking about going full custom loop again.


Damn, sorry you're having a hard time. I ended up buying new screws from eBay because I stripped one of the screws from over-tightening the first time I took it apart. I'm really surprised your temps aren't better, though. I repasted mine with a cheap Amazon paste, some SYY (don't remember the name), but it's my go-to paste now. Running a 365 W PL plus the extra 15% (418 W is what HWiNFO showed on a TimeSpy Extreme run), my average GPU temp is 60°C and the hotspot averages around 82°C with a max of 93°C. I want to watercool it so badly, but until I can unlock the memory and voltage properly I think it's a waste. When I game I rarely see 60°C, and my room is 76°F ambient on a good day.
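Since the thread mixes °F ambient readings with °C GPU temps, the conversion for anyone comparing rooms:

```python
def f_to_c(f: float) -> float:
    """Convert a Fahrenheit temperature to Celsius."""
    return (f - 32.0) * 5.0 / 9.0

# A 76°F room is roughly 24.4°C ambient.
print(round(f_to_c(76), 1))  # -> 24.4
```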


----------



## Takla

ptt1982 said:


> temps stay well under control now: hotspot around 70-78°C depending on ambient, going up to 90°C due to how humid and hot our apartment gets here in Tokyo. That 90°C is on a 40-42°C day. The temps are fantastic when the ambient is around 22°C or so, hotspot typically 70-75°C during TimeSpy, which scored around 23600 at best, though with the latest drivers it's down to around 23300 points. That's around 5% faster than a stock 6950 XT, so I'm happy with it!


Geez. Maybe buy a decent AC, too, after spending thousands on a PC...


----------



## LtMatt

I had similar issues with the 6750 XT Red Devil and poor contact after changing the thermal paste. I never really could figure out why.
I guess the 6900 XT version uses a similar mechanism and has the same issue. I ended up returning mine and getting another and just sticking with the stock application second time around. It was disappointing as I wanted to put liquid metal on it but it just didn’t make good enough contact to see improvement and it got worse. I did try washers too but it didn’t seem to make much if any difference.


----------



## Adel_F16

LtMatt said:


> I had similar issues with the 6750 XT Red Devil and poor contact after changing the thermal paste. I never really could figure out why.
> I guess the 6900 XT version uses a similar mechanism and has the same issue. I ended up returning mine and getting another and just sticking with the stock application second time around. It was disappointing as I wanted to put liquid metal on it but it just didn’t make good enough contact to see improvement and it got worse. I did try washers too but it didn’t seem to make much if any difference.


Looks like this issue with red devil coolers is more common than I thought..

The thing that bothers me the most is that I can't figure out why it is happening.. I even let 2 other people look at the card and we couldn't figure it out..

I've repasted dozens of cards over the years and done multiple custom loops with case modding, but this card, lol...


----------



## jonRock1992

Adel_F16 said:


> Looks like this issue with red devil coolers is more common than I thought..
> 
> The thing that bothers me the most is that I can't figure out why it is happening.. I even let 2 other people look at the card and we couldn't figure it out..
> 
> I repasted tens of cards through the years. Did multiple custom loops with case modding but this card lol..


The red devil ultimate cooler was trash on my card. I reached shut down due to thermal limits with the stock cooler with very modest overclocks. I put a water block on it and never looked back.


----------



## RichieRich25

Adel_F16 said:


> Looks like this issue with red devil coolers is more common than I thought..
> 
> The thing that bothers me the most is that I can't figure out why it is happening.. I even let 2 other people look at the card and we couldn't figure it out..
> 
> I repasted tens of cards through the years. Did multiple custom loops with case modding but this card lol..


Maybe I'll make a little tutorial on it. I've repasted mine and two of my friends' cards so far, but mine by far has the worst fitment issues. I want to redo the thermal pads on mine, so maybe when I do I'll make a video, and I might use Thermal Grizzly Extreme this time around.


----------



## supergt99

I had fitment issues with mine too. Several repastes. Using liquid metal now and Gelid pads. It seems to make good contact now, though even with LM I had some issues. Finally got it where I want it: 2750 MHz, 1.2 V with Vmin, 390 W PL, 53% fan.


----------



## LtMatt

A video would be good if anyone had success doing it and wants to create one.


----------



## RichieRich25

supergt99 said:


> I had fitment issues with mine too. Several repastes. Using liquid metal now and Gelid pads. It seems to make good contact now, though even with LM I had some issues. Finally got it where I want it: 2750 MHz, 1.2 V with Vmin, 390 W PL, 53% fan.


Are you water cooled or air cooled? And what are your temps?


----------



## supergt99

I’m still on stock cooler. At this point I’ll wait for next gen. Temps are 65-70 edge, mid 80’s hotspot.


----------



## RichieRich25

supergt99 said:


> I’m still on stock cooler. At this point I’ll wait for next gen. Temps are 65-70 edge, mid 80’s hotspot.


Wow, that's better cooling than mine at a 365 W PL. I'm around 58-60°C edge, 90-94°C hotspot with max fans in TimeSpy. In gaming I run only 120 fps at 1440p and pretty much stay around 50°C edge and 70°C hotspot. I don't even want to know what my temps would be at 390 W.


----------



## supergt99

Forgot to mention, these are gaming temps. Benching, it will hit 90°C hotspot. I'll run TimeSpy again when I get home.


----------



## J7SC

...max values I've ever seen w/ w-cooling, 25 C ambient and a good helping of MPT and clocks below...fyi, I really like the thermal putty on the back of the GPU chip connecting to an extra heatsink, as well as on the VRAM chips. Normal peak temps are about 6C - 7C lower...

...in other news, I successfully flashed my 1.175V XTX to the 6950 vbios via Ubuntu, but still have problems w/ CSM / smart access memory  ...another weekend coming up to try to conquer it.


----------



## jonRock1992

J7SC said:


> ...max values I've ever seen w/ w-cooling, 25 C ambient and a good helping of MPT and clocks below...fyi, I really like the thermal putty on the back of the GPU chip connecting to an extra heatsink, as well as on the VRAM chips. Normal peak temps are about 6C - 7C lower...
> 
> ...in other news, I successfully flashed my 1.175V XTX to the 6950 vbios via Ubuntu, but still have problems w/ CSM / smart access memory  ...another weekend coming up to try to conquer it.
> View attachment 2563330


If you're using an Asus motherboard, there is no way to get past the CSM issue with the 6950XT vBIOS. I had that issue with the dark hero, but then I switched to the MSI X570S Carbon, and I can get smart access memory with the 6950XT vBIOS.


----------



## RichieRich25

J7SC said:


> ...max values I've ever seen w/ w-cooling, 25 C ambient and a good helping of MPT and clocks below...fyi, I really like the thermal putty on the back of the GPU chip connecting to an extra heatsink, as well as on the VRAM chips. Normal peak temps are about 6C - 7C lower...
> 
> ...in other news, I successfully flashed my 1.175V XTX to the 6950 vbios via Ubuntu, but still have problems w/ CSM / smart access memory  ...another weekend coming up to try to conquer it.
> View attachment 2563330


I'm really contemplating getting a waterblock. My wife lost a bet and owes me some dough, so I'm debating between a waterblock and a 4K 144Hz monitor. If I get a waterblock, what's a good one for the Red Devil? Thinking Alphacool, but I'm unsure; I want the best watercooling setup.


----------



## J7SC

RichieRich25 said:


> I'm really contemplating getting a waterblock. My wife lost a bet and owes me some dough, so I'm debating between a waterblock and a 4K 144Hz monitor. If I get a waterblock, what's a good one for the Red Devil? Thinking Alphacool, but I'm unsure; I want the best watercooling setup.


Igor's Lab released this one yesterday...I usually do full custom loops & blocks (like Phanteks a lot) but this Alphacool combo looks, well, 'cool' and seems to perform quite well for a hybrid between AIO and custom...the vid is in German but has graphs and links to an English text page.


----------



## RichieRich25

J7SC said:


> Igor's Lab released this one yesterday...I usually do full custom loops & blocks (like Phanteks a lot) but this Alphacool combo looks, well, 'cool' and seems to perform quite well for a hybrid between AIO and custom...the vid is in German but has graphs and links to an English text page.


Yes, I've researched their all-in-one watercooling and block, but I don't see one listed for the Red Devil, unless I'm not looking in the right spot. I've been tempted to just get the Liquid Devil 6950, but with the new GPUs around the corner that might be a silly decision. I'm one of those silly people who upgrade their GPU often, but if I can get a solid 4K 144Hz card for competitive gaming I'll stop there. This 6900 XT has definitely surprised me at 4K in titles where fps actually matters, but I want something a bit stronger and then I'm happy. I'll leave my PC alone for a while and enjoy it.


----------



## J7SC

RichieRich25 said:


> (...)* I'll leave my PC alone for a while and enjoy it*


...famous last words department...😁


----------



## RichieRich25

J7SC said:


> ...famous last words department...😁


Lol, you're right.


----------



## RichieRich25

These are my temps with a 365 W PL in MPT. Almost 100°C on the hotspot. Ambient temps in the room are around 74-75°F. Seems like the thermal paste I'm using is starting to wear off again, lol. Going to try Thermal Grizzly Extreme and see where it gets me.


----------



## Conenubi701

alceryes said:


> There's about a 300MHz GPU clock difference between the old and the new. Can you lower the GPU clock a bit to match the old test and re-run?
> Also, any reason why the graphics test 1 showed such a huge increase but the graphics test 2 showed none?
> 
> This is very useful if we can get a close-to-perfect apples to apples comparison. What were the CPU clocks set to, old vs. new? Were there any static OC settongs or did you just let the CPUs 'do their thing'?
> 
> Thanks again for posting. I'll be upgrading to an AM5 platform from my aging Z370 and am looking for a nice boost in overall performance, if not specifically graphics. Although I'm hoping the platform will allow the GPU to eke out a few more FPS on average.


Both CPUs were set to stock clocks. The 2700X had an SOC voltage bump to try to handle the 4 sticks of 3200 MHz RAM at CL14, since Zen 1 and Zen+ have awful IMCs, plus a slight negative offset on the core voltage to lower temps.

5800x3D was stock, but now I'm at a -0.2250mv offset on the core, and a -0.0660mv offset on the SOC since the CPU and IMC don't need that much voltage to work.


The 6900XT paired with the 2700X was at a 2798 MHz overclock on the core and 2162 MHz on the memory. I think you're talking about the "average clock" being lower on the 2700X run, but that's because the 2700X was choking the 6900XT and not feeding it frames, so the GPU wasn't working/clocking as high. A lighter load means a lower average clock, and that lighter load was caused by the CPU bottlenecking the GPU.

The 6900XT run paired with the 5800X3D was fully stock on the lighter OC BIOS (less wattage than the Unleash BIOS).


The reason GT1 shows a massive difference is that it's a more CPU-sensitive test, while GT2 is less CPU-bound. Even so, GT2 with the stock GPU and the 5800X3D ended up matching a big GPU overclock paired with the 2700X.

I can definitely run Fire Strike again with a 200 MHz lower clock on the GPU to see where it ends up.
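The GT1-vs-GT2 behaviour described above follows from the simplest possible pipeline model: delivered fps is capped by whichever stage is slower. A toy illustration with made-up fps caps, just to show the shape of the effect:

```python
def delivered_fps(cpu_fps_cap: float, gpu_fps_cap: float) -> float:
    """The slower stage of the CPU -> GPU pipeline sets the frame rate."""
    return min(cpu_fps_cap, gpu_fps_cap)

# CPU-sensitive test (like GT1): a slow CPU starves a fast GPU.
print(delivered_fps(cpu_fps_cap=90, gpu_fps_cap=160))   # -> 90 (CPU-bound)
# GPU-heavy test (like GT2): either CPU feeds frames faster than the GPU draws.
print(delivered_fps(cpu_fps_cap=220, gpu_fps_cap=140))  # -> 140 (GPU-bound)
```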


----------



## Bart

I can speak very highly of the Alphacool block for my Asus TUF 6900 XT, in regard to the Igor's Lab post above. I used Kingpin KPX thermal paste on mine, with some Fujipoly pads for the backplate. I run my PL at 375 W, core at 2750, and the thing struggles to reach 40°C under full load. It WILL get to 40 after an hour or so, but won't touch 41. Hotspot is about 20-25°C more, depending on benchmark/game/conditions, etc. Very happy with that block.


----------



## RichieRich25

Bart said:


> I can speak very highly of the Alphacool block for my Asus TUF 6900XT, in regard to the Igors Lab post above. I used Kingpin KPX thermal paste on mine, with some Fujipoly pads for the back plate. I run my PL at 375W, core at 2750, and the thing struggles to reach 40C under full load. It WILL get to 40 after an hour or so, but won't touch 41. Hotspot is about 20-25C more, depending on benchmark / game / conditions, etc. Very happy with that block.


Yeah, if I do go the watercooling route I will most likely go with Alphacool. Ever since I put the glass back on my case, my temps have shot up about 7-10°C, and I'm all about the way my PC looks in general, so I'll have to plan out the cleanest look for a watercooling setup without creating clutter inside. I have been contemplating creating my own case using Blender and my 3D printer, but that's going to take some time and planning; I essentially want to make my own open-air case.


----------



## ptt1982

RichieRich25 said:


> Wow that's better cooling than me at 365 pl. I'm around 58-60c edge, 90-94 hotspot with max fans on timespy. In gaming I run only 120fps 1440p and pretty much get around 50c edge and 70 hotspot. I don't even want to know what my temps are at 390


That's very good! I've got the same temps with watercooling. I will of course be using the same custom loop in future setups as well. I just got frikkin tired of tuning the GPU, and now that it maxes out at 405 W and stays under 93°C hotspot and under 63°C edge in worst-case scenarios, I don't want to touch it anymore. The OC is crap though, maxed at 2630 MHz with today's drivers, and for stability I'm running it at 2620 MHz daily. Still, 3% faster than a stock 6950 XT, so I'm happy with it. 

Next upgrade: RDNA 4 and Zen 5 3D V-Cache, an AM5 board, DDR5 and a Gen5 NVMe in 2024/2025. Gotta buy a new 4K 120Hz TV as well, so it will be a frikkin 5500 EUR upgrade.


----------



## J7SC

ptt1982 said:


> That's very good! I've got the same temps with watercooling. I will of course be using the same custom loop in future setups as well. I just got frikkin tired of tuning the GPU, and now that it maxes out at 405 W and stays under 93°C hotspot and under 63°C edge in worst-case scenarios, I don't want to touch it anymore. The OC is crap though, maxed at 2630 MHz with today's drivers, and for stability I'm running it at 2620 MHz daily. Still, 3% faster than a stock 6950 XT, so I'm happy with it.
> 
> Next upgrade: RDNA 4 and Zen 5 3D V-Cache, an AM5 board, DDR5 and a Gen5 NVMe in 2024/2025. Gotta buy a new 4K 120Hz TV as well, so it will be a frikkin 5500 EUR upgrade.


....4K 120Hz OLED is worth it, almost more than a top GPU, though you obviously need a good one to drive max frames and eye candy...


----------



## RichieRich25

J7SC said:


> ....4K 120HZ OLED is worth it, almost more than a top GPU, though you obviously need a good one to drive max frames and eye candy...


Yeah, I have a 55-inch in the living room. A little too big to play on; if I were on a controller it would be perfect. I did hook up my PC to the LG a while back when I got new flooring done in the game room, and I have to say that TV with HDR on was amazing. I was on my Destiny 2 grind and it was the best gaming experience I've had thus far. If LG were to make a smaller 34-inch version, I would grab one in a heartbeat. I have a good HDR monitor, but HDR on that thing was night and day; it took my eyes like 2 or 3 days to adjust.


----------



## J7SC

RichieRich25 said:


> Yeah, I have a 55-inch in the living room. A little too big to play on; if I were on a controller it would be perfect. I did hook up my PC to the LG a while back when I got new flooring done in the game room, and I have to say that TV with HDR on was amazing. I was on my Destiny 2 grind and it was the best gaming experience I've had thus far. If LG were to make a smaller 34-inch version, I would grab one in a heartbeat. I have a good HDR monitor, but HDR on that thing was night and day; it took my eyes like 2 or 3 days to adjust.


...I've gotten totally used to working & playing on the 48 inch OLED (on the right in the pic below); in fact, now I think the 40 inch Philips workstation monitor (on the left) is too small  . But with 55 inch or bigger as a desktop unit, it could be an issue, also because of the pixel pitch. We have an older 55 inch IPS HDR in our media room and the difference between the two 'seems more' than the 7 inch diagonal stretch would lead one to believe.


----------



## Adel_F16

Alienware has a 34" qhd ultrawide 175 hz qd-oled monitor. The AW3423DW.

Does someone know what kind of header (MJ1005) this is? Broke it off my card and need to order new one to solder it back on. I can't sync my rgb with my motherboard without this connector.

Pics from the internet:


----------



## RichieRich25

Adel_F16 said:


> Alienware has a 34" qhd ultrawide 175 hz qd-oled monitor. The AW3423DW.
> 
> Does someone know what kind of header (MJ1005) this is? Broke it off my card and need to order new one to solder it back on. I can't sync my rgb with my motherboard without this connector.
> 
> Pics from the internet:
> 
> View attachment 2563568
> 
> 
> View attachment 2563570


The Dell is nice. I got a chance to check it out about a week ago, but I think the LG has better colors, especially in HDR.

That plug I'm not sure about.


----------



## RichieRich25

J7SC said:


> ...I've gotten totally used to working & playing on the 48 inch OLED (on the right in the pic below); in fact, now I think the 40 inch Philips workstation monitor (on the left) is too small  . But with 55 inch or bigger as a desktop unit, it could be an issue, also because of the pixel pitch. We have an older 55 inch IPS HDR in our media room and the difference between the two 'seems more' than the 7 inch diagonal stretch would lead one to believe.
> View attachment 2563556


My LG. It's definitely too big to game close on, but I have the RGBIC lights on the back and it would make the gaming experience so much better. But my wife would not allow it lol.


----------



## J7SC

RichieRich25 said:


> My LG. It's definitely too big to game close on, but I have the RGBIC lights on the back and it would make the gaming experience so much better. But my wife would not allow it lol.


...something else to keep in mind with the _really big TV/ monitors_, and especially OLEDs, is that you can run them as 'ultrawide' where desired and/or it makes sense. This *> link* is for NVIdia cards, but it might also work for AMD.

pic from the linked article
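Link tool aside, the arithmetic behind a letterboxed 'ultrawide' custom resolution on a 16:9 panel is simple; here is a small sketch (Python; the helper name is mine, and most drivers want an even pixel height):

```python
def letterboxed(width_px: int, aspect_w: int, aspect_h: int) -> tuple[int, int]:
    # Keep the panel's full width and shrink the height to the wider
    # aspect ratio; round down to an even number of rows.
    h = width_px * aspect_h // aspect_w
    return width_px, h - (h % 2)

print(letterboxed(3840, 21, 9))   # 21:9 "ultrawide" on a 4K panel -> (3840, 1644)
print(letterboxed(3840, 32, 9))   # 32:9 "super ultrawide" -> (3840, 1080)
```

You then add that as a custom resolution in the driver and let the TV letterbox the rest.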


----------



## Azazil1190

J7SC said:


> ...something else to keep in mind with the _really big TV/ monitors_, and especially OLEDs, is that you can run them as 'ultrawide' where desired and/or it makes sense. This *> link* is for NVIdia cards, but it might also work for AMD.
> 
> pic from the linked article


Useful info, I'm gonna try it on mine 😉 LG OLED 65" C1 Evo panel


----------



## J7SC

Azazil1190 said:


> Useful info, I'm gonna try it on mine 😉 LG OLED 65" C1 Evo panel
> View attachment 2563618


Gorgeous setup !


----------



## Azazil1190

J7SC said:


> Gorgeous setup !


Thnx mate!


----------



## Azazil1190

Doesn't work right in games, or there's something I'm missing.
For desktop use it's OK.


----------



## J7SC

Azazil1190 said:


> Doesn't work right in games, or there's something I'm missing.
> For desktop use it's OK.
> View attachment 2563622
> 
> View attachment 2563623


...check the note at the bottom of the earlier-linked page re. bit setting, may be that could be it as they did run CP' 77 in ultrawide


----------



## Azazil1190

J7SC said:


> ...check the note at the bottom of the earlier-linked page re. bit setting, may be that could be it as they did run CP' 77 in ultrawide


Already did it.
I changed to 8-bit and I closed Auto HDR in Windows.
Nothing changed.

That's OK, I can live without it.
So back to 4K.


----------



## J7SC

Azazil1190 said:


> Already did it.
> I changed to 8-bit and I closed Auto HDR in Windows.
> Nothing changed.
> 
> That's OK, I can live without it.
> So back to 4K.


...I'm only about 1.5 ft away from the 48 at 4K when I game; that works almost like 'pseudo VR'


----------



## Azazil1190

J7SC said:


> ...I'm only about 1.5 ft away from the 48 at 4K when I game; that works almost like 'pseudo VR'


Agree, but I can't play from that distance, it makes me dizzy 
I'm a sofa guy  On my second system I have a 32" 1440p monitor, but once you go OLED you never go back


----------



## J7SC

Azazil1190 said:


> Agree, but I can't play from that distance, it makes me dizzy
> I'm a sofa guy  On my second system I have a 32" 1440p monitor, but once you go OLED you never go back


...dizzy squared = CP '77 + 4k Ultra + 'Psycho-level' ray tracing + DLSS Quality + HDR10 Pro at 15 inches viewing distance from OLED 

....this one via w-cooled RTX 3090 (btw, HDR screws up screenies a bit ), but 6900XT is also connected to the OLED as well...


----------



## crastakippers

Azazil1190 said:


> Useful info im gonna try to mine 😉 lg oled 65" c1 evo panel


My son plays all his PC games in the living room on the same TV.
He only has a Ryzen 3800X and RTX2080 driving it but all the games he plays run fine.
I don't know how people play PC games from a couch but there you go.  

EDIT: The couch reference was directed at my son btw not you. Watching him play world of tanks with a mouse and keyboard while lying on the couch. I don't know how kids do that.


----------



## Azazil1190

crastakippers said:


> My son plays all his PC games in the living room on the same TV.
> He only has a Ryzen 3800X and RTX2080 driving it but all the games he plays run fine.
> I don't know how people play PC games from a couch but there you go.
> 
> EDIT: The couch reference was directed at my son btw not you. Watching him play world of tanks with a mouse and keyboard while lying on the couch. I don't know how kids do that.


I play only with a controller, from the couch (when I have time). 
Keyboard and mouse isn't for me anymore


----------



## RichieRich25

Now you got me wanting to move my build to the living room lol. Hopefully some more OLED displays make their way to the market and I can get one at 34 inch. The Alienware is great, but it's the LG... something about that TV is special. This is how I'm set up now, using SignalRGB


----------



## RichieRich25

By the way, I did my final repaste (Thermal Grizzly Extreme), and I redid the thermal pads using Gelid Extreme. I wanted to make a video but I really had no time. With the same settings at 365 PL, my GPU temp dropped 10C and my hotspot temp dropped 12C from the SYY2 thermal paste during a Timespy test. Not sure if it's the paste or the new thermal pads, but I now feel more heat being dispersed through the heatsink. For those having issues with mounting: the trick I use is to check all the thermal pads on the memory on each side of the GPU and make sure they're making contact with the heatsink before I even screw it on. If you see any spaces or gaps, you have to push down harder on the card until you see all pads making contact.


----------



## Azazil1190

RichieRich25 said:


> By the way, I did my final repaste (Thermal Grizzly Extreme), and I redid the thermal pads using Gelid Extreme. I wanted to make a video but I really had no time. With the same settings at 365 PL, my GPU temp dropped 10C and my hotspot temp dropped 12C from the SYY2 thermal paste during a Timespy test. Not sure if it's the paste or the new thermal pads, but I now feel more heat being dispersed through the heatsink. For those having issues with mounting: the trick I use is to check all the thermal pads on the memory on each side of the GPU and make sure they're making contact with the heatsink before I even screw it on. If you see any spaces or gaps, you have to push down harder on the card until you see all pads making contact.


I repasted my Toxic Extreme too (second PC, my son's), but I still have to repaste my Strix LC top, and I'm thinking of putting LM on this thing.
On my Toxic I put Noctua NT-H2 and it's a bit better than the stock paste


----------



## RichieRich25

Azazil1190 said:


> I repasted my Toxic Extreme too (second PC, my son's), but I still have to repaste my Strix LC top, and I'm thinking of putting LM on this thing.
> On my Toxic I put Noctua NT-H2 and it's a bit better than the stock paste


I just hope this paste doesn't dry up like the regular Thermal Grizzly. But the temps are great. The Gelid pads dropped memory temps by 5C


----------



## Azazil1190

RichieRich25 said:


> I just hope this paste doesn't dry up like the regular Thermal Grizzly. But the temps are great. The Gelid pads dropped memory temps by 5C


I don't much like Grizzly on a CPU because, like you said, it dries very fast; I don't know if it behaves differently on a GPU. And of course the Extreme has different ingredients.
Usually I put Gelid Extreme on my GPUs and CPUs, but this time I took the risk of trying the Noctua on my 5950X, 10900K and Toxic after some reviews.
I still believe the Gelid is a bit better and stays wet much longer than the others


----------



## Azazil1190

Here is an example after a test (Forza Horizon 5 benchmark) on my Strix LC with stock paste. But at 400W and above it can easily reach 80-82C


----------



## Azazil1190

And here are the results after repasting my Toxic, and of course it's my win vs my friend's 3090 Ti FE  Btw, I want a lower hotspot temp to be OK


----------



## RichieRich25

Azazil1190 said:


> I don't much like Grizzly on a CPU because, like you said, it dries very fast; I don't know if it behaves differently on a GPU. And of course the Extreme has different ingredients.
> Usually I put Gelid Extreme on my GPUs and CPUs, but this time I took the risk of trying the Noctua on my 5950X, 10900K and Toxic after some reviews.
> I still believe the Gelid is a bit better and stays wet much longer than the others


Yea, after about 2 months the Kryonaut was dry to the point where one wipe was enough to remove the paste entirely. The SYY2 I removed today was still very gooey, which is why my temps just never changed much. I hear the Extreme is better and lasts longer, so we shall see. If I really want to keep my PC cold I should invest in an A/C for the room; the Vornado fan is not cutting it. It gets extremely hot in this room and there are no windows for a window AC or portable AC, so I have to invest in a ductless AC soon


----------



## Azazil1190

RichieRich25 said:


> Yea, after about 2 months the Kryonaut was dry to the point where one wipe was enough to remove the paste entirely. The SYY2 I removed today was still very gooey, which is why my temps just never changed much. I hear the Extreme is better and lasts longer, so we shall see. If I really want to keep my PC cold I should invest in an A/C for the room; the Vornado fan is not cutting it. It gets extremely hot in this room and there are no windows for a window AC or portable AC, so I have to invest in a ductless AC soon


Agree on the A/C. Those GPUs need it on hot days.
Right now my living room ambient is 27C; for me that's hot


----------



## RichieRich25

Azazil1190 said:


> Agree on the A/C. Those GPUs need it on hot days.
> Right now my living room ambient is 27C; for me that's hot


Just to show you how hot this room is: here in NY it's 20C outside and my room is at 24.7C. Colder outside than in the room, even with a fan blowing and my homemade evap cooler made from an old ice cooler I was going to throw out. Other rooms in the house are 21C with no AC on


----------



## 99belle99

I'm sitting here in 8 degrees Celsius and this is our summer. It's about 10 or 11 outside.


----------



## J7SC

Azazil1190 said:


> I repaste my toxic extreme too(second pc my son pc) but i have to repaste my strix lc top and thinking to put lm on this thing.
> To my toxic i put noctua h2 and is a bit better than the stock paste


...I like and sometimes use Thermal Grizzly, but it can dry out quicker. These days, I prefer Gelid GC Extreme for CPU and GPU dies - it's thicker and stays that way for longer. It has now been a year since I assembled it and temps are exactly where they were before. VRAM gets thermal putty, ditto for the back of the die to make contact with an extra heatsink (fan nearby). In pic below, the 3950X/6900XT combo is on the upper left, the 5950X/3090 lower right. Below that is the 6900XT during 'build-up'.


----------



## Azazil1190

J7SC said:


> ...I like and sometimes use Thermal Grizzly, but it can dry out quicker. These days, I prefer Gelid GC Extreme for CPU and GPU dies - it's thicker and stays that way for longer. It has now been a year since I assembled it and temps are exactly where they were before. VRAM gets thermal putty, ditto for the back of the die to make contact with an extra heatsink (fan nearby). In pic below, the 3950X/6900XT combo is on the upper left, the 5950X/3090 lower right. Below that is the 6900XT during 'build-up'.
> View attachment 2563698


So I'm not the only one who likes the Gelid Extreme.
So it isn't just my impression that that particular paste keeps longer than others


----------



## J7SC

Azazil1190 said:


> So I'm not the only one who likes the Gelid Extreme.
> So it isn't just my impression that that particular paste keeps longer than others


Gelid Extreme, especially in combination w/ softer pads or thermal putty on VRAM, is very helpful for getting good contact on even the longest PCBs / blocks / cooler dies that may not be perfectly level. Also, Gelid Extreme, MX4 and MX5 at least don't seem to dry out; I have opened blocks after 5+ years that used some of those, and the paste was pristine / as moist as on day one.


----------



## RichieRich25

J7SC said:


> Gelid Extreme, especially in combination w/ softer pads or thermal putty on VRAM, is very helpful for getting good contact on even the longest PCBs / blocks / cooler dies that may not be perfectly level. Also, Gelid Extreme, MX4 and MX5 at least don't seem to dry out; I have opened blocks after 5+ years that used some of those, and the paste was pristine / as moist as on day one.


I just used the Gelid Extreme pads this morning on my board. Temps are way better on the VRAM. The room is nice and cool and VRAM temps are 53C, down from the 62C it was at yesterday


----------



## LtMatt

Azazil1190 has the best 6900 XTXH sample I’ve seen so far.


----------



## RichieRich25

LtMatt said:


> Azazil1190 has the best 6900 XTXH sample I’ve seen so far.


I swear my friend has the best XT sample. He only runs 330 PL at 2600 min / 2700 max and puts up a 23700 in Timespy. For me to get the same score I need 370 PL at 2635/2735


----------



## Azazil1190

LtMatt said:


> Azazil1190 has the best 6900 XTXH sample I’ve seen so far.


We are close enough


----------



## J7SC

Azazil1190 said:


> We are close enough


...is that the 'royal we'  ?
Anyway, I do appreciate that you have one of if not the best XTXH cards embedded in a really gorgeous overall setup, rather than in some hapless Dell accounting computer doing life insurance actuaries in upper left Easter Island...


----------



## Azazil1190

J7SC said:


> ...is that the 'royal we'  ?
> Anyway, I do appreciate that you have one of if not the best XTXH cards embedded in a really gorgeous overall setup, rather than in some hapless Dell accounting computer doing life insurance actuaries in upper left Easter Island...


Royal we, yeap! 
We all have good samples  and superb setups; the 6900 XT is an amazing, powerful card. That's all


----------



## SpajdrEX

Hello guys, a little late to the party. Got a new PowerColor 6900 XT Red Devil Ultimate with a 90.0% ASIC.
I can see you guys talking about getting a TimeSpy score of 23400 with a PL of just 330W.
Does that mean 330W set in MPT, without touching the PL in Radeon Software?
Any tips on what to change? The VRAM can handle 2162MHz set / 2150MHz real (fast timings).
With these settings I can reach 22K in TimeSpy for now.


----------



## RichieRich25

SpajdrEX said:


> Hello guys, a little late to the party. Got a new PowerColor 6900 XT Red Devil Ultimate with a 90.0% ASIC.
> I can see you guys talking about getting a TimeSpy score of 23400 with a PL of just 330W.
> Does that mean 330W set in MPT, without touching the PL in Radeon Software?
> Any tips on what to change? The VRAM can handle 2162MHz set / 2150MHz real (fast timings).
> With these settings I can reach 22K in TimeSpy for now.
> 
> View attachment 2563805
> View attachment 2563806


That's 330 PL plus the extra 15% in AMD software. Max clock in AMD software 2700, min 2600
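For anyone puzzling over the '330 PL plus 15%' shorthand, here's the math as a sketch (Python; this assumes the Radeon Software slider stacks multiplicatively on top of the MPT limit, and the helper name is mine):

```python
def effective_pl(mpt_watts: float, slider_pct: float) -> float:
    # Board power target after the Radeon Software power slider,
    # assuming the slider applies on top of the MPT limit.
    return mpt_watts * (1 + slider_pct / 100)

print(f"{effective_pl(330, 15):.1f} W")   # 330 W + 15% -> 379.5 W
```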


----------



## SpajdrEX

Ok, got 23600  Needed to bump the core voltage to 1.125V though.
Lowest clock during the benchmark was 2537MHz.
Forgot to mention I disabled all DS features.


----------



## RichieRich25

SpajdrEX said:


> Ok, got 23600  Needed to bump the core voltage to 1.125V though.
> Lowest clock during the benchmark was 2537MHz.
> Forgot to mention I disabled all DS features.
> View attachment 2563809
> 
> 
> View attachment 2563808
> 
> 
> 
> View attachment 2563807


Depending on temps, you can bump the PL up more. I have mine at 350 and adjust the slider in AMD software as needed.


----------



## Azazil1190


SpajdrEX said:


> Ok, got 23600  Needed to bump the core voltage to 1.125V though.
> Lowest clock during the benchmark was 2537MHz.
> Forgot to mention I disabled all DS features.
> View attachment 2563809
> 
> 
> View attachment 2563808
> 
> 
> 
> View attachment 2563807


Try 450W in MPT if you're OK on temps.
Also, I saw that something isn't right with your voltage (1125?)


----------



## RichieRich25

Azazil1190 said:


> Try 450W in MPT if you're OK on temps.
> Also, I saw that something isn't right with your voltage (1125?)


Is 450 okay air-cooled? Or is he watercooled? I have never tried more than 385; anything more and my hotspot would be 100+


----------



## Azazil1190

RichieRich25 said:


> Is 450 okay air-cooled? Or is he watercooled? I have never tried more than 385; anything more and my hotspot would be 100+


My bad, forgot to mention this.
If he's on air it's a risk, but if he wants he can give it a short shot to check the hotspot.
On water it's a must


----------



## SpajdrEX

These settings keep the hotspot under 90C; anything above makes the hotspot around 95-97C.
I keep 1.125V as the maximum for these clocks, because if I leave it on auto (1.200V), HWiNFO64 shows it drawing 1.157V, which is more than I need.


----------



## LtMatt

I've had the chance to play around with a few 6950 XT Toxics, here is my summary.

They are all lower asic quality than the XTXHs I've had. Core overclocking has not been great, most can barely do 2725Mhz set as max frequency at stock voltage.

However, even at only that speed they still outperformed the Toxic EE due to the memory frequency, coming in around 5% faster despite being 100-125Mhz slower on the core at stock voltage.

I was quite surprised that the memory alone with core clocks matched was adding 6-7% extra performance, at least in the scenario I tested (Days Gone 4K max settings).

One of my samples appears to be as good as or slightly better than my Toxic EE, that one may be a keeper. Still fine tuning clocks but looks like 2850Mhz set in Radeon Software is stable at stock voltage, maybe even 2875Mhz. This sample was miles ahead of the others I tried.


----------



## Azazil1190

LtMatt said:


> I've had the chance to play around with a few 6950 XT Toxics, here is my summary.
> 
> They are all lower asic quality than the XTXHs I've had. Core overclocking has not been great, most can barely do 2725Mhz set as max frequency at stock voltage.
> 
> However, even at only that speed they still outperformed the Toxic EE due to the memory frequency, coming in around 5% faster despite being 100-125Mhz slower on the core at stock voltage.
> 
> I was quite surprised that the memory alone with core clocks matched was adding 6-7% extra performance, at least in the scenario I tested (Days Gone 4K max settings).
> 
> One of my samples appears to be as good as or slightly better than my Toxic EE, that one may be a keeper. Still fine tuning clocks but looks like 2850Mhz set in Radeon Software is stable at stock voltage, maybe even 2875Mhz. This sample was miles ahead of the others I tried.


Nice mini review Matt!
So do you think it's worth moving from the 6900 to the 6950 for 4K, until the 7xxx series?


----------



## Kiwisaver53

I have a question: I just bought the Red Devil 6900 XT. I'm currently running an 11700F CPU, 2 SATA drives at 7200 RPM, a 500GB SSD, four fans and a DVD drive. No LEDs. Is my 80+ Gold, 850-watt PSU enough?


----------



## rodac

LtMatt said:


> I've had the chance to play around with a few 6950 XT Toxics, here is my summary.
> 
> They are all lower asic quality than the XTXHs I've had. Core overclocking has not been great, most can barely do 2725Mhz set as max frequency at stock voltage.
> 
> However, even at only that speed they still outperformed the Toxic EE due to the memory frequency, coming in around 5% faster despite being 100-125Mhz slower on the core at stock voltage.
> 
> I was quite surprised that the memory alone with core clocks matched was adding 6-7% extra performance, at least in the scenario I tested (Days Gone 4K max settings).
> 
> One of my samples appears to be as good as or slightly better than my Toxic EE, that one may be a keeper. Still fine tuning clocks but looks like 2850Mhz set in Radeon Software is stable at stock voltage, maybe even 2875Mhz. This sample was miles ahead of the others I tried.


Great to hear this. I can only dream of being able to try several copies like you did; this really shows the silicon lottery at work. I did not expect any 6950 to match your Toxic EE, knowing the top-end overclock you achieved.


----------



## Azazil1190

Kiwisaver53 said:


> I have a question: I just bought the Red Devil 6900 XT. I'm currently running an 11700F CPU, 2 SATA drives at 7200 RPM, a 500GB SSD, four fans and a DVD drive. No LEDs. Is my 80+ Gold, 850-watt PSU enough?


You are fine
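For a rough sanity check, here's a back-of-envelope power budget (Python; the per-part wattages are ballpark assumptions on my part, not measurements):

```python
# Rough peak-draw estimate -- wattages below are ballpark assumptions.
budget = {
    "RX 6900 XT (stock board power)": 300,
    "i7-11700F (peak)": 200,
    "Motherboard + RAM": 50,
    "2x HDD + SSD + DVD drive": 30,
    "Fans": 10,
}
total = sum(budget.values())
print(f"Estimated peak draw: {total} W, headroom on 850 W: {850 - total} W")
```

Even with a healthy overclock pushing the GPU toward 400W, an 850W Gold unit still has comfortable margin.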


----------



## LtMatt

Azazil1190 said:


> Nice mini review Matt!
> So do you think it's worth moving from the 6900 to the 6950 for 4K, until the 7xxx series?


No, not really. I wanted that extra 5-7% though. 🤭


----------



## adamCfw

Broke 24k with a 6900XT Toxic XTXH. PL 400W

I scored 21 435 in Time Spy
AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 11
www.3dmark.com


----------



## RichieRich25

So I noticed that my scores are better when I limit voltage. I was using the temp-dependent setting at 1200, but I decided to turn that off and set the max voltage to 1.145. At 2600 min / 2700 max my scores are better than at 2625/2725 with 1.2 volts.


----------



## Counterassy14

Hi,

does anyone here have a 6950 XT Liquid Devil, or maybe even 6950s in general?
There is so little information on those, and I wanna see how they perform (3DMark, temps, hotspot) and what their ASIC scores are.
I got one fairly cheap (arriving in a few days), and now I wanna see how it compares to others.
My current 6900 XT reference only has an ASIC score of 80.1% and clocks at 2650-2750MHz at 1.2v, and I hope to blow at least that one out of the water.

Very interested to see some scores and settings, or places where I could find them.


----------



## supergt99

RichieRich25 said:


> So I noticed that my scores are better when I limit voltage. I was using the temp-dependent setting at 1200, but I decided to turn that off and set the max voltage to 1.145. At 2600 min / 2700 max my scores are better than at 2625/2725 with 1.2 volts.


same results for me: 1.125v with MPT, clocks 2600/2700 max, 23600 Timespy, and it runs cooler too. Hotspot in the mid 70s


----------



## RichieRich25

supergt99 said:


> same results for me: 1.125v with MPT, clocks 2600/2700 max, 23600 Timespy, and it runs cooler too. Hotspot in the mid 70s


Yea, my Timespy is 23577 at 1.145 with 2600/2700, vs 23300 at 1.2 with 2625/2725. If I go to 1.135 I crash on the 2nd test. But the hotspot doesn't go past 80 now


----------



## J7SC

LtMatt said:


> I've had the chance to play around with a few 6950 XT Toxics, here is my summary.
> 
> They are all lower asic quality than the XTXHs I've had. Core overclocking has not been great, most can barely do 2725Mhz set as max frequency at stock voltage.
> 
> However, even at only that speed they still outperformed the Toxic EE due to the memory frequency, coming in around 5% faster despite being 100-125Mhz slower on the core at stock voltage.
> 
> I was quite surprised that the memory alone with core clocks matched was adding 6-7% extra performance, at least in the scenario I tested (Days Gone 4K max settings).
> 
> One of my samples appears to be as good as or slightly better than my Toxic EE, that one may be a keeper. Still fine tuning clocks but looks like 2850Mhz set in Radeon Software is stable at stock voltage, maybe even 2875Mhz. This sample was miles ahead of the others I tried.


Very interesting comparison! From my POV, it's really too close to the next-gen GPU releases from AMD and NVidia for me to switch anything up now... my regular 6900XT can crack TS_Gr 24.1k w/ water-cooling and MPT, so it will have to do until I can see what next gen brings.


Spoiler


----------



## RichieRich25

Scores and temps are so much better by limiting voltage. It seems 1145 is my sweet spot, since I can keep going up in clock speeds without issues. This run was at 2610/2710


----------



## RichieRich25

RichieRich25 said:


> View attachment 2564369
> 
> 
> Scores and temps are so much better by limiting voltage. It seems 1145 is my sweet spot, since I can keep going up in clock speeds without issues. This run was at 2610/2710
> I also just got an evaporative cooler (hopefully it doesn't ruin my PC, but I doubt it will since it's far away and I have a dehumidifier that catches a lot of the moisture). It finally brings my room down to 70F from 77F, so temps are great as well


----------



## SpajdrEX

@RichieRich25 Hi mate, can you show all your MPT settings?


----------



## RichieRich25

SpajdrEX said:


> @RichieRich25 Hi mate, can you show all your MPT settings?












At work, so I did my best to remotely access my computer and use the Snipping Tool to grab a screenshot of everything. These are the only values I changed. This is a 6900 XT Red Devil; I literally got it on launch day.


----------



## Azazil1190

RichieRich25 said:


> View attachment 2564400
> 
> 
> At work, so I did my best to remotely access my computer and use the Snipping Tool to grab a screenshot of everything. These are the only values I changed. This is a 6900 XT Red Devil; I literally got it on launch day.


Nice settings 👍
What is your resolution?
4k?


----------



## RichieRich25

Azazil1190 said:


> Nice settings 👍
> What is your resolution?
> 4k?


Well, because of all the LG OLED talk I'm hooked up to the LG now. Timespy Extreme GPU score 11472 with the same settings posted. For Father's Day/my bday I was going to get the Alienware OLED, but I desperately needed a car so I got a car instead. But I might stay connected to the LG because I'm enjoying the experience a lot


----------



## Azazil1190

I'm trying your settings now; the only difference is the clocks, 2700/2800.
I'm testing The Ascent at 4K.
For me it's one of the most demanding games at 4K.


----------



## Azazil1190

Seems stable so far. I will keep them.

The TS above was at 2675/2775, because at 2700/2800 1.143v it crashed


----------



## Azazil1190

And one last TS Extreme with the same settings, but the watts are too high. Something isn't right in MPT


----------



## RichieRich25

Azazil1190 said:


> And one last TS Extreme with the same settings, but the watts are too high. Something isn't right in MPT
> View attachment 2564415


I think it's Timespy. Whenever I do Timespy Extreme I see my wattage hit 419, but it only happens in Timespy Extreme, never in games. In most games my highest wattage is like 380.


----------



## Grindcore77

I scored 22 019 in Time Spy
Intel Core i9-9900KF Processor, AMD Radeon RX 6900 XT x 1, 65536 MB, 64-bit Windows 11
www.3dmark.com





Best I can manage so far with my Gigabyte 6900 XT Xtreme Waterforce...
Can maybe get higher GPU clocks if I give it some more voltage... I'm at 1.313 at the moment.

Memory OCing has been more of a problem... it can go higher than 2150, but performance suffers 😑


----------



## LtMatt

J7SC said:


> Very interesting comparison! From my POV, it's really too close to the next-gen GPU releases from AMD and NVidia for me to switch anything up now... my regular 6900XT can crack TS_Gr 24.1k w/ water-cooling and MPT, so it will have to do until I can see what next gen brings.
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2564347


I decided to actually keep the worst 6950 XT Toxic sample I have, purely because the temperatures on this sample are the lowest. At most there is 1-2 FPS difference between the best and worst samples. Weighing it all up, I'd rather have the 10C+ lower temps. I must have got perfect contact on the mount with the worst sample.

Here are some results from a Timespy run. These runs use the maximum stable clocks achievable at stock voltage and are stable through 20 stress-test runs of Firestrike Ultra and Timespy. It is definitely one of the lowest-clocking Navi21s I've ever had. The max it can run at 1.2v is 2648/2748MHz, so it's barely scraping 2700MHz locked under load.

AMD Radeon RX 6950 XT video card benchmark result - AMD Ryzen 7 5800X3D,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)

Here's another run but with a higher CPU score and slightly slower memory overclock so a lower GPU score.
AMD Radeon RX 6950 XT video card benchmark result - AMD Ryzen 7 5800X3D,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)

Now I've come to terms with its capabilities (or lack of ) just gonna enjoy some games and some super low temps with the rad fans running at low speed, see videos below. 
5800X3D Benchmark GTA V 6950 XT Toxic 1440P Ultra Settings - YouTube
5800X3D Benchmark Warzone Rebirth 6950 XT Toxic 2160P Competitive Settings - YouTube


----------



## Azazil1190

LtMatt said:


> I decided to actually keep the worst 6950 XT Toxic sample I have, purely because the temperatures on this sample are the lowest. At most there is 1-2 FPS difference between the best and worst samples. Weighing it all up, I'd rather have the 10C+ lower temps. I must have got perfect contact on the mount with the worst sample.
> 
> Here are some results from a Timespy run. These runs use the maximum stable clocks achievable at stock voltage and are stable through 20 stress-test runs of Firestrike Ultra and Timespy. It is definitely one of the lowest-clocking Navi21s I've ever had. The max it can run at 1.2v is 2648/2748MHz, so it's barely scraping 2700MHz locked under load.
> 
> AMD Radeon RX 6950 XT video card benchmark result - AMD Ryzen 7 5800X3D,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> 
> Here's another run but with a higher CPU score and slightly slower memory overclock so a lower GPU score.
> AMD Radeon RX 6950 XT video card benchmark result - AMD Ryzen 7 5800X3D,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> 
> Now I've come to terms with its capabilities (or lack of ) just gonna enjoy some games and some super low temps with the rad fans running at low speed, see videos below.
> 5800X3D Benchmark GTA V 6950 XT Toxic 1440P Ultra Settings - YouTube
> 5800X3D Benchmark Warzone Rebirth 6950 XT Toxic 2160P Competitive Settings - YouTube


The point being that a good 6900 sample, overclocked, is about the same as an average 6950?


----------



## LtMatt

Azazil1190 said:


> The point being that a good 6900 sample, overclocked, is about the same as an average 6950?


There's not a lot in it, no.


----------



## Azazil1190

Matt, do you have AC Valhalla?

If yes, can you do a run (max OC on the 6950) of the in-game benchmark at 4K Ultra? I'm curious to see the difference vs the 6900


----------



## RichieRich25

LtMatt said:


> I decided to keep the worse-performing of my two 6950 XT Toxic samples, purely because its temperatures are the lowest. At most there is a 1-2 FPS difference between the best and worst sample. Weighing it all up, I'd rather have the 10 °C+ lower temps. I must have got perfect contact on the mount with the worst sample.
> 
> Here are some results from a Time Spy run. These runs use the maximum stable clocks achievable at stock voltage, stable over 20 stress-test runs of Fire Strike Ultra and Time Spy. It is definitely one of the lowest-clocking Navi 21s I've ever had. The max it can run at 1.2 V is 2648/2748 MHz, so it barely scrapes a locked 2700 MHz under load.
> 
> AMD Radeon RX 6950 XT video card benchmark result - AMD Ryzen 7 5800X3D,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> 
> Here's another run but with a higher CPU score and slightly slower memory overclock so a lower GPU score.
> AMD Radeon RX 6950 XT video card benchmark result - AMD Ryzen 7 5800X3D,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> 
> Now that I've come to terms with its capabilities (or lack thereof), I'm just gonna enjoy some games and some super-low temps with the rad fans running at low speed; see the videos below.
> 5800X3D Benchmark GTA V 6950 XT Toxic 1440P Ultra Settings - YouTube
> 5800X3D Benchmark Warzone Rebirth 6950 XT Toxic 2160P Competitive Settings - YouTube


Very high FPS compared to mine. For reference, my 6900 XT at 2550/2715 with a 365 W power limit gets around 150-165 FPS average on Rebirth Island, with occasional spikes to 200 FPS. My card is capable of 2775 with no crashes in Warzone, but at 2715 I can pretty much hold my desired 144 FPS, so that's where I leave it.


----------



## deadfelllow

Guys, what do you think about using this instead of paste?



Amazon.com.tr



I'm having big contact issues on my 6900 XT. Would this help? (It's 64 W/mK and also electrically conductive.)


----------



## LtMatt

Azazil1190 said:


> Matt, do you have AC Valhalla?
> 
> If so, can you run the in-game benchmark (max OC on the 6950) at 4K ultra? I'm curious to see the difference vs the 6900.


I do, but don't have it downloaded atm. If I get it downloaded I'll share some results.


RichieRich25 said:


> Very high FPS compared to mine. For reference, my 6900 XT at 2550/2715 with a 365 W power limit gets around 150-165 FPS average on Rebirth Island, with occasional spikes to 200 FPS. My card is capable of 2775 with no crashes in Warzone, but at 2715 I can pretty much hold my desired 144 FPS, so that's where I leave it.


What resolution and what CPU are you using?

If you are not using a 5800X3D and are running 1080P then you will be bottlenecking the 6900 XT in Warzone.


----------



## Azazil1190

LtMatt said:


> I do, but don't have it downloaded atm. If I get it downloaded I'll share some results.


Thanks! Appreciate it!


----------



## RichieRich25

LtMatt said:


> I do, but don't have it downloaded atm. If I get it downloaded I'll share some results.
> 
> What resolution and what CPU are you using?
> 
> If you are not using a 5800X3D and are running 1080P then you will be bottlenecking the 6900 XT in Warzone.


This is with a 5950X at 2160p, max graphics settings, with ray tracing off.


----------



## earphonelnwshop

Hey guys, I'm trying to learn MPT.

6900 XT reference version (XTX) + water block + MPT settings.

What do you think?


----------



## Azazil1190

earphonelnwshop said:


> Hey guys, I'm trying to learn MPT.
> 
> 6900 XT reference version (XTX) + water block + MPT settings.
> 
> What do you think?
> 
> View attachment 2564490
> View attachment 2564491


Looking good.
Can you post your MPT settings?


----------



## Kaltenbrunner

If the 4080 wasn't right around the corner, I'd get a used 6900 XT this summer, for sure. I didn't realize it beats the 3080 Ti sometimes too. I'm just a gamer, and RT just robs my FPS anyway.

But if the 4080 launches near what a 3080 Ti currently costs, I'd probably get it. I still own a pile of games that I haven't wanted to try, because I'd have to turn down the settings to get 155 FPS.


----------



## RichieRich25

Kaltenbrunner said:


> If the 4080 wasn't right around the corner, I'd get a used 6900 XT this summer, for sure. I didn't realize it beats the 3080 Ti sometimes too. I'm just a gamer, and RT just robs my FPS anyway.
> 
> But if the 4080 launches near what a 3080 Ti currently costs, I'd probably get it. I still own a pile of games that I haven't wanted to try, because I'd have to turn down the settings to get 155 FPS.


In my honest opinion the 6900 XT is the best competitive GPU. In games where FPS matters I notice the 6900 XT comes out on top, especially at 1440p. At 4K the 3090/Ti is king, but here in NY the 3090 costs $1699 vs the 6900 XT at $899; the 3080 Ti costs $999. With MorePowerTool, none of my friends with a 3090 (air cooled vs air cooled) can match my performance unless we turn on ray tracing.


----------



## Kaltenbrunner

I see the 6900 XT doing well in DX12. I haven't tried that lately with a 12th-gen CPU and 6700 XT; I should try BF1 with DX12 now. It was crap before with the RTX 2070 or 6700 XT.

But I play a lot of 'older' games, from the 2000s and early 2010s, so DX9/10/11. What's the 6900 XT really like in them vs the 3080/Ti? No one really does old-game reviews with new tech. Maybe I should start a YouTube channel for that.


----------



## RichieRich25

Kaltenbrunner said:


> I see the 6900 XT doing well in DX12. I haven't tried that lately with a 12th-gen CPU and 6700 XT; I should try BF1 with DX12 now. It was crap before with the RTX 2070 or 6700 XT.
> 
> But I play a lot of 'older' games, from the 2000s and early 2010s, so DX9/10/11. What's the 6900 XT really like in them vs the 3080/Ti? No one really does old-game reviews with new tech. Maybe I should start a YouTube channel for that.


I know there are some improvements with the new drivers. I used to play Destiny 2, and after the preview drivers / 22.5.1 the performance improved a bit, but I'm not sure how the 6000 series performs in older games.


----------



## earphonelnwshop

Azazil1190 said:


> Looking good.
> Can you post your mpt setting's?


Sure. My MPT settings and Time Spy scores are below.

I also tested stability in Battlefield V at 4K ultra;
everything is OK.








6900xt reference version( xtx ) + water block + mpt setting


----------



## Azazil1190

earphonelnwshop said:


> Sure. My MPT settings and Time Spy scores are below.
> 
> I also tested stability in Battlefield V at 4K ultra;
> everything is OK.
> 
> 
> 
> 
> 
> 
> 
> 
> 6900xt reference version( xtx ) + water block + mpt setting
> View attachment 2564520
> View attachment 2564521
> View attachment 2564522
> View attachment 2564523


Nice sample!


----------



## deadfelllow

Azazil1190 said:


> Nice sample!


Hey,

Can you reach 3000 MHz+ in low-GPU-usage games? (BF5, God of War, etc.)


----------



## Azazil1190

deadfelllow said:


> Hey,
> 
> Can you reach 3000 MHz+ in low-GPU-usage games? (BF5, God of War, etc.)


Hi mate!
I need to test it (GoW: Gears of War or God of War?).
Btw, I don't have BF5; I don't like it.
For low-GPU-usage games I trust my second system more: a 10900K OC'd to 5.2 GHz with 4500 MHz C17 memory at very tight timings.


----------



## Azazil1190

Here is an example from Valhalla,
at 1080p and 1440p.
With the 5950X I can't reach those FPS even with a higher GPU OC.
Toxic EE and 10900K.
I'm gonna try again on the 5950X.


----------



## RichieRich25

This is with the same settings I use in Time Spy benchmarks. I can go a lot higher on clocks, but I wanted to use my Time Spy settings to compare.
Quick edit and comment: it seems MPT does nothing for this benchmark. GPU power never goes past 305 W no matter how hard I push it. Going to try a 4K bench now.


----------



## RichieRich25

4K benchmark, same 365 W PL at 2610/2710 core settings, 2150 memory clock. Wattage never went past 340.


----------



## Azazil1190

Try Cyberpunk at 4K, or The Ascent at ultra, and you will pull close to 400 W.


----------



## RichieRich25

I have to redownload Cyberpunk, hold on.


----------



## RichieRich25

It seems low, and my PL won't go past 319. I wonder if limiting the voltage is the issue; I'll have to change MPT.


----------



## Azazil1190

Yes, you are low. Just a sec to find an old run.
















1440p and 4K ultra, RT off of course.
Max OC at stock, without MorePowerTool.


----------



## earphonelnwshop

I tried an OC of 2900/2800 MHz on my 6900 XT reference and tested in Battlefield V 4K ultra again.
Looks good.


----------



## RichieRich25

Azazil1190 said:


> Yes, you are low. Just a sec to find an old run.
> View attachment 2564556
> 
> View attachment 2564555
> 
> 1440p and 4K ultra, RT off of course.
> Max OC at stock, without MorePowerTool.


I increased voltage back to 1175, but power won't go past 330 W. It's almost like I'm at stock. The avg clock speed is 2660 when I check. I might have to increase the core clocks, I guess.


----------



## 8800GT

earphonelnwshop said:


> I tried an OC of 2900/2800 MHz on my 6900 XT reference and tested in Battlefield V 4K ultra again.
> Looks good.


That's a fantastic card right there. I can't get past 2750 in games on mine, and even then that's at 1.25v.


----------



## Azazil1190

RichieRich25 said:


> I increased voltage back to 1175, but power won't go past 330 W. It's almost like I'm at stock. The avg clock speed is 2660 when I check. I might have to increase the core clocks, I guess.


I think so, yes; try it.
Higher clocks push the wattage up more.


----------



## Azazil1190

earphonelnwshop said:


> I tried an OC of 2900/2800 MHz on my 6900 XT reference and tested in Battlefield V 4K ultra again.
> Looks good.


Amazing sample for a reference card. It's like a great XTXH 💪
What voltage?

Here is a small video from my Strix LC
at 2900/3000,
recorded with my phone as always.


----------



## RichieRich25

Azazil1190 said:


> I think so, yes; try it.
> Higher clocks push the wattage up more.


No luck, 345 is the max at 2675/2775. It crashes at 2700/2800, but it's hot; turning on the AC now. The score stays the same across clock speeds. Avg at 47 FPS.


----------



## Azazil1190

RichieRich25 said:


> No luck, 345 is the max at 2675/2775. It crashes at 2700/2800, but it's hot; turning on the AC now. The score stays the same across clock speeds. Avg at 47 FPS.


Strange.
Maybe uninstalling and reinstalling the drivers would fix it.
And reset MPT.
I don't know what else to think.

Do you have that issue only in Cyberpunk?


----------



## RichieRich25

Azazil1190 said:


> Strange.
> Maybe uninstalling and reinstalling the drivers would fix it.
> And reset MPT.
> I don't know what else to think.
> 
> Do you have that issue only in Cyberpunk?


Seems like it. Time Spy sees 420 W; Warzone and Vanguard go to 380 W like I have it set. Only Cyberpunk seems very low even though the clock speeds are high. It's like ray tracing is on when it's not. 1440p shows the same behavior, with avg FPS at 105.


----------



## Azazil1190

RichieRich25 said:


> Seems like it. Time Spy sees 420 W; Warzone and Vanguard go to 380 W like I have it set. Only Cyberpunk seems very low even though the clock speeds are high. It's like ray tracing is on when it's not. 1440p shows the same behavior, with avg FPS at 105.


Something is wrong with that specific game then.
Check the graphics options again, and reset the game.


----------



## SpajdrEX

Azazil1190 said:


> Try cyberpunk 4k or the ascent at ultra and you will pull close enough to 400w


Or simply Quake II RTX at 4K maxed settings will pull 400W easily


----------



## alceryes

Hey all.
What's the best closed loop available for the reference 6900 XT? Does it come with a backside heatsink too?


----------



## RichieRich25

Azazil1190 said:


> Something is wrong with that specific game then.
> Check the graphics options again, and reset the game.


I tried everything but still get the same scores and power usage. The only odd thing I see is that Radeon and HWiNFO show FPS in the 200s but Cyberpunk only shows 110s. Clocks average around 2725. Ray tracing low vs ultra nets the same FPS in my testing. Very odd.


----------



## Azazil1190

RichieRich25 said:


> I tried everything but still get the same scores and power usage. The only odd thing I see is that Radeon and HWiNFO show FPS in the 200s but Cyberpunk only shows 110s. Clocks average around 2725. Ray tracing low vs ultra nets the same FPS in my testing. Very odd.
> View attachment 2564588


[email protected] it


----------



## J7SC

alceryes said:


> Hey all.
> What's the best closed loop available for the reference 6900 XT? Does it come with a backside heatsink too?


...Not specific to the reference design, but I tried a Bykski block (for the first time ever) on my Gigabyte 6900 XTX (3x 8-pin) and am really impressed with both the cooling performance and the quality; it's now been over a year in service, please see below. Per the link, the reference block looks almost identical > here

FYI, the Bykski block came with a custom backplate, but I of course modded it with thermal putty and an extra heatsink.









6900XT Bykski block on the bottom...


----------



## Anhphe93

I need help! My system has a reboot problem. What should I watch out for when changing MPT settings on a 6900 XT Sapphire XTX?


----------



## RichieRich25

Anhphe93 said:


> I need help! My system has a reboot problem. What should I watch out for when changing MPT settings on a 6900 XT Sapphire XTX?


Are you rebooting under gpu load? What are your psu specs and what are temps looking like?


----------



## Anhphe93

RichieRich25 said:


> Are you rebooting under gpu load? What are your psu specs and what are temps looking like?


Before that I had no problems with my custom MPT profile, but in the last 2 days the system crashes and shuts down when I play games. I was looking for the cause and it seems to be the FT2 setting, though I only use FT1. No problem occurs now after disabling FT2.
I use a 1000 W Gold Gigabyte PSU, and a 6900 XT Sapphire Nitro+ SE with air cooling and liquid metal on it.


----------



## RichieRich25

Anhphe93 said:


> Before that I had no problems with my custom MPT profile, but in the last 2 days the system crashes and shuts down when I play games. I was looking for the cause and it seems to be the FT2 setting, though I only use FT1. No problem occurs now after disabling FT2.
> I use a 1000 W Gold Gigabyte PSU, and a 6900 XT Sapphire Nitro+ SE with air cooling and liquid metal on it.


Ah, I see. I was unable to use Fast Timings 2 as well. Even if I lowered the memory clock to its lowest value I would still get some artifacts and then a crash. I have only seen it work on XTXH cards.


----------



## RichieRich25

Azazil1190 said:


> [email protected] it


quick question what AMD driver do you use right now?


----------



## Azazil1190

RichieRich25 said:


> quick question what AMD driver do you use right now?


22.5.2.
And I think they improved my SOTR bench too.
Fresh 1080p/1440p/4K runs.
For better results I'd need a 12900K or 5800X3D.


----------



## alceryes

RichieRich25 said:


> quick question what AMD driver do you use right now?


22.3.1


----------



## Anhphe93

RichieRich25 said:


> Ah, I see. I was unable to use Fast Timings 2 as well. Even if I lowered the memory clock to its lowest value I would still get some artifacts and then a crash. I have only seen it work on XTXH cards.


The problem persists, so maybe that wasn't the cause. I recorded the phenomenon before the system shut down, but I don't know how to upload it here.


----------



## earphonelnwshop

Azazil1190 said:


> Amazing sample for a reference card. It's like a great XTXH 💪
> What voltage?
> 
> Here is a small video from my Strix LC
> at 2900/3000,
> recorded with my phone as always.


I use 1.225 V at 2900/2800 right now.
Once I've modified the cooling system I will try 1.25 V and 1.275 V at 2900/3000.


----------



## Kaltenbrunner

The 6900 XT is a real beast for high-FPS 1440p. I hope the price on it keeps dropping.

With a 12700K and 6700 XT, fast RAM, and an NVMe drive, BF1 runs great maxed out and locked to 155 FPS. So I tried DX12 again, and I can't understand it; it's still a stutter-fest. It starts like it's just loading stuff and gets a bit better, but then it just stutters and lags every few seconds, or as you move around, etc.

IDK if there are some control-panel settings that would fix it, but I read that DX12 has laggy response times anyway, so I'm quite happy with DX11 in that game.

BF5 still runs too slow for my liking, and IDK if it's just my low/medium settings, but sometimes it looks and feels like a poor-quality 2002-era game, at least in some tanks and with zoomed LMGs anyway.



FC4 and SWBF2 can't run maxed and smooth at 155 FPS; just more games that need a bit more power yet. The 6900 XT is looking like a mighty good option.

I've spent more time thinking about upgrades than playing games this spring.


----------



## earphonelnwshop

Set up MPT again + modified the cooling system.
6900 XT Reference (XTX) OC'd to 2963/2863 MHz at 1.25 V,
tested in Battlefield V 4K ultra. Looks good.


----------



## alceryes

Kaltenbrunner said:


> So I tried DX12 again, and I can't understand it; it's still a stutter-fest. It starts like it's just loading stuff and gets a bit better, but then it just stutters and lags every few seconds, or as you move around, etc.


Are you absolutely sure you're not getting any drops in CPU/GPU frequencies?
Have you run HWiNFO64 in the background (sensors only, logging on) while you see these stutters and then gone through the log file?
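If going through the log by hand is tedious, here's a minimal sketch for scanning it programmatically. It assumes HWiNFO's CSV sensor log export; the column name `"GPU Clock [MHz]"` is a guess and should be matched to the actual header in your log file:

```python
import csv

def find_clock_drops(log_path, column="GPU Clock [MHz]", drop_mhz=300.0):
    """Yield (row_index, previous, current) wherever the clock falls
    by more than drop_mhz between consecutive samples."""
    prev = None
    with open(log_path, newline="", encoding="utf-8", errors="replace") as f:
        for i, row in enumerate(csv.DictReader(f)):
            try:
                clock = float(row[column])
            except (KeyError, ValueError):
                continue  # skip rows without a parsable clock value
            if prev is not None and prev - clock > drop_mhz:
                yield i, prev, clock
            prev = clock
```

Any rows it flags give you timestamps to cross-check against when the stutters happened in-game.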


----------



## RichieRich25

Kaltenbrunner said:


> The 6900 XT is a real beast for high-FPS 1440p. I hope the price on it keeps dropping.
> 
> With a 12700K and 6700 XT, fast RAM, and an NVMe drive, BF1 runs great maxed out and locked to 155 FPS. So I tried DX12 again, and I can't understand it; it's still a stutter-fest. It starts like it's just loading stuff and gets a bit better, but then it just stutters and lags every few seconds, or as you move around, etc.
> 
> IDK if there are some control-panel settings that would fix it, but I read that DX12 has laggy response times anyway, so I'm quite happy with DX11 in that game.
> 
> BF5 still runs too slow for my liking, and IDK if it's just my low/medium settings, but sometimes it looks and feels like a poor-quality 2002-era game, at least in some tanks and with zoomed LMGs anyway.
> 
> FC4 and SWBF2 can't run maxed and smooth at 155 FPS; just more games that need a bit more power yet. The 6900 XT is looking like a mighty good option.
> 
> I've spent more time thinking about upgrades than playing games this spring.


Have you tried setting your minimum and maximum clock speeds 100 MHz apart? For example, min clock 2500 and max 2600. I know this stopped stuttering in CoD and Fortnite for me.


----------



## Anhphe93

RichieRich25 said:


> Have you tried setting your minimum and maximum clock speeds 100 MHz apart? For example, min clock 2500 and max 2600. I know this stopped stuttering in CoD and Fortnite for me.





http://imgur.com/a/y8spE9L

This is a video I recorded of the error taking place. Do you know what the error is?


----------



## supergt99

The flashing in-game? VRAM overclocked too much?


Anhphe93 said:


> http://imgur.com/a/y8spE9L
> 
> This is a video I recorded of the error taking place. Do you know what the error is?


----------



## RichieRich25

supergt99 said:


> The flashing in-game? VRAM overclocked too much?


Was just about to say this. Memory clock too high; it's similar to artifacting. Happened to me in New World on my son's 6600 XT; the VRAM was too high.


----------



## LtMatt

The new Warzone update is out; the new map runs disgustingly fast on RDNA2, man. 
5800X3D Benchmark Warzone Fortune's Keep RX 6950 XT Toxic 1080P Low Settings - YouTube


----------



## Azazil1190

LtMatt said:


> The new Warzone update is out; the new map runs disgustingly fast on RDNA2, man.
> 5800X3D Benchmark Warzone Fortune's Keep RX 6950 XT Toxic 1080P Low Settings - YouTube


My son is playing now and I'm watching him.


----------



## Anhphe93

supergt99 said:


> The flashing in-game? VRAM overclocked too much?





RichieRich25 said:


> Was just about to say this. Memory clock too high; it's similar to artifacting. Happened to me in New World on my son's 6600 XT; the VRAM was too high.


I had been stable for 1 year with FT1 at 2150. Has it degraded? I recently followed a video about putting copper pads on the VRAM; is that the cause?
And now I have a shutdown problem when overclocking, with no more of that flicker. If I revert the WattMan settings to default I have no problem; if I overclock, after just 1 minute the system shuts down with a WHEA 18 error in Event Viewer. But I'm sure it's not the CPU's Curve Optimizer. If I don't overclock the card in WattMan the problem goes away. Even if I revert to the original BIOS, nothing changes.


----------



## RichieRich25

Anhphe93 said:


> I had been stable for 1 year with FT1 at 2150. Has it degraded? I recently followed a video about putting copper pads on the VRAM; is that the cause?
> And now I have a shutdown problem when overclocking, with no more of that flicker. If I revert the WattMan settings to default I have no problem; if I overclock, after just 1 minute the system shuts down with a WHEA 18 error in Event Viewer. But I'm sure it's not the CPU's Curve Optimizer. If I don't overclock the card in WattMan the problem goes away. Even if I revert to the original BIOS, nothing changes.


Is that happening in every game or just one particular game? Shutdowns usually happen from temps or some type of power issue


----------



## supergt99

Sounds like the GPU or VRAM is overheating.


----------



## Anhphe93

RichieRich25 said:


> Is that happening in every game or just one particular game? Shutdowns usually happen from temps or some type of power issue


I didn't have any problems during 1 year of use before MPT, and this only happened after 1 week of normal use with MPT. The sudden power-off happens in all games. I still don't know the cause; I tried increasing the VRAM voltage to 1400 mV but nothing changed. I don't think the problem is the PSU, since in default mode in WattMan I have no problem.


----------



## J7SC

Anhphe93 said:


> I didn't have any problems during 1 year of use before MPT, and this only happened after 1 week of normal use with MPT. The sudden power-off happens in all games. I still don't know the cause; I tried increasing the VRAM voltage to 1400 mV but nothing changed. I don't think the problem is the PSU, since in default mode in WattMan I have no problem.


I wouldn't rule out PSU / transient spikes completely...






FYI, Igor's Lab did something similar a while back.


----------



## Anhphe93

J7SC said:


> I wouldn't rule out PSU / transient spikes completely...
> 
> 
> 
> 
> 
> 
> FYI, Igor's Lab did something similar a while back.


I will check this issue


----------



## RichieRich25

Anhphe93 said:


> I will check this issue


What is your power limit set to in MPT? If it's not happening without MPT then it's most likely a power issue. Maybe too much power is causing transient spikes and your PSU is tripping on over-current protection. This happens with 3090s and certain power supplies.


----------



## Anhphe93

RichieRich25 said:


> What is your power limit set to in MPT? If it's not happening without MPT then it's most likely a power issue. Maybe too much power is causing transient spikes and your PSU is tripping on over-current protection. This happens with 3090s and certain power supplies.


Maybe my VRAM is degraded; the problem is not caused by the PSU. Ridiculous that it would degrade after 1 year of use. Now I can just leave it at 2000. So depressed; maybe next time I will give up on AMD.


----------



## RichieRich25

Anhphe93 said:


> Maybe my VRAM is degraded; the problem is not caused by the PSU. Ridiculous that it would degrade after 1 year of use. Now I can just leave it at 2000. So depressed; maybe next time I will give up on AMD.


I doubt it's VRAM. VRAM instability would cause a crash, but a full system restart? I don't know. A friend of mine was having restarts and it was temps being too high; we repasted and it never crashed again.


----------



## lestatdk

RichieRich25 said:


> I doubt it's VRAM. VRAM instability would cause a crash, but a full system restart? I don't know. A friend of mine was having restarts and it was temps being too high; we repasted and it never crashed again.


Before I put a WB on my card I had the same problem. Shutdowns due to high temps


----------



## RichieRich25

lestatdk said:


> Before I put a WB on my card I had the same problem. Shutdowns due to high temps


Usually shutdowns are due to temps, or to power spikes that trip the PSU; it cuts out to save your system, so it just reboots.


----------



## GTANY

LtMatt said:


> I decided to keep the worse-performing of my two 6950 XT Toxic samples, purely because its temperatures are the lowest. At most there is a 1-2 FPS difference between the best and worst sample. Weighing it all up, I'd rather have the 10 °C+ lower temps. I must have got perfect contact on the mount with the worst sample.
> 
> Here are some results from a Time Spy run. These runs use the maximum stable clocks achievable at stock voltage, stable over 20 stress-test runs of Fire Strike Ultra and Time Spy. It is definitely one of the lowest-clocking Navi 21s I've ever had. The max it can run at 1.2 V is 2648/2748 MHz, so it barely scrapes a locked 2700 MHz under load.
> 
> AMD Radeon RX 6950 XT video card benchmark result - AMD Ryzen 7 5800X3D,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> 
> Here's another run but with a higher CPU score and slightly slower memory overclock so a lower GPU score.
> AMD Radeon RX 6950 XT video card benchmark result - AMD Ryzen 7 5800X3D,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> 
> Now that I've come to terms with its capabilities (or lack thereof), I'm just gonna enjoy some games and some super-low temps with the rad fans running at low speed; see the videos below.
> 5800X3D Benchmark GTA V 6950 XT Toxic 1440P Ultra Settings - YouTube
> 5800X3D Benchmark Warzone Rebirth 6950 XT Toxic 2160P Competitive Settings - YouTube


To see the difference between a 6900 XT and a 6950 XT, I ran Time Spy with the same GPU frequency, 2700 MHz, and 2150 MHz fast timings on the memory: I obtained a graphics score of 23962 (with a 5950X). Compared to your 24643, my 6900 XT is around 3% slower, because of the memory-frequency delta. I don't think your 5800X3D changes the graphics score, because Time Spy is GPU-limited.
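For reference, that ~3% figure falls straight out of the two graphics scores quoted in the exchange above (a trivial sketch, using only those numbers):

```python
# Percent difference between the two Time Spy graphics scores quoted above.
def percent_gap(lower, higher):
    """How far 'lower' trails 'higher', as a percentage of 'lower'."""
    return (higher - lower) / lower * 100

# 6900 XT (23962) vs 6950 XT (24643), both at a 2700 MHz core clock.
gap = percent_gap(23962, 24643)
print(f"{gap:.1f}%")  # roughly 2.8%, in line with the ~3% estimate
```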


----------



## RichieRich25

GTANY said:


> To see the difference between a 6900 XT and a 6950 XT, I ran Time Spy with the same GPU frequency, 2700 MHz, and 2150 MHz fast timings on the memory: I obtained a graphics score of 23962 (with a 5950X). Compared to your 24643, my 6900 XT is around 3% slower, because of the memory-frequency delta. I don't think your 5800X3D changes the graphics score, because Time Spy is GPU-limited.


That's a really good score for those settings you used. What PL were you running?


----------



## GTANY

500 W. But in TS Graphics Test 2 the power does not exceed 450 W. The card is watercooled.


----------



## Anhphe93

RichieRich25 said:


> I doubt it's VRAM. VRAM instability would cause a crash, but a full system restart? I don't know. A friend of mine was having restarts and it was temps being too high; we repasted and it never crashed again.


I will fix this problem soon, and see if I can get it back


----------



## Bart

Any of you guys notice any difference in OCing when upgrading to Windows 11? I might be losing my old mind, but my 6900XT seems to OC better now, for some strange reason. I recently upgraded to Windows 11, and also replaced a faulty Corsair MP600 NVME SSD that committed seppuku for no apparent reason. Now I'm not half as nuts as you guys, so my ONLY change in MPT is to set the power limit to 375W instead of 272, that's it. Now all of a sudden my GPU's "ceiling" has gone up from 2750mhz to 2800mhz, couldn't do that before, and also does this with the memory at 2150, fast timings enabled. Played Far Cry 6 for hours no sweat, seems stable as heck, which somehow feels like it shouldn't be. Watched the GPU hit 2754mhz on the clock, while pulling only 275W of power in game, haven't seen a crash yet. It even survives Time Spy / Extreme stress tests. I'm wondering if that flaky PCIE4 SSD was somehow borking my OC ability.

TL;DR: has Windows 11 been better for anyone else's OC?


----------



## 99belle99

Bart said:


> Any of you guys notice any difference in OCing when upgrading to Windows 11? I might be losing my old mind, but my 6900XT seems to OC better now, for some strange reason. I recently upgraded to Windows 11, and also replaced a faulty Corsair MP600 NVME SSD that committed seppuku for no apparent reason. Now I'm not half as nuts as you guys, so my ONLY change in MPT is to set the power limit to 375W instead of 272, that's it. Now all of a sudden my GPU's "ceiling" has gone up from 2750mhz to 2800mhz, couldn't do that before, and also does this with the memory at 2150, fast timings enabled. Played Far Cry 6 for hours no sweat, seems stable as heck, which somehow feels like it shouldn't be. Watched the GPU hit 2754mhz on the clock, while pulling only 275W of power in game, haven't seen a crash yet. It even survives Time Spy / Extreme stress tests. I'm wondering if that flaky PCIE4 SSD was somehow borking my OC ability.
> 
> TL;DR: has Windows 11 been better for anyone else's OC?


I don't run many benchmarks anymore, but I did an hour or so of benchmarking about two months ago and could not touch my previous Time Spy high score, not even in the CPU test. I was on Windows 10 previously and have been on Windows 11 since around its launch.

So yes, my experience is that I don't perform as well on W11 as I did on W10. But as I said, I don't run benchmarks anymore to test it fully.


----------



## RichieRich25

Bart said:


> Any of you guys notice any difference in OCing when upgrading to Windows 11? I might be losing my old mind, but my 6900XT seems to OC better now, for some strange reason. I recently upgraded to Windows 11, and also replaced a faulty Corsair MP600 NVME SSD that committed seppuku for no apparent reason. Now I'm not half as nuts as you guys, so my ONLY change in MPT is to set the power limit to 375W instead of 272, that's it. Now all of a sudden my GPU's "ceiling" has gone up from 2750mhz to 2800mhz, couldn't do that before, and also does this with the memory at 2150, fast timings enabled. Played Far Cry 6 for hours no sweat, seems stable as heck, which somehow feels like it shouldn't be. Watched the GPU hit 2754mhz on the clock, while pulling only 275W of power in game, haven't seen a crash yet. It even survives Time Spy / Extreme stress tests. I'm wondering if that flaky PCIE4 SSD was somehow borking my OC ability.
> 
> TLDR: has Windows 11 been better for anyone elses OC?


In a nutshell, Windows 11 and the newer AMD drivers let me overclock the CPU more, but my benchmarks were higher on Windows 10. It's hard to compare actual games against Windows 10, but I will say that on Windows 11 I can push clock speeds higher in games, and HDR is night and day compared to Windows 10.


----------



## Grindcore77

Has anyone here flashed the AMD reference LC BIOS on a Gigabyte Xtreme Waterforce 6900 XT?


----------



## sniperpowa

What's the highest FCLK and SoC voltage you can run? I'm trying to get high scores and don't care about longevity. I'm still learning; the most I can get is 25,500. XFX Zero using the AMD LC BIOS with an EVC2. I know there are some settings I'm missing.


----------



## Grindcore77

FCLK: 2100 min (any more and I get a black screen), 2250 max, boost 2250.

SoC voltage I've had up to 1275 with no issues. GPU voltage I'm at 1.318 V; any more and I get a black screen as well.
With the LC BIOS, could you increase your memory clocks? 2150 is the max for me; any more and FPS takes a hit.


----------



## sniperpowa

Grindcore77 said:


> FCLK: 2100 min (any more and I get a black screen), 2250 max, boost 2250.
> 
> SoC voltage I've had up to 1275 with no issues. GPU voltage I'm at 1.318 V; any more and I get a black screen as well.
> With the LC BIOS, could you increase your memory clocks? 2150 is the max for me; any more and FPS takes a hit.


Yeah, its stock is 2310. I've gone up to 2360 with fast timings and 2430 with normal timings.


----------



## Grindcore77

sniperpowa said:


> Yeah, its stock is 2310. I've gone up to 2360 with fast timings and 2430 with normal timings.


Ahhh sweet... did it change your max GPU clock? Like, make it lower than before the flash? I really want to try the LC BIOS on mine but haven't found anywhere whether someone has done it with the same card. My GPU clocks are pretty good, so I just need a boost in the memory department. I have an external flasher; I just haven't bitten the bullet yet!


----------



## sniperpowa

Grindcore77 said:


> Ahhh sweet...did it change your gpu clock max? Like make it lower than before the flash? I really want to try the lc bios on mine but haven't found anywhere if someone has done it with the same card...my gpu clocks are pretty good so just need a boost in the mem department...I have an external flasher just haven't bitten the bullet yet!


No I just flashed it using Linux.


----------



## sniperpowa

Grindcore77 said:


> Ahhh sweet...did it change your gpu clock max? Like make it lower than before the flash? I really want to try the lc bios on mine but haven't found anywhere if someone has done it with the same card...my gpu clocks are pretty good so just need a boost in the mem department...I have an external flasher just haven't bitten the bullet yet!


It changes some things at stock. The power limit is reduced, but we raise that anyway.


----------



## Grindcore77

sniperpowa said:


> It changes some things at stock. The power limit is reduced, but we raise that anyway.


Yep good ol MPT!


----------



## sniperpowa

Grindcore77 said:


> Yep good ol MPT!


Thanks now I’m over 26k gpu score!


----------



## Grindcore77

Awesome!! I think I'm gonna have to flash!!! I saw a link for a guide in this thread somewhere but can't find it now. I'm thinking I'll give it a go in Linux, and if it goes pear-shaped I'll pull out the card and use the external flasher.


----------



## Justye95

Hi all, is it possible via MPT to remove the block on overclocking the VRAM? I have a reference liquid-cooled 6900 XT and I would like to test VRAM overclocks above 2150 MHz.
If I modify the VRAM parameter via MPT, Adrenalin locks the GPU clock at 500 MHz as soon as I go above 2150 MHz. I know that higher VRAM clocks don't increase performance, but I would like to try it for fun.


----------



## sniperpowa

Justye95 said:


> hi all guys, via MPT is it possible to remove the block for the overclocking of the vram? I have a liquid reference 6900 xt and I would like to test the overclock of the vram over 2150mhz
> if I go to modify the vram parameter via MPT, adrenalin as soon as I go above 2150mhz blocks the GPU clock at 500mhz. I know that higher clocks than the vram does not increase performance, but I would like to try it for fun


I'm not sure about reference cards. It has to be an XTXH chip; you flash the special AMD LC BIOS and your memory comes overclocked. There's a guide on here. Stock is about 2310; I run 2350 with fast timings on an XFX Zero.


----------



## Grindcore77

OK, found it. That guide is on pg. 218, but it doesn't mention using the -f switch either. You don't have to force-flash within Linux? How did you do it? Same as the guide?


----------



## spajdr

Just in case anyone missed it, driver 22.5.2 has been updated at least four times since its original release; the last update was released yesterday ;-).


----------



## sniperpowa

Grindcore77 said:


> Ok found it. That guide is on pg 218....that doesn't mention using the -f code either...you don't have to force flash within Linux? How did you do it? Same as guide?


I installed Ubuntu on a flash drive, put the BIOS and tools on it, booted into Ubuntu, and flashed following the guide.
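For anyone following along, the flow described above can be sketched roughly as below. This is a hypothetical outline only: the tool name (amdvbflash), its flags, the adapter index, and the ROM file names are assumptions based on the commonly used AMD flashing utility, so double-check everything against the guide on pg. 218 before touching your card.

```shell
# Rough sketch of the Linux vBIOS flashing flow (assumed amdvbflash syntax).
# Run from the folder on the flash drive holding the tool and the BIOS ROM.
sudo ./amdvbflash -i                      # list adapters and note the index
sudo ./amdvbflash -s 0 stock_backup.rom   # save the stock vBIOS first, as a way back
sudo ./amdvbflash -p 0 amd_lc.rom -f      # program adapter 0; -f forces it when the subsystem ID differs
# Reboot afterwards and verify clocks/power limits in your monitoring tool of choice.
```

Keeping the `stock_backup.rom` copy (and an external flasher on hand, as mentioned above) is what makes a bad flash recoverable.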


----------



## jonRock1992

spajdr said:


> Just in case anyone missed it, driver 22.5.2 has been updated at least four times since its original release; the last update was released yesterday ;-).


I wonder if they fixed the issue with Death Stranding. On my system, with the original 22.5.2, Death Stranding performance would tank whenever several UI elements were on screen.


----------



## Kaltenbrunner

How is AMD vs Nvidia in Linux gaming? Any difference vs how they compare in Windows?

I can't justify the 6950 XT price, as much as I'd like to. They'd have to drop a lot more yet. I'd love to know how many cards the retailers buy in bulk, at what price, and what they expect to do with all those massively overpriced OCed cards.

What's the ASRock 6900 XT Phantom Gaming OC like? That's the cheapest card here, and there are a couple of 3080 Tis for 75-100 more. So a cheap-end 6900 XT still looks like a sane, good choice.


----------



## Grindcore77

OK, I'm getting this... does this mean I have to use the -f command?


----------



## spajdr

And 22.6.1 driver is out boys


https://www.amd.com/en/support/graphics/amd-radeon-6000-series/amd-radeon-6900-series/amd-radeon-rx-6900-xt


----------



## alceryes

spajdr said:


> And 22.6.1 driver is out boys
> 
> 
> https://www.amd.com/en/support/graphics/amd-radeon-6000-series/amd-radeon-6900-series/amd-radeon-rx-6900-xt


Anyone tried it yet...?


----------



## kairi_zeroblade

Do you guys debloat the drivers? And if you do, what utility (or app) do you use?


----------



## Blameless

alceryes said:


> Anyone tried it yet...?


Functionally identical to 22.5.2 in everything I've tried it with. Takes my previous SPPT just fine.



kairi_zeroblade said:


> Do you guys debloat the Drivers?? and if you do, what utility (or app) do you use?


Used to do it manually, but since Radeon Software Slimmer can do essentially everything I used to do, short of modifying the .inf, which breaks signing anyway, I mostly just use that now.


----------



## CCoR

Blameless said:


> Functionally identical to 22.5.2 in everything I've tried it with. Takes my previous SPPT just fine.
> 
> 
> 
> Used to do it manually, but since Radeon Software Slimmer can do essentially everything I used to do, short of modifying the .inf, which breaks signing anyway, I mostly just use that now.


Can you recommend or show us what you uncheck or leave default? I've tried before, but it would prevent each game profile from saving presets; it would reset every time I opened the game.


----------



## Smartsystem

Hi to the community,
I've been a happy RX 6900 XT Sapphire Toxic LE user for a few days. I've experimented with some settings and need advice from experienced users. Overall the card runs well, but I want to ask: is there a way to set a stable, fixed frequency like 2750 MHz? All I have are "min" and "max" frequency sliders, and the clock moves within that range depending on the load and the voltage/frequency curve, but maybe I'm wrong.
I still have some stability issues, like driver crashes that reset everything to stock. I was able to run the AC Valhalla in-game benchmark at 2850 MHz (highest value), VRAM at 2150 MHz, at 1.15 V and a 362 W TDP using MPT, with pretty good temps.








So that's OK, but RDR2 crashes at the start of the in-game 3D loading, and GTA5 crashes too with a D3D11 error popup. The card seems to hit a timeout and then crashes under full load, or when swinging from the min to the max of the range. Is there a way to fine-tune an average gaming profile so it stays stable above 2750 MHz? I've posted a screenshot. I should mention my screen is UHD at 60 Hz, so I'm not chasing FPS; I'm just trying to fine-tune. Many thanks for any help or advice from more experienced users.


----------



## Sufferage

spajdr said:


> And 22.6.1 driver is out boys
> 
> 
> https://www.amd.com/en/support/graphics/amd-radeon-6000-series/amd-radeon-6900-series/amd-radeon-rx-6900-xt












Not for me it seems...


----------



## RichieRich25

With 22.6.1 my scores increased, but it looks like the card needs more voltage for me to use the same overclocks. 1145 was not enough, so I bumped up to 1155, and surprisingly my temps are lower even though it's actually hotter in my room.
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (WI-FI) (3dmark.com)

timespy extreme increase as well
AMD Radeon RX 6900 XT video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (WI-FI) (3dmark.com)


----------



## deadfelllow

Since 22.5.1 I can't do a flat 2750-2850 on the Time Spy graphics test. Am I the only one?


----------



## spajdr

@Smartsystem Hey mate, what kind of font do you use? It looks nice.


----------



## Counterassy14

deadfelllow said:


> Since 22.5.1 I cant do flat 2750-2850 on Timespy Graphics Test. Am i the only one?


Time Spy is hard on stability, so 2750-2850 is kinda high already. With your 91% ASIC quality your temps should be fine at 1.218-1.25 V, so you might wanna give that a try.

There's also the possibility of MHz jumps while your bench is running (especially in GT2), so 2740-2840 might be a lot more stable, as it doesn't suddenly try to run at 2850 but instead stays fixed around 2790.


----------



## deadfelllow

I already did the Time Spy graphics test at 2756-2860 in October with 1.2 V, a 450 W PPT, etc. But since 22.5.1 I can't go over 2740-2840.

I scored 20,110 in Time Spy (AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT, 64-bit Windows 10): www.3dmark.com

@Counterassy14


----------



## Smartsystem

@spajdr that font is called "BatmanForeverAlternate", bold.


----------



## LtMatt

6900 XT owners might want to see if this is true; I've sold mine now.
AMD new driver now lets you increase vram clock limit and voltage : Amd (reddit.com)


----------



## Blameless

CCoR said:


> Can you recommend or show us what you uncheck or leave default? I've tried before but it would prevent each game profile from saving presets. It would reset every time I tried opening game.


I uncheck all packages other than Display Driver, HDMI Audio Driver, High Definition Audio Controller, and AMD Settings; uncheck ALL scheduled tasks; then all display driver components, except amdafd and amdpcibridge. Make sure you click "modify installer" before "run installer".

Run the installer and select a full install. Don't check clean install, but remove old drivers first with uninstall + DDU.

After the new drivers are installed, I open Task Scheduler and remove all AMD tasks, except "StartCN". I then disable everything in the Radeon Software preferences page and turn off all performance monitoring (I use better tools, if I need them).

Wattman and my custom game profiles work fine this way, but there is no ReLive (OBS Studio + obs-afm is superior), no AMD Link, no overlay, no RyzenMaster, and much less excess garbage.


----------



## J7SC

LtMatt said:


> 6900 XT owners might want to see if this is true, sold mine now.
> AMD new driver now lets you increase vram clock limit and voltage : Amd (reddit.com)


...just tried it on my XTX, but no cigar (VRAM > 2150) without the GPU locking to 500 MHz. According to other posts, there have been at least three versions of the new 22.6.1 driver with the same version number, so perhaps the initial version would allow bumping the VRAM without penalty. Does anyone have a link to the initial/first version of 22.6.1 (not the one on AMD's site now; I even checked 'older drivers')?


----------



## BHS1975

J7SC said:


> ...just tried it on my XTX, but no cigar (VRAM > 2150) without the GPU locking to 500 MHz. According to other posts, there have been at least three versions of the new 22.6.1 driver with the same version number, so perhaps the initial version would allow bumping the VRAM without penalty. Does anyone have a link to the initial/first version of 22.6.1 (not the one on AMD's site now; I even checked 'older drivers')?


I just installed the one on Guru3d and it works.


----------



## J7SC

BHS1975 said:


> I just installed the one on Guru3d and it works.


This > one ?


----------



## BHS1975

Nvm that one didn't work after all. Trying an older signed version from softpedia.


----------



## J7SC

BHS1975 said:


> Nvm that one didn't work after all. Trying an older signed version from softpedia.


...ok, pls let us know.


----------



## Justye95

Hi guys, did you find the 22.6.1 driver that allows the unlocked VRAM OC?


----------



## LtMatt

Justye95 said:


> Hi guys, did you find the 22.6.1 driver that allows the unlocked oc of the vram?


I don't think anyone has validated it as actually working yet. There's one post on Reddit, confirmed by no one else.


----------



## BHS1975

J7SC said:


> ...ok, pls let us know.


Wrong driver. Wouldn't even load.


----------



## J7SC

BHS1975 said:


> Wrong driver. Wouldn't even load.


...same here; it's the legacy driver for Win 7, based on 22.6.1! It might be fun to put together a Win 7 machine with the 6900 XT and the 6950 vBIOS (loaded on one of the two BIOS chips), as the Smart Access Memory issue won't matter in Win 7, afaik.

Perhaps the initial release (and subsequently modified) versions of 22.6.1 had that VRAM oversight they then 'fixed'?


----------



## spajdr

I was reading on Reddit that doing the washer mod could improve hotspot temps, so I added some translucent silicone washers, and guess what guys: nothing changed at all, still a 30°C difference or more.
(Red Devil 6900XT Ultimate)


----------



## SpajdrEX

Forgot to ask, should I try to do a repaste on GPU?


----------



## spajdr

Repaste done, temps got even worse (used GC Extreme)


----------




## Justye95

J7SC said:


> ...same here - it's the legacy driver for Win 7, based on 22.6.1 ! It might be fun to put together a Win 7 machine with the 6900XT and 6950 vbios (loaded on of two bios chips) as smart access memory issue won't matter in Win 7, afaik.
> 
> Perhaps, the initial release (and subsequently modified) versions of 22.6.1 had that VRAM oversight they then 'fixed' ?


I talked to the guy from the Reddit post; he told me he'll send me his driver ASAP. As soon as I try it, and if it works, I'll upload it here for you all.
Sorry for my bad English.


----------



## J7SC

Justye95 said:


> i talked to the guy from the post on reddit, he told me he will send his driver to me asap. As soon as I try it and it works I will upload it here for you all
> sorry for my bad english


...great, tx !


----------



## SpajdrEX

All good now!


----------



## RichieRich25

spajdr said:


> Repaste done, temps got even worse (used GC Extreme)


You've gotta push down with some force and make sure the thermal pads are actually touching the VRAM. You have to visually inspect each side; once you see them all making contact, you can screw it down. I'm using Thermal Grizzly Extreme and it has been the best in temps so far.


----------



## Justye95

J7SC said:


> ...great, tx !


These are the drivers he sent me. The digital signature is actually dated June 22nd, but I couldn't get the VRAM to run at a higher clock; if anyone can, let me know.

AMD_Software_Installer_22.6.1 (shared via Dropbox: www.dropbox.com)


----------



## Justye95

Sorry if this is the wrong section.
I applied MX-4 thermal paste and replaced all the thermal pads with Thermalright ones.
I have an EKWB D5 pump with two 360 mm radiators, 30 mm and 60 mm thick.
Liquid temperature is 33-34°C (93.2°F).
Are these temperatures good? They look very bad to me.
I was expecting at least 10 degrees less. What am I doing wrong?
Tell me what to do; I'll post more photos here.
Edit: the flow meter does not read the water flow accurately, so don't pay attention to it.


----------



## GTANY

Justye95 said:


> Sorry if this is the wrong section.
> I applied MX-4 thermal paste and replaced all the thermal pads with Thermalright ones.
> I have an EKWB D5 pump with two 360 mm radiators, 30 mm and 60 mm thick.
> Liquid temperature is 33-34°C (93.2°F).
> Are these temperatures good? They look very bad to me.
> I was expecting at least 10 degrees less. What am I doing wrong?
> Tell me what to do; I'll post more photos here.
> Edit: the flow meter does not read the water flow accurately, so don't pay attention to it.


59°C for the GPU when the water temperature is around 33°C is too high: that is a delta of 26°C!

I have a delta of 10-14°C on a 6900 XT (power limit of 500 W) with a high-end watercooling system. You should obtain a delta of 15-20°C. What are your CPU temperatures? If they are high too, the pump may be responsible for the low cooling performance. I see a flow of 37 l/h, which is very low. You say the reading is inaccurate, but a low flow rate may be the cause of your problem.

Another possible cause: your thermal pads are too thick, which makes bad contact between the GPU and the waterblock. I experienced bad GPU temperatures with a 1.2 mm pad; replaced it with a 1 mm pad and the problem was gone.
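The delta rule of thumb above is easy to turn into a quick sanity check. A minimal sketch using the numbers from the post; the 20°C threshold is just the figure quoted here, not an official spec:

```shell
# Water-loop sanity check: delta between GPU edge temp and coolant temp.
gpu_temp=59     # reported GPU temperature, degrees C
water_temp=33   # reported coolant temperature, degrees C
delta=$((gpu_temp - water_temp))
if [ "$delta" -gt 20 ]; then
  echo "delta ${delta}C: check block mount, pad thickness and flow rate"
else
  echo "delta ${delta}C: within the expected 15-20C band"
fi
```

With the values above it flags a 26°C delta, matching the diagnosis in the post.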


----------



## spajdr

(Control game at 1440P with RT maxed)
After carefully tightening the screws and applying washers, temps got much better (used GC Extreme paste on the GPU).


----------



## LtMatt

spajdr said:


> View attachment 2565756
> 
> (Control game at 1440P with RT maxed)
> After carefully tightening the screws and applying washers, temps got much better (used GC Extreme paste on the GPU).


Nice work! What do you mean by "carefully tightened"? Can you be more specific, to help other users with the same card?

A picture of the washers/back plate would be good too.


----------



## BHS1975

Justye95 said:


> These are the drivers he sent me. The digital signature is actually dated June 22nd, but I couldn't get the VRAM to run at a higher clock; if anyone can, let me know.
> 
> AMD_Software_Installer_22.6.1 (shared via Dropbox: www.dropbox.com)


I tried those and no dice.


----------



## LtMatt

I thought it was too good to be true.

That said, I have been able to push my memory overclock further on 22.6.1 with the 6950 XT so that's why I did wonder if there was any merit to this.


----------



## J7SC

LtMatt said:


> I thought it was too good to be true.
> 
> That said, I have been able to push my memory overclock further on 22.6.1 with the 6950 XT so that's why I did wonder if there was any merit to this.


...yeah, a bit annoying. I did get the 6950 XTX BIOS flashed on my 6900 XT via Ubuntu, on the alternate vBIOS slot, and it works re. the VRAM speed increase, but not with Smart Access Memory enabled (Asus CH8 Hero Wifi / 3950X). It would have been nice if this latest driver worked on the regular vBIOS with Smart Access Memory on. That said, my pure gaming rig is the 5950X with a 3090 Strix, so it's not too big a problem; just, well, annoying.


----------



## Justye95

J7SC said:


> ...yeah, a bit annoying. I did get the 6950 XTX bios flashed on my 6900XT via Ubuntu on the alternate vbios slot and it works re. VRAM speed increase, but not with Smart Memory Access enabled (Asus CH8 Hero Wifi / 3950X)...would have been nice if this latest driver worked on the regular vbios w/ Smart Memory Access on. That said, my pure gamer is the 5950X w/3090 Strix, so not too big a problem, just, well annoying.


Is it possible to flash the BIOS of the 6950 XT on the RX 6900 XT? I have a reference card.


----------



## J7SC

Justye95 said:


> is it possible to flash the bios of the 6950 xt on the rx 6900 xt? I have a reference


...hit and miss, I think, as it depends on the specific GPU (and even the motherboard to some extent, re. Smart Access Memory). My custom-PCB 6900 XT(X) 3x8-pin is physically identical to the 6950 XT model from the same vendor, and I used > this method posted in this thread before.


----------



## spajdr

@LtMatt I used washers, which most of the time are provided with these screws (1 mm silicone; they can be squished).
As for tightening the screws, some of them just weren't tightened properly.


----------



## spajdr

Won't torture it much further than this


----------



## jonRock1992

I still have the same performance drop in Death Stranding Director's Cut whenever HUD elements are on screen with the latest driver (22.6.1). This issue started with the May preview driver and hasn't gone away. I submit a bug report each time. The next WHQL driver had better not have this issue.


----------



## spajdr

@jonRock1992 can you send me your full PC specs? I'm in the AMD Vanguard beta driver team, so perhaps something can be done about it.


----------



## jonRock1992

spajdr said:


> @jonRock1992 can you write me all your PC specs, I'm in AMD Vanguard beta driver team, so perhaps something can be done about it.


PM sent


----------



## jonRock1992

Here is a video showing how massive the FPS drop is in Death Stranding DC with the May Preview Driver, 22.5.2, and 22.6.1. The FPS drops occur when there are HUD elements on the screen.


----------



## SpajdrEX

@jonRock1992 OK, a few more questions. Does this also happen in the following scenarios:
1. Outside snowy areas?
2. In a snowy area with just one piece of cargo on the screen?
3. In a snowy area without any cargo in the scenery?
4. Can you reproduce it at the start of the game?

Edit: OK, we reproduced the issue and raised a ticket.


----------



## jonRock1992

SpajdrEX said:


> @jonRock1992 ok, few more questions.
> Did this happens also in following scenarios like.:
> 1. Outside snowy areas
> 2. On snowy area with just one cargo the screen?
> 3. On snowy area without any cargo on scenery?
> 4. Can you reproduce it at the start of the game?


1.) Happens in any area.
2.) The more HUD elements that are on screen, the larger the FPS drop.
3.) No FPS drop with no UI on the screen.
4.) I have not tried starting a new game.


----------



## jonRock1992

SpajdrEX said:


> @jonRock1992 ok, few more questions.
> Did this happens also in following scenarios like.:
> 1. Outside snowy areas
> 2. On snowy area with just one cargo the screen?
> 3. On snowy area without any cargo on scenery?
> 4. Can you reproduce it at the start of the game?
> 
> Edit.: Ok, we reproduced the issue and raised a ticket


Sweet! Good news!


----------



## tubs2x4

How come a game like Doom runs so silky smooth at high resolutions with settings cranked, even on lower-end cards, compared to other games that run choppy?


----------



## Kaltenbrunner

What do you guys expect to beat the 6900 XT in the fall from Nvidia? I mean, right now it can take the fight to the 3080/Ti and sometimes the 3090... unless Nvidia makes a massive jump, the 6900 XT would probably be fighting the 4070-4080, right?

And don't the Ti cards release way after the x090 cards?

Why is buying PC parts for gaming so hard? AMD might have new cards for Christmas or early next year; I'm not waiting that long.


----------



## jonRock1992

Kaltenbrunner said:


> What do u guys expect to beat the 6900 xt in the fall, from NV ? I mean if right now it can take the fight to the 3080/ti and sometimes 3090....unless NV is doing some massive increase, the 6900xt would probably be fighting the 4070-4080, right ?
> 
> And don't the Ti cards release way way after the x090 cards?
> 
> Why is buying PC parts for gaming, so hard ?? AMD might have new cards for xmas or early next year, I'm not waiting that long.


I'm sure a well-binned and fully overclocked 6900 XTXH with the LC vBIOS and an unlocked power limit could take on the 4080 in most games. It will probably fall short of the 4080 in ray tracing, though.


----------



## LtMatt

tubs2x4 said:


> How come a game like doom runs so silky smooth even on high resolutions with lower end cards with settings cranked compared to other games out there that run choppy?


Well optimised and also not that demanding graphically.


----------



## Blameless

CCoR said:


> Can you recommend or show us what you uncheck or leave default? I've tried before but it would prevent each game profile from saving presets. It would reset every time I tried opening game.


In the process of getting my 6800 XT working and overclocked on Server 2022, I stumbled upon a simpler method to get a minimalist install.

Extract the driver package, manually install the display driver via device manager, then find ccc2_install.exe inside the display driver folder and open that up as an archive. Extract the "cnext64" folder from the "CN\cnext\" directory inside that ccc2 executable. Run ccc-next64.msi and this will install the AMD Software, without any of the extra add-ons, but with full display and game settings, plus wattman/performance tuning support.

Full path for the current driver should look like this: "whql-amd-software-adrenalin-edition-22.6.1-win10-win11-june29\Packages\Drivers\Display\WT6A_INF\B380472\ccc2_install.exe\CN\cnext\cnext64\"

Overall, the end result is very similar to the RSS process I described, but it's even easier and removes a little bit more.
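Under the assumption that 7-Zip is available and the driver package has already been extracted, the manual digging described above could be scripted roughly like this (folder names are illustrative and change between driver versions, as the full path in the post shows):

```shell
# Find ccc2_install.exe inside the extracted driver package, then pull out
# only the slim control-panel installer. Paths here are illustrative.
CCC2=$(find . -name ccc2_install.exe | head -n 1)
7z x "$CCC2" -oslim "CN/cnext/cnext64/*"
# Then run slim/CN/cnext/cnext64/ccc-next64.msi to install just the AMD Software UI
# (after installing the display driver itself via Device Manager, per the post above).
```

This only automates the locate-and-extract step; the device-manager driver install still has to be done by hand.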


----------



## Garlicky

Hey guys, has anyone else tried increasing the core voltage / max VRAM clock with driver version 22.6.1? Somehow I was able to increase the core voltage and VRAM clock limit. I have a Toxic with the XTX silicon.


----------



## J7SC

Garlicky said:


> Hey guys, has anyone else tried increasing core voltage/max vram clock with driver version 22.6.1. Some how I was able to increase core voltage and vram clock limit. I have a toxic with the xtx silicon,
> View attachment 2565985


I get the same screen, but the '500' (either at Min or Max) doesn't allow the GPU to actually clock up in 3D with the normal vBIOS and Resizable BAR / Smart Access Memory enabled. It works on the secondary vBIOS flashed via Ubuntu, but without Smart Access Memory.


----------



## Godhand007

Hi All,

Back after a long time. *A word of caution if anyone else is looking to buy a Toxic EE card*. These cards are extremely bad overclockers in general (@LtMatt can confirm) and have a thermal paste issue which causes high hotspot temps (check earlier posts in this thread).

I had my card RMA'd and got another card with the same issues, but they did not send me a proper box or any accessories with it (not even the screws for the AIO). The card looks worse for wear, with torn sleeves on the AIO, scratches, etc. I guess that's OK by Sapphire. Also, it took more than 1.5 months, constant badgering, and the threat of consumer-court action to get the process expedited. All in all, one of the worst RMA experiences I have ever had. And the card I got out of it is overall worse than the one I had before.


----------



## BHS1975

Godhand007 said:


> Hi All,
> 
> Back after a long time. *A word of caution if anyone else is looking to buy a Toxic EE card*. These cards are extremely bad overlockers, in general (@LtMatt can confirm) and have thermal paste issue which cause high hotspot temps (check earlier posts in this thread).
> 
> I had my card RMAe' d and got a another card with same issues, but they did not send me a proper box or any accessories with it (not even the screws for AIO). The cards looks worse for wear with torn sleaves on AIO and scratches etc. I guess that's ok by Sapphire. Also, It took more than 1.5 months and constant badgering and threat of consumer court action to get the process expediated. All in all one of the worst RMA experiences that I have ever had. And the card I got out of it is overall worse than the one I had before.


XFX is the way to go. They make a very nice 6900xt.


----------



## nordskov

@Godhand007

I got the Extreme Edition 6900 XT XTXH and it's a very nice card. I only repasted with liquid metal and now I get awesome temps, and the boost really lives up to the name EXTREME. I play COD MW at 2900+ MHz at only 1.250 V, with temps that never exceed around 65°C. In Time Spy maxed out at 500+ W with a 25,799 score, I get about 65°C edge and a 78°C junction temp at 1.325 V, so I think it holds up pretty damn well. My brother has the 6900 XT XTXH ASUS ROG Strix LC with a 240 mm AIO, and at the same settings and drivers he gets about 500 points less than me in Time Spy.

The Sapphire Toxic Extreme is a really damn good card; the only fault is that the stock temps suck. The thermal paste they used isn't good at all, but once repasted it sure flies!!


----------



## deadfelllow

nordskov said:


> GODHAND007
> 
> I got the EXTREME EDITION 6900XT XTXH thats a very nice card.. i only repasted with LM and now i get awesome temps.. and boost is really up to its name EXTREME... i play cod MW at 2900+ mhz, at only 1.250v and with temps that never exceed more then around 65c. timespy maxed out 500+w and 25799 score i get like 65c and 78 junction temp, at 1.325V so i think it holds up pretty damn good. my brother got the 6900XT XTXH asus rog strix LC 240mm Aio and at same settings and drivers he gets like 500points less then me in 3dmark timespy
> 
> The sapphire toxic extreme is a really damn good card, only fault is the temps by stock it sucks ass... the thermal paste they used aint really good at all.. but once repasted it sure flyes!!


Lucky you. Many people who have the Sapphire 6900 XT EE struggle to get proper contact after repasting. Nice temps for 500 W!

Could you please share a HWiNFO64 screenshot?


----------



## LtMatt

Toxic EEs are binned, so if you got a stinker it is indeed bad luck. What I will say is that the 6950 XT Toxic LE is definitely not binned; it's just typical XT silicon with 1.2 V and the faster memory. However, I did get an excellent mount and temps with my 6950 XT Toxic and liquid metal.


----------



## nordskov

Godhand007 said:


> Hi All,
> 
> Back after a long time. *A word of caution if anyone else is looking to buy a Toxic EE card*. These cards are extremely bad overlockers, in general (@LtMatt can confirm) and have thermal paste issue which cause high hotspot temps (check earlier posts in this thread).
> 
> I had my card RMAe' d and got a another card with same issues, but they did not send me a proper box or any accessories with it (not even the screws for AIO). The cards looks worse for wear with torn sleaves on AIO and scratches etc. I guess that's ok by Sapphire. Also, It took more than 1.5 months and constant badgering and threat of consumer court action to get the process expediated. All in all one of the worst RMA experiences that I have ever had. And the card I got out of it is overall worse than the one I had before.





deadfelllow said:


> Lucky for you. Many people who have Sapphire 6900XT EE struggling with proper contact after repasting. Nice temps for 500W!
> 
> Could you please share hwinfo64 Screenshot?


I'm on my phone right now, but I can post this if it helps. When playing COD MW maxed out I get around 60-65°C. I dunno if I'm lucky with this card, TBH. Before the repaste, with only a UV and higher clock speeds, I would easily hit 100°C junction and throttling problems. Liquid metal sure does a hell of a job; I used Thermal Grizzly Conductonaut.


----------



## Godhand007

LtMatt said:


> Toxic EE are binned so if you get a stinker it is indeed bad luck. What I will say is that the 6950 XT Toxic LE is definitely not binned and just typical XT silicon with 1.2v and the faster memory. However I did get an excellent mount and temps with my 6950 XT Toxic and liquid metal.


This just boils down to silicon lottery in that case, which defeats the purpose of binning.


----------



## Godhand007

nordskov said:


> Im at the phone right now but i Can post this if it helps any, when Playing mw cod i get around 60-65c While Playing maxed out, i dunno if im lucky with this card Tbh. Before repaste with only uv and higher clockspeed i easy hit 100 junction and throttle problems. Liquid metal sure does hell of a job, used thermal grizzly condonaut
> View attachment 2566064
> 
> View attachment 2566065


My cards won't do more than 2730 MHz, no matter how much voltage I pump into them (TS GT2). Do you remember what you were able to get on core clocks before the repaste?


----------



## nordskov

Godhand007 said:


> My cards won't do more than 2730Mhz, no matter how much voltage I pump in them (TS GT2). Do you remember what you were able to get on core clocks before re-paste?


About 2850 set in the driver, effective clock around 2750; with a slight bump in voltage it does 2800-2840 in Time Spy and 2900+ in gaming.


----------



## Godhand007

nordskov said:


> About 2850 in the driver, effective clock around 2750, and with a slight bump in voltage it does 2800-2840 in TS and 2900+ in gaming.


Mine wouldn't do more than ~2730 effective no matter how much voltage I pumped in, in TS GT2.


----------



## nordskov

Godhand007 said:


> Mine wouldn't do more than ~2730 effective no matter how much voltage I pumped in, in TS GT2.


Bad luck, I guess. My brother's ROG Strix 6900XT does the same speeds as mine, but for some reason it scores 500 points less no matter the voltage, etc.


----------



## Godhand007

nordskov said:


> Bad luck, I guess. My brother's ROG Strix 6900XT does the same speeds as mine, but for some reason it scores 500 points less no matter the voltage, etc.


Mine can do ~24500 (this is with LC memory clocks). I might go for a repaste, but not LM. Any suggestions on that?


----------



## nordskov

Godhand007 said:


> Mine can do ~24500 (this is with LC memory clocks). I might go for a repaste, but not LM. Any suggestions on that?


Maybe just the regular Thermal Grizzly; it seems to do pretty well. I get 24200 with these settings, no extra voltage, etc. I actually did a little UV as well. This is just my power-saving profile, as electricity costs are like 3 times higher than normal.


----------



## Godhand007

nordskov said:


> Maybe just the regular Thermal Grizzly; it seems to do pretty well. I get 24200 with these settings, no extra voltage, etc. I actually did a little UV as well. This is just my power-saving profile, as electricity costs are like 3 times higher than normal.
> View attachment 2566102


I can barely get over this, and that's while using 2350MHz memory timings. I will go for the repaste and see if that lets me get over 2800MHz at least. The good thing about the warranty situation now is that the RMA'd card is so banged up outwardly that they won't be able to reject warranty even if I move a few screws around.


----------



## nordskov

Godhand007 said:


> I can barely get over this, and that's while using 2350MHz memory timings. I will go for the repaste and see if that lets me get over 2800MHz at least. The good thing about the warranty situation now is that the RMA'd card is so banged up outwardly that they won't be able to reject warranty even if I move a few screws around.


If I just put 1.250v and push 420W max and a 2850 max boost, I do around a 25000 clean score with good temps, so it sounds like your card sure is ****ed 😅


----------



## Godhand007

nordskov said:


> If I just put 1.250v and push 420W max and a 2850 max boost, I do around a 25000 clean score with good temps, so it sounds like your *card* sure is ****ed 😅


Cards. These are not good cards with binned overclocking chips; you get lucky or not, just like any other silicon lottery.


----------



## nordskov

Godhand007 said:


> Cards. These are not good cards with binned overclocking chips; you get lucky or not, just like any other silicon lottery.


I just thought they were better than average; that's why I bought it in the first place, since stock, with just the driver, it was able to do a 2730 boost out of the box. Yours is the EE and not the LE, right? The EE seems to have a higher stock boost at 2730 vs 2680 with Toxic Boost.


----------



## Godhand007

nordskov said:


> I just thought they were better than average; that's why I bought it in the first place, since stock, with just the driver, it was able to do a 2730 boost out of the box. Yours is the EE and not the LE, right? The EE seems to have a higher stock boost at 2730 vs 2680 with Toxic Boost.


2730MHz is the maximum boost frequency that _can_, _might_, _depending upon_, etc., occur, according to Sapphire. Yes, my card is an EE. This is the best I can do right now. I am using the 6950XT LE bios to get those extra 200 points from the 2350MHz mem OC.


----------



## nordskov

Is your SAM enabled? Just wondering why at those speeds that's all you get 🤔 Am I able to hit that high a memory clock as well? Aren't there any problems running that bios?


----------



## jonRock1992

I would try the LC vBIOS before the 6950 XT vBIOS. You can get stable Fast Timing Level 2 memory settings with the LC vBIOS. That's not possible with the 6950 XT vBIOS in my testing.


----------



## nordskov

jonRock1992 said:


> I would try the LC vBIOS before the 6950 XT vBIOS. You can get stable Fast Timing Level 2 memory settings with the LC vBIOS. That's not possible with the 6950 XT vBIOS in my testing.


How much more performance does it bring? Is it worth it in my case? And does SAM still work with that bios?


----------



## jonRock1992

nordskov said:


> How much more performance does it bring? Is it worth it in my case? And does SAM still work with that bios?


I can't remember how much more. It depends on the application. But it does work with SAM.


----------



## Godhand007

nordskov said:


> Is your SAM enabled? Just wondering why at those speeds that's all you get 🤔 Am I able to hit that high a memory clock as well? Aren't there any problems running that bios?


Yes. No issues with the bios. You need to flash it properly, though. Also, set the SOC to 1150 from 1162.


----------



## Godhand007

jonRock1992 said:


> I would try the LC vBIOS before the 6950 XT vBIOS. You can get stable Fast Timing Level 2 memory settings with the LC vBIOS. That's not possible with the 6950 XT vBIOS in my testing.


I have tried both the LC and 6950 XT bios. 2350MHz mem speeds work without any issues on both (level 1 timings). I haven't tried level 2 timings, though.


----------



## spajdr

Godhand007 said:


> I have tried both the LC and 6950 XT bios. 2350MHz mem speeds work without any issues on both (level 1 timings). I haven't tried level 2 timings, though.


Hi mate, would it be possible to write here exactly how you flashed the bios to the card, including a link to the bios if possible? Thank you.


----------



## Frosted racquet

Hey guys, sorry for jumping in with my 6600XT question, but this thread is pretty active, so apologies.
Regarding VRAM frequency performance scaling: going from 2250 to 2300 doesn't net the same performance jump as going from 2200 to 2250, but I'm not seeing any performance regression either.
Jumping from 2200 to 2250 produces, for example, 50 more points in some test, while going to 2300 adds only 20 more.
I'm pretty sure the card is VRAM-bandwidth limited in all situations, so I'm wondering if memory ECC is fixing some occasional errors, hence the smaller improvements?
Mind you, some games crash at 2300 with 1.35v (WHEA bsod with SAM enabled), but are apparently stable with 1.4v VRAM. Is it worth it in your opinion to push 2300 @ 1.4v?
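The pattern described here, where each 50MHz step buys fewer points but nothing outright crashes, is what GDDR6 error-detect-and-retry tends to produce. A tiny sketch of the diminishing-returns check, with hypothetical scores (illustrative numbers only, not measurements from this post):

```python
# Hypothetical benchmark scores vs. VRAM clock, modeled on the behavior
# described above. Real numbers would come from your own benchmark runs.
scores = {2200: 10000, 2250: 10050, 2300: 10070}

clocks = sorted(scores)
# Points gained per 50 MHz step. If error-detect-and-retry is silently
# re-sending corrupted transfers, the marginal gain shrinks even though
# nothing crashes or visibly artifacts.
for lo, hi in zip(clocks, clocks[1:]):
    gain = scores[hi] - scores[lo]
    print(f"{lo} -> {hi} MHz: +{gain} points")
```

When the per-step gain starts shrinking like this (or goes negative), the previous step is usually the practical sweet spot, regardless of what clock the slider allows.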


----------



## Godhand007

spajdr said:


> Hi mate, would it be possible to write here exactly how you flashed the bios to the card, including a link to the bios if possible? Thank you.


There is already a guide. You can get LC bios from here or TechPowerUp bios database.


----------



## RichieRich25

Just posting a quick update. On 22.6.1 on a Red Devil 6900XT I'm actually able to raise the voltage to 1200 from 1175 without the max core dropping to 500. The only issue is that once I run 3DMark or anything GPU-intensive it starts to artifact. So we're getting somewhere. I'm going to play around with it more and see if I'm successful.


----------



## Justye95

Is it possible to follow the bios flash guide on the RX 6900 XT reference?


----------



## Counterassy14

Is there a good tool to test the memory on Navi 21?


I have observed very weird behavior when overclocking memory with Time Spy GT1 on my card.
It'll run normally, but mid-test it'll drop fps very hard (from 150 to 60 fps or lower) for a few seconds and then come back as if nothing ever happened.
I now want to tune the memory to be faster and more stable, but without having to run GT1 all day long.


----------



## Godhand007

Counterassy14 said:


> Is there a good tool to test the memory on Navi 21?
> 
> 
> I have observed very weird behavior when overclocking memory with Time Spy GT1 on my card.
> It'll run normally, but mid-test it'll drop fps very hard (from 150 to 60 fps or lower) for a few seconds and then come back as if nothing ever happened.
> I now want to tune the memory to be faster and more stable, but without having to run GT1 all day long.


OCCT works for me.


----------



## RedF

RichieRich25 said:


> Just posting a quick update. On 22.6.1 on a Red Devil 6900XT I'm actually able to raise the voltage to 1200 from 1175 without the max core dropping to 500. The only issue is that once I run 3DMark or anything GPU-intensive it starts to artifact. So we're getting somewhere. I'm going to play around with it more and see if I'm successful.


Could you upload the driver? It seems to be an early version.


----------



## supergt99

I tried some drivers that were posted earlier. I was able to change the voltage in MPT and keep my overclock. BUT: the voltage would only go to 1018mV, with artifacts. Unless there's a different driver, it looks like it doesn't work. Also, I have a Red Devil 6900XT and MSI Afterburner.


----------



## nilssohn

RichieRich25 said:


> On 22.6.1 on a Red Devil 6900XT I'm actually able to raise the voltage to 1200 from 1175 without the max core dropping to 500.


Would you mind testing whether the VRAM limit in MPT's Overdrive Limits can also be raised, from 1075 to 1125? And then set 2250 in Adrenalin?

We are currently discussing a possibly similar case in our home forum.


----------



## LtMatt

RichieRich25 said:


> Just posting a quick update. On 22.6.1 on a Red Devil 6900XT I'm actually able to raise the voltage to 1200 from 1175 without the max core dropping to 500. The only issue is that once I run 3DMark or anything GPU-intensive it starts to artifact. So we're getting somewhere. I'm going to play around with it more and see if I'm successful.


That's always been the case. Adding voltage was never an issue with MPT; it was increasing the clock speeds that reduced the speed to 500MHz.


----------



## supergt99

I tried VRAM. No good. If I go 1MHz over, the card defaults to 500.


----------



## nilssohn

LtMatt said:


> That's always been the case. Adding voltage was never an issue with MPT; it was increasing the clock speeds that reduced the speed to 500MHz.


Well, only if you use the TEMP_DEPENDENT_VMIN workaround. If an XTX card with a 1175 mV default GFX voltage is increased to 1200 in the MPT power tab, the driver or app will crash instantly under heavy GPU load. This has always been the case.

But now there is a user on the Luxx forum who claims to own an XTX which can not only run at 1200 mV this way, but is also able to run 2250 MHz on the VRAM. We are currently trying to figure out how this is possible. That's why I was asking about VRAM above.


----------



## Garlicky

nilssohn said:


> Well, only if you use the TEMP_DEPENDENT_VMIN workaround. If an XTX card with a 1175 mV default GFX voltage is increased to 1200 in the MPT power tab, the driver or app will crash instantly under heavy GPU load. This has always been the case.
> 
> But now there is a user on the Luxx forum who claims to own an XTX which can not only run at 1200 mV this way, but is also able to run 2250 MHz on the VRAM. We are currently trying to figure out how this is possible. That's why I was asking about VRAM above.


Hi guys, the user mentioned in this message is me. I do not know why my card is able to do this. I updated to 22.6.1 a few hours after release, and here it is. Another user on the Luxx forums also noticed that my bios version is in fact not meant for the 6900XT Toxic, but for the 6950XT Nitro Pure. Here are some photos. I'm running 3DMark now.
NOTE: when pushed over 2150MHz on the VRAM I lose a lot of performance, so it's mostly useless for me, but maybe someone with good VRAM can take advantage of this.


----------



## jonRock1992

I ditched 22.6.1. Radeon Super Resolution doesn't work with it. I get an error saying that I can't use it because I have Eyefinity enabled. However, I definitely do not because I only have one monitor lol.


----------



## LtMatt

jonRock1992 said:


> I ditched 22.6.1. Radeon Super Resolution doesn't work with it. I get an error saying that I can't use it because I have Eyefinity enabled. However, I definitely do not because I only have one monitor lol.


Seen other users mention that, Jon; it's odd because it works fine for me. Please file a bug report using the bug report tool within AMD software.


----------



## FLukawa90

Good day everyone. I have a problem with my Sapphire Toxic RX 6900 XT EE. A few weeks ago my GPU started always reaching the max junction temp of 95C after about 2 minutes of full load. The core temp seems normal. In this condition, Toxic mode is useless; the GPU always drops down to 280W. Should I RMA, or is there something I can do to fix it?


----------



## Godhand007

FLukawa90 said:


> Good day everyone. I have a problem with my Sapphire Toxic RX 6900 XT EE. A few weeks ago my GPU started always reaching the max junction temp of 95C after about 2 minutes of full load. The core temp seems normal. In this condition, Toxic mode is useless; the GPU always drops down to 280W. Should I RMA, or is there something I can do to fix it?


Read on from here and make a call.


----------



## tubs2x4

FLukawa90 said:


> Good day everyone. I have a problem with my Sapphire Toxic RX 6900 XT EE. A few weeks ago my GPU started always reaching the max junction temp of 95C after about 2 minutes of full load. The core temp seems normal. In this condition, Toxic mode is useless; the GPU always drops down to 280W. Should I RMA, or is there something I can do to fix it?


Maybe it needs a repaste. I have read on this forum that Sapphire uses poor thermal paste. That would likely fix it up.


----------



## Godhand007

FLukawa90 said:


> Good day everyone. I have a problem with my Sapphire Toxic RX 6900 XT EE. A few weeks ago my GPU started always reaching the max junction temp of 95C after about 2 minutes of full load. The core temp seems normal. In this condition, Toxic mode is useless; the GPU always drops down to 280W. Should I RMA, or is there something I can do to fix it?


FYI, this is "normal" according to Sapphire RMA support.


----------



## FLukawa90

tubs2x4 said:


> Maybe it needs a repaste. I have read on this forum that Sapphire uses poor thermal paste. That would likely fix it up.


Okay, I will try to repaste it; an RMA is not my first option since it would take months. I'll either use it or sell it. Is Sapphire's RMA really that bad?


----------



## Counterassy14

FLukawa90 said:


> Okay, I will try to repaste it; an RMA is not my first option since it would take months. I'll either use it or sell it. Is Sapphire's RMA really that bad?


If you do a repaste, use a good thermal paste, as it makes a huge difference at these wattages. Way more than it does for CPUs.

Oh, and also buy some thermal pads, as the originals are very hard to keep from ripping 😅


----------



## deadfelllow

FLukawa90 said:


> Okay, I will try to repaste it; an RMA is not my first option since it would take months. I'll either use it or sell it. Is Sapphire's RMA really that bad?


Try to repaste with a thick paste. Otherwise you will have pump-out issues.


----------



## alceryes

Are there any closed-loop AIO coolers available for the reference 6900 XT?
If not, what are the best aftermarket HS+F combos for it?


----------



## RedF

alceryes said:


> Are there any closed-loop AIO coolers available for the reference 6900 XT?
> If not, what are the best aftermarket HS+F combos for it?


Alphacool Eiswolf 2 rx 6900 xt


----------



## Counterassy14

RedF said:


> Alphacool Eiswolf 2 rx 6900 xt


I had this as well, and the performance is almost the same as the custom loop that I have now, but there are definitely some cuts you have to make when using it.

1. There is no upgrade path for it; next GPU and you'll need to buy a new AIO. The pump doesn't fit on other blocks, not even on other Alphacool blocks. (Look up a disassembly: the block is not an Eisblock with a pump attached; instead it is kinda cut at the top to fit the pump.)

2. There is no practical way of cleaning the block (it'll destroy the plastic cover on top of the card), and mine started discoloring after a few months.

3. 360mm is not enough to cool 500W, so you'll need another radiator fairly soon, and at that point you can just build a custom loop anyway.

But all that said, it is very quick and easy to assemble, and as long as you only plan for one generation it is also cheaper (custom gets cheaper over further generations, as you'll only need to swap the block and not the pump or the radiator).
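As a rough sanity check on point 3: a common community rule of thumb is on the order of 100W of heat per 120mm of radiator at tolerable fan speeds (the exact figure varies a lot with fin density and fan RPM, so treat it as an assumption, not a spec). Under that assumption:

```python
def radiator_capacity_w(length_mm: float, w_per_120mm: float = 100.0) -> float:
    """Very rough heat-dissipation estimate for a radiator of a given length,
    using the (assumed) ~100 W per 120 mm rule of thumb."""
    return length_mm / 120.0 * w_per_120mm

# A 360 mm radiator vs. a ~500 W card:
cap = radiator_capacity_w(360)
print(f"360 mm radiator: ~{cap:.0f} W")   # ~300 W, well short of 500 W
print("enough for 500 W?", cap >= 500)    # False
```

Which is why a single 360mm AIO ends up loud and hot on a 500W card, and why adding a second radiator (or going custom) helps so much.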


----------



## FLukawa90

Counterassy14 said:


> If you do a repaste, use a good thermal paste, as it makes a huge difference at these wattages. Way more than it does for CPUs.
> 
> Oh, and also buy some thermal pads, as the originals are very hard to keep from ripping 😅


I only opened the GPU core area, so I just repasted with Kingpin KPX without touching any VRAM. Does that seem like enough?
It seems to have solved the issue. Before the repaste it would hit 95C within 2 minutes of FurMark and start throttling; now, after 5 minutes, it's still under 90C.


deadfelllow said:


> Try to repaste with a thick paste. Otherwise you will have pump-out issues.


I followed your suggestion and put some paste on every side too. The old paste was so dry, and hard to clean too. What a mess, pfft.


----------



## Grindcore77

Hey all! I managed to flash the LC bios on my Gigabyte Extreme WaterForce 6900XT! So I have the added extra mem clocks, etc. It improved my score by another 500 points or so; so far I'm at a 25758 GPU score at the moment. Here are some settings; just wondering if I could get any advice on some of them. Thank you!


----------



## Grindcore77

Oh, I have tried Fast Timings level 2, in both MPT and Wattman, but my score took a little hit with it activated. But yes, the LC bios lets you use timing level 2, and it doesn't crash like my OG bios did.


----------



## jonRock1992

Grindcore77 said:


> Oh, I have tried Fast Timings level 2, in both MPT and Wattman, but my score took a little hit with it activated. But yes, the LC bios lets you use timing level 2, and it doesn't crash like my OG bios did.


Nice results! FT2 actually helps my GPU. I have it stable at 2362MHz FT2. For benching I can go a little higher before I start seeing negative results.


----------



## alceryes

Counterassy14 said:


> 1. There is no upgrade path for it, next GPU and you‘ll need to buy a new AIO. The Pump doesn‘t fit on other Blocks, not even on other Alphacool blocks. (look up a disassembly, the Block is not an Eisblock with a pump attached, instead it is kinda cut at the top to fit the pump)


This kinda sucks but expected, I guess.

Maybe I'll just keep the reference cooler, take it completely apart, use some good TIM on the core with the washer method, and see what I can do with that. Previously, I had just done the quick PCB backside mod. I really don't want to spend almost $300 for a ~7% performance improvement.


----------



## ptt1982

Hey all! Two Qs.

1) Is it still impossible to flash an LC or XTXH bios to a 6900XT, get to 1200mV and above 500MHz, and use the card normally at higher core/memory frequencies without artifacting?

2) We are finally moving to a new place with the wife, and it's around 200km away (bumpy road), so I'd appreciate moving tips for a custom-loop PC. Would this perhaps be enough: drain the loop the best I can, take out the GPU, plug the two tubes leading to the GPU very well against any drips, and fill the case with paper towels around any junction points? Anything I should be extra careful about?

Cheers for all the cool info and testing you guys do; it's quite interesting to come back here every month and see the progress!


----------



## deadfelllow

ptt1982 said:


> Hey all! Two Qs.
> 
> 1) Is it still impossible to flash an LC or XTXH bios to a 6900XT, get to 1200mV and above 500MHz, and use the card normally at higher core/memory frequencies without artifacting?
> 
> 2) We are finally moving to a new place with the wife, and it's around 200km away (bumpy road), so I'd appreciate moving tips for a custom-loop PC. Would this perhaps be enough: drain the loop the best I can, take out the GPU, plug the two tubes leading to the GPU very well against any drips, and fill the case with paper towels around any junction points? Anything I should be extra careful about?
> 
> Cheers for all the cool info and testing you guys do; it's quite interesting to come back here every month and see the progress!


For your first question: you can do it via MPT. You don't need to flash an LC or XTXH bios to get 1.2V.


----------



## earphonelnwshop

I just received the EVC2SX. Today I tried the power limit tricks on my 6900XT reference version.

Excellent test results: I can run 2760MHz stable using only 210-290W. It reduces the heat a lot.


----------



## Counterassy14

earphonelnwshop said:


> I just received the EVC2SX. Today I tried the power limit tricks on my 6900XT reference version.
> 
> Excellent test results: I can run 2760MHz stable using only 210-290W. It reduces the heat a lot.


What voltages are you running at? Is that the edge or the hotspot temperature?
If this is edge at 1.175v, it seems really hot compared to my cards.
60C edge is probably in the high-90s or even 100C hotspot range (without liquid metal).

Those are the temps I reached by pushing 450-500W @ 1.218-1.25v through my chips.

If this is hotspot instead... teach me please.


----------



## RedF

earphonelnwshop said:


> I just received the EVC2SX. Today I tried the power limit tricks on my 6900XT reference version.
> 
> Excellent test results: I can run 2760MHz stable using only 210-290W. It reduces the heat a lot.


You mean the Isense adjustment at 0.65?
You don't consume less power with it; the reading is just displayed 35% lower.

You actually consume more power than the display shows, not less.

----------



## earphonelnwshop

RedF said:


> You mean the Isense Ant. at 0.65?
> You don't consume less with it, only 35% less is displayed.
> 
> You consume 35% more power with it.


But I checked: the temp is about 10 degrees lower than before.

It is great for me.


----------



## Petet1990

So I have the 6900XT Liquid Devil and I just saw a 105C junction temp. I had the power limit at +15%, which drew 337 watts. Is this normal, or high for a water-cooled card?


----------



## earphonelnwshop

Counterassy14 said:


> what voltages are you running at? Is that edge or hotspot temperature?
> If this is edge at 1.175v it seems really hot by comparison to my cards.
> 60C is probably in the high 90s or even 100C hotspot range (without liquid metal).
> 
> Those are the temps I reached by pushing 450-500w @1.218 - 1.25v through my chips.
> 
> If this is hotspot instead ... teach me please


It's the hotspot temperature. I run at 1.175v + a 0.025 offset.

You must use the EVC2SX card to tune the xdpe312g5 directly 🙏


----------



## BHS1975

deadfelllow said:


> Hello guys,
> 
> On the LC bios, which memory speed is the sweet spot without a performance drop?
> 
> e.g. on the stock bios, 2120-2130 is the sweet spot.


I would like to know too. Anyone?


----------



## RedF

Petet1990 said:


> So I have the 6900XT Liquid Devil and I just saw a 105C junction temp. I had the power limit at +15%, which drew 337 watts. Is this normal, or high for a water-cooled card?


You should change your thermal paste; that is much too high. Use a viscous paste.


----------



## alceryes

With next-gen right around the corner I definitely wouldn't recommend buying one, but I just wanted to show how much GPU prices are crashing.
The RX 6950 XT, which just came out, is already $50 less than the RX 6900 XT's MSRP. Crazy, crazy prices!









MSI Gaming Radeon RX 6950 XT GAMING X TRIO 16G - Newegg.com
www.newegg.com





I say keep on dropping! There are a couple of great articles on how you can undervolt the Hell out of an RTX 3090 to turn it into a whisper-quiet but still spectacular-performing GPU. Come on, 75% off! Papa needs a new HTPC!


----------



## J7SC

alceryes said:


> (...)There's a couple great articles on how you can undervolt the Hell out of an RTX 3090 to turn it into a whisper quiet but still spectacular-performing GPU. Come on 75% off! Papa needs a new HTPC!


FYI, I have a 3090 in my gaming rig that is whisper quiet w/o any volt mods but w/ a 500W+ vbios. Custom-loop water cooling is a thing.


----------



## reddevilchris

Hello guys, I've been following this thread for a long time; so much useful info. I just changed my 6900XT from a Nitro+ SE to a Red Devil Ultimate. This is the card I wanted from the start, since I'm coming from a 5700XT Red Devil, but I couldn't find it at a normal price.
Anyway, I just installed it and I see the VRAM bar in Adrenalin can go up to 2624? I haven't flashed any 6950XT or LC bios; it's just like that out of the box.
Is that normal?


----------



## Counterassy14

reddevilchris said:


> Hello guys, I've been following this thread for a long time; so much useful info. I just changed my 6900XT from a Nitro+ SE to a Red Devil Ultimate. This is the card I wanted from the start, since I'm coming from a 5700XT Red Devil, but I couldn't find it at a normal price.
> Anyway, I just installed it and I see the VRAM bar in Adrenalin can go up to 2624? I haven't flashed any 6950XT or LC bios; it's just like that out of the box.
> Is that normal?


Ah yes, that seems to be about right for your card. It's an XTXH chip.









Powercolor RX 6900 XT VBIOS - 16 GB GDDR6, 500 MHz GPU, 914 MHz Memory
www.techpowerup.com


----------



## Blist66

Counterassy14 said:


> Ah yes, that seems to be about right for your card. It's an XTXH chip.
> 
> Powercolor RX 6900 XT VBIOS - 16 GB GDDR6, 500 MHz GPU, 914 MHz Memory
> www.techpowerup.com


Thanks for the answer. Yeah, I knew about the chip, but I thought that no matter if it's XTX or XTXH the memory was locked at 2150-2200; that's why people were flashing the 6950XT or LC bios, so they could push the 18Gbps RAM.
So I can push above 2200 without any bios flashes or MPT?

P.S. It's me, the same person; I just got mixed up between two accounts on phone and PC!


----------



## Azazil1190

Blist66 said:


> Thanks for the answer. Yeah, I knew about the chip, but I thought that no matter if it's XTX or XTXH the memory was locked at 2150-2200; that's why people were flashing the 6950XT or LC bios, so they could push the 18Gbps RAM.
> So I can push above 2200 without any bios flashes or MPT?
> 
> P.S. It's me, the same person; I just got mixed up between two accounts on phone and PC!


Did you change from default timings to fast timings?
Try and check again if you are stable at 2150 and above.


----------



## FoamyV

Hey everybody, what's the consensus on the MSI 6900XT Z Trio vs the Powercolor Ultimate? There's about a 200 euro difference between them, and I'm looking to buy one. Thanks


----------



## Blist66

Azazil1190 said:


> Did you change from default timings to fast timings?
> Try and check again if you are stable at 2150 and above.


I'll try it today! But to get it right: even though the bar shows you can go beyond 2150, it's probably not stable unless you flash a bios from the models mentioned above?
I haven't experimented with the card at all yet; I installed it yesterday, so gotta see what the silicon lottery gave me!!


----------



## nilssohn

Blist66 said:


> that's why people were flashing the 6950XT or LC bios, so they could push the 18Gbps RAM.


18 Gbps is hardware related. That speed of memory is fitted only on the 6900 XT LC and the 6950 XT (KXTX). What you will gain from an LC bios flash on your XTXH is higher VRAM frequencies and the possibility to set FT2.
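For reference, the clock numbers and the "Gbps" ratings in this thread are related by standard GDDR6 arithmetic: the memory clock reported by the driver times 8 gives the effective per-pin data rate, and Navi 21 has a 256-bit bus. A back-of-the-envelope sketch (not vendor documentation):

```python
def gddr6_data_rate_gbps(mem_clock_mhz: float) -> float:
    """Effective per-pin data rate: reported GDDR6 clock x 8 transfers."""
    return mem_clock_mhz * 8 / 1000

def bandwidth_gb_s(mem_clock_mhz: float, bus_width_bits: int = 256) -> float:
    """Total bus bandwidth in GB/s (Navi 21 uses a 256-bit bus)."""
    return gddr6_data_rate_gbps(mem_clock_mhz) * bus_width_bits / 8

# The 2250 MHz clock the LC/6950 XT cards ship with corresponds to 18 Gbps chips:
print(gddr6_data_rate_gbps(2250))  # 18.0 (Gbps)
print(bandwidth_gb_s(2250))        # 576.0 (GB/s)
# ...while the stock 2000 MHz on most 6900 XTs is the 16 Gbps rating:
print(gddr6_data_rate_gbps(2000))  # 16.0 (Gbps)
```

So flashing the LC bios only raises the clock the driver will request; whether your 16 Gbps-rated chips actually run error-free at 18 Gbps speeds is still silicon lottery.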


----------



## Blist66

nilssohn said:


> 18 Gbps is hardware related. That speed of memory is fitted only on the 6900 XT LC and the 6950 XT (KXTX). What you will gain from an LC bios flash on your XTXH is higher VRAM frequencies and the possibility to set FT2.


Oh thanks, starting to understand now.
I started messing around with my Ultimate yesterday with MPT, but I can't understand some things.
The card is looking good thermally; I pulled 380W at around 90 degrees junction.
But the clocks don't go above 2550.
And even the one time they did run stable in 3DMark at an average of 2705 (which is still 50 lower than what I have set as the minimum), the score was worse than running at 2550.
Can anyone elaborate on this, or suggest which settings to play with in MPT? Or is it just kinda bad silicon lottery and it is what it is?


----------



## nilssohn

Blist66 said:


> Started messing around with my ultimate


If this is a PowerColor Radeon RX 6900 XT Red Devil Ultimate, you might also experience issues with the LC bios caused by the missing USB-C port on the RDU. The 6900 XT LC has one, and its bios expects it to be present.


----------



## Blist66

After doing some benchmarks in 3DMark, I see that the max frequency I can hit is usually 50MHz below my min value (i.e. min 2800, max 2900, clocks stay at 2750). I pulled up to 403W and was at 90-97 degrees. At 1.2v the clocks were stable at ~2750MHz, but the scores are really bad, in the low 20k, while with much more chilled watts, volts, and clocks I could hit 22.5k!
Seems weird. I've read a lot of this thread, but I don't know what to do.


----------



## deadfelllow

Blist66 said:


> After doing some benchmarks in 3DMark, I see that the max frequency I can hit is usually 50MHz below my min value (i.e. min 2800, max 2900, clocks stay at 2750). I pulled up to 403W and was at 90-97 degrees. At 1.2v the clocks were stable at ~2750MHz, but the scores are really bad, in the low 20k, while with much more chilled watts, volts, and clocks I could hit 22.5k!
> Seems weird. I've read a lot of this thread, but I don't know what to do.


Are you looking at effective clocks or the normal GPU frequency? XTXH cards usually throttle at 95C; maybe that's why your score is bad.


----------



## vegetagaru

deadfelllow said:


> For that air-cooled card, repasting should be easy. Sapphire doesn't give a **** about repasting your card.  You need to do it.


Well, regarding this issue: instead of repasting it myself, I sent it to where I bought it and detailed my problem (temps going above 110C), and they apparently sent it in for warranty. In the end, 30 days passed and then I received a 6950XT as a replacement instead.


----------



## Gabriel Luchina

I have a Sapphire RX 6900 XT (Nitro+ SE).

For more than 6 months I couldn't play any games anymore, because my computer would just crash.

I tried everything, literally!

Then suddenly I set -2% on the power limit in MSI Afterburner and 2350MHz on the core, unlike the original, which was at 2540MHz and 0% power limit.

All games simply run smoothly now!

Has anyone seen this?


----------



## FoamyV

FoamyV said:


> Hey everybody, what's the consensus on the MSI 6900XT Z Trio vs the Powercolor Ultimate? There's about a 200 euro difference between them, and I'm looking to buy one. Thanks


Any help, guys? Currently looking for one and not sure which to pick; they're the same price at the moment (920 euro). Or should I just wait some more?


----------



## PJVol

FoamyV said:


> Any help guys?


Based on this review, my answer to the title question is a clear "No, it's not!".


----------



## FoamyV

PJVol said:


> Based on this review, my answer to the title question is a clear "No, it's not!".


Got it, thanks. Is the Powercolor Ultimate the best choice, or is there another?


----------



## PJVol

Idk about that particular Powercolor model, but in general the Red Devil, as well as most of the Sapphire cards (I personally prefer their reference-design boards), are the safe rule-of-thumb picks.


----------



## alceryes

From a very technical POV, Buildzoid can tell you which versions to get. This doesn't guarantee any particular performance, of course, but it helps.


----------



## spajdr

weleh said:


> HOW TO FLASH LC REF VBIOS ON XTXH CARD
> 
> *1. *make LINUX USB Liveboot
> *1.a* Download and Install RUFUS (Rufus - Create bootable USB drives the easy way)
> *1.b* Download and burn UBUNTU (https://ubuntu.com/download/desktop)
> 
> *2. *Download XTXH LC Ref Bios from TechPowerUp (can be put inside the same Linux boot USB)
> 
> *3. *Download AMDVBFLASH LINUX Version from TechPowerUp (can be put inside the same Linux boot USB)
> 
> *4. *Insert USB with Ubuntu
> *4.a* Boot from USB and inside Ubuntu select the option that says "Try Ubuntu" (https://ubuntucommunity.s3.dualstac...49a92ce6373041a7f8f50ddf6495f8ac539ad275.jpeg)
> 
> *5.* Inside Linux take AMDVBFLASH and the BIOS from the USB and put it somewhere easy to reach.
> *5.a* Put the rom inside AMDVBFLASH folder
> 
> *6.* Inside the AMDVBFLASH folder, right click and click Open Terminal.
> 
> *7. *Inside terminal run $ sudo amdvbflash -i
> *7.a* This should list all the GPUs you have installed; there's a number next to every one of them. If you have only 1, the number will probably read #0
> 
> *8. *Run this command inside terminal: $ sudo amdvbflash -p 0 biosnamehere.rom to flash
> 
> Reboot, load up windows and this should work. I would strongly advise only doing this with double bios cards otherwise you might **** this up and get a bricked card and have to use external flasher.
> 
> If it doesn't work or you have any doubts, let me know.


I followed the steps, but the first issue was that it reported an SSID mismatch.
If I try to bypass it via -fs or any of the -f commands, it immediately hangs and restarts the PC.
Can someone help?

EDIT.: Fixed, needed to boot ubuntu with safe graphics option
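
If anyone wants to script weleh's two flashing commands rather than typing them, here's a minimal Python sketch (my own hypothetical wrapper, not part of amdvbflash; it assumes the `amdvbflash` binary sits in the current directory, as in the guide). It defaults to a dry run so you can eyeball the adapter index before anything is written:

```python
import subprocess

def build_flash_cmds(rom_path, adapter=0):
    """Build the two amdvbflash invocations from the guide:
    '-i' to list adapters, '-p <index> <rom>' to program one."""
    list_cmd = ["sudo", "./amdvbflash", "-i"]
    flash_cmd = ["sudo", "./amdvbflash", "-p", str(adapter), rom_path]
    return list_cmd, flash_cmd

def flash(rom_path, adapter=0, dry_run=True):
    """Print the commands; only execute them when dry_run=False."""
    for cmd in build_flash_cmds(rom_path, adapter):
        print("would run:" if dry_run else "running:", " ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)  # raise if amdvbflash fails
```

As the guide says: only do this on dual-BIOS cards, otherwise a bad flash means an external flasher.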


----------



## spajdr

But it doesn't boot: if I flash the LC BIOS, it keeps spamming some USB error about power delivery. Which BIOS can be flashed on a Red Devil 6900XT Ultimate then? None of the 6950 XT ones work, from what I've tried.


----------



## jonRock1992

spajdr said:


> But it doesn't boot, if I flash LC bios, it keeps spamming some USB error about power delivery, which bios can then be flashed on Red Devil 6900XT Ultimate? any 6950xt don't work from what I tried.


Sounds like you have an ASUS motherboard; they do that, and the only way around it is to replace the board. I had the same issue with my RDU. I ended up selling my Dark Hero and getting an MSI X570S Carbon Max, and now it boots with the LC vBIOS. The only downside is that DisplayPort 1 doesn't work; you just have to use the second port. There is also a USB driver in Device Manager that needs to be disabled. Currently running 2364 MHz with fast-timings level 2 on my Red Devil Ultimate with the LC vBIOS.


----------



## SpajdrEX

It's a Gigabyte B550 Aorus Pro-P mobo. What exactly needs to be disabled to stop getting spammed with USB warnings? @jonRock1992, and thanks for helping me.


----------



## jonRock1992

SpajdrEX said:


> It's gigabyte b550 aorus pro-p mobo  what exactly needs to be disabled to stop getting spammed with USB warnings? @jonRock1992 and thanks for the helping me.


Oh so you're in the OS? Then just disable that driver in the device manager. I thought you meant you couldn't post because of the USB overcurrent protection error. If you had an ASUS mobo, you wouldn't be able to get into Windows.


----------



## SpajdrEX

jonRock1992 said:


> Oh so you're in the OS? Then just disable that driver in the device manager. I thought you meant you couldn't post because of the USB overcurrent protection error. If you had an ASUS mobo, you wouldn't be able to get into Windows.


Problem is that I don't know what exactly needs to be disabled; there is no exclamation mark to be seen.
Could you please also share the MPT / RSX profile you use with the LC BIOS?


----------



## spajdr

OK, I found the faulty USB-C device, but unfortunately it's still spamming USB overcurrent faults.


----------



## Speed Potato

I don't like to be that guy, but I disassembled my reference AMD 6800 XT with a mounted Bykski waterblock (for the reference Navi 21), and I'm stuck with a bunch of torn-up thermal pads and need help.

I'm going to mount the 6800 XT back on the stock cooler.
I'm installing the waterblock on a reference AMD 6900 XT.

Now, for the part where I mount the stock cooler back on the card, I got myself some thicker paste (SYY 157) and 1mm nylon washers for the 4 spring-mounted die screws. My question: the stock grey pads are somewhat "fibrous"? They look to be 1.5mm on the VRM and 1mm on the memory, but I don't have the instruments to confirm that.
Can anyone confirm?

Second step: I plan on remounting the waterblock on the 6900 XT with the (I think) 2mm pads that came with the waterblock. I've never re-used thermal pads before; is it a good idea? And can someone confirm the thickness?

Thanks!

[Edit]: I reassembled the 6800 XT even though the pads looked like ****. I feel like the recommendation of using M3x1mm nylon washers on the spring screws is dubious; a 0.5mm washer would be much more appropriate (that carbon cloth pad isn't that thick...). I ordered some M2x0.5mm washers on AliExpress (it's an M2 screw to begin with).

When disassembling my 6950 XT, the pads were in much better shape; I don't know what happened with my other card lol. The VRM and memory pads are clearly different, but I'm unsure if the VRM ones are 1.5 or 2mm; it's hard to measure since they're compressed all around. Memory is clearly 1mm. Is 1.5mm even common?


----------



## alceryes

Speed Potato said:


> I dont like to be that guy but I disasembled my reference AMD 6800XT with a mounted bykski waterblock (for the reference navi 21) and I'mstuck with a bunch of torn up thermal pads and I need help.
> 
> I'm going to mount the 6800XT back on the stock cooler.
> I'm installing the waterblock on a reference AMD 6900XT.
> 
> Now, for the part where I mount the stock cooler back on the card, I got myself some thicker paste (SYY 157) and 1mm nylon washers for the 4 spring-mounted die screws. Now my question is:The stock grey pasd are somewhat "fibrous" ? they look to be 1.5mm on the VRM and 1mm on the memory but I don't have the instruments to confirm that.
> Can anyone confirm ?
> 
> Second step: I plan on remounting the waterblock on the 6900XT with the (I think) 2mm pads that came with the waterblock. I never re-used thermal pads before, is it a good idea ? Can someone confirm the thickness ?
> 
> Thanks !
> 
> [Edit]: I reassembled the 6800XT even if the pads looked like ****. I feel like the recommendation of using M3x1mm nylon washers on the spring-screws is dubitious, a 0.5mm washer would be much more appropriate (that carbon cloth pad isn't that thick...). I ordered some M2x0.5mm on Aliexpress (It's an M2 screw to begin with).
> 
> When disasembling my 6950XT, the pads were in a much better shape, I don't know what hapenned with my other card lol. The vrm and memory pads are clearly different but I'm unshure if it's 1.5 or 2mm for the VRM, it's hard to measure since they are compressed all around. memory is clearly 1mm. Is 1.5mm even common ?


Only advice I can offer on this is the pads I used for the back PCB. (I know you're working on the front side)








Linked post in [Official] AMD Radeon RX 6900 XT Owner's Club (www.overclock.net): "I've been looking into it. The Merc319 limited black is basically the same PCB as a red devil Ultimate, but with slightly better input filtering. The MSI gaming trio z has the second best PCB, but the cooler allows rather high junction temps and is rather noisy. AsRock formula OC is the best..."





I used the GP-Extreme 12W 1mm for all the VRM components and 3mm for the blank PCB areas under the memory. As long as your pads are a bit squishy then it's waaaay better to be .5mm too thick than .5mm too thin.


----------



## Kaltenbrunner

How loud is the average 6900xt ? GPU fan noise is really bugging me, but the cards with better quality cooling, like AIO (assuming they work and are quiet)...they are all way more expensive.

Same with 6800xt, at the price of the good cards, there's 6900xt priced cheaper.

What a frustrating market, nevermind the market manipulation.


----------



## alceryes

Kaltenbrunner said:


> How loud is the average 6900xt ? GPU fan noise is really bugging me, but the cards with better quality cooling, like AIO (assuming they work and are quiet)...they are all way more expensive.
> 
> Same with 6800xt, at the price of the good cards, there's 6900xt priced cheaper.
> 
> What a frustrating market, nevermind the market manipulation.


My reference 6900 XT isn't that loud. I DON'T do the zero RPM fan setting as it's bad for hotspot temps. I think I start it at around 20% RPM and it rarely goes above 60%.
Have you thought about taking it apart and repasting? Not all aftermarket 6900 XTs have better cooling solutions. Some are actually worse.
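
For what it's worth, that "20% floor, no zero-RPM" approach is just a piecewise-linear curve. Here's a rough Python sketch; the temperature/duty points are my own illustration, not alceryes' exact settings:

```python
def fan_duty(temp_c, curve=((40, 20), (70, 45), (90, 60), (100, 100))):
    """Piecewise-linear fan curve over (temperature °C, duty %) points.
    Below the first point we hold the floor duty (i.e. no zero-RPM mode)."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            # Linear interpolation between the two surrounding points
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]  # past the last point: pin at max duty
```

Fan-tuning UIs effectively do the same interpolation between the points you drag; the floor duty is what avoids the zero-RPM hotspot spikes.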


----------



## Acegr

Hi,

are those temps normal in a watercooled GPU? This is after time spy bench. Haven't tested in game.


----------



## J7SC

Acegr said:


> View attachment 2567200
> 
> 
> Hi,
> 
> are those temps normal in a watercooled GPU? This is after time spy bench. Haven't tested in game.


First - what kind of water-cooling (AIO, rad size etc) ? In any case, your card's Hotspot in particular looks high; sometimes, that's a mounting / paste / contact issue. For comparison, below is my water-cooled 6900XT with extra MPT PL etc...total rad space is 1200x62mm (dual and triple-core rads), 2x D5 pumps, 1800 rpm push/pull Arctic P12.


----------



## Acegr

J7SC said:


> First - what kind of water-cooling (AIO, rad size etc) ? In any case, your card's Hotspot in particular looks high; sometimes, that's a mounting / paste / contact issue. For comparison, below is my water-cooled 6900XT with extra MPT PL etc...total rad space is 1200x62mm (dual and triple-core rads), 2x D5 pumps, 1800 rpm push/pull Arctic P12.
> View attachment 2567213


Custom loop, 1 d5 running 4800 rpm, 2 radiators 360, 1 radiator 280. The version of the 6900 xt is the ekwb zero, came preblocked.


----------



## J7SC

Acegr said:


> Custom loop, 1 d5 running 4800 rpm, 2 radiators 360, 1 radiator 280. The version of the 6900 xt is the ekwb zero, came preblocked.


...given the hotspot temp at stock PL and a very decent cooling loop, I would consider repasting with something like Gelid GC Extreme, though there are obvious issues re. warranty, thermal pad thickness, etc. Maybe start by carefully checking and, if needed, tightening all the screws holding the block onto the GPU. Caution: don't overtighten; just make sure there are no outliers, starting with the four screws around the GPU die.
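
A quick way to put a number on the "hotspot looks high" call is to compare the edge-to-hotspot delta under load. The ~22°C threshold below is my own rule of thumb for a watercooled card, not an official AMD figure:

```python
def mount_suspect(edge_c, hotspot_c, threshold=22.0):
    """Flag a likely mount/paste problem when the hotspot runs far
    above the edge (average die) temperature under load."""
    return (hotspot_c - edge_c) > threshold
```

e.g. mount_suspect(55, 90) flags the card; a healthy block mount usually keeps the delta well under that.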


----------



## Acegr

J7SC said:


> ...given the Hotspot temp at stock PL and a very decent cooling loop, I would consider repasting w/ sth. like Gelid GC Extreme, though there are obvious issues re. warranty, thermal pad thickness etc among others...may be start by first carefully checking and perhaps tighten (if needed) of all the screws holding the block onto the GPU...caution: don't overtighten, just make sure there are no outliers, starting with the four screws around the GPU die.


I'll ask XFX first before opening it, because it has a warranty sticker over one screw and I don't want to lose the warranty in any case. If I get the OK, I'll open it in September when my wife is back from season work and can keep my son for a while; otherwise it's no bueno with a 2-year-old around. I'll be cleaning the loop at the same time.

Do we have any ready-made pad kit for the 6900 XTs?

Could you please tell me the 3 best GPU pastes that can last a long time without reopening the card? For example, Thermal Grizzly is great but needs changing every 6 months or so.

Thanks


----------



## FLukawa90

Does anyone know why Radeon WattMan sometimes crashes (which leaves two Adrenalin icons)? All settings are at default. Is it the GPU, or something else like the RAM or motherboard?


----------



## J7SC

Acegr said:


> I'll ask xfx first before opening it cause it has an illegal sticker on one screw. Don't wanna lose the warranty in any case. If I get the ok I'll open it on September when my wife is back from the season work and she can keep my son for a while otherwise its no bueno with a 2 year old around. Will be cleaning the loop too same time.
> 
> Do we have any ready kit about 6900 xt's with pads?
> 
> Could you please tell me the 3 best pastes for gpu that can last for a long time without opening it? For example grizzly is great but needs changing every 6 months or so.
> 
> Thanks


...fyi - the temp differential shown in your earlier post between 'general' GPU and Hotspot *might* hint at uneven seating or thermal paste. As to pastes, I use quite a few different ones per below, but for the 6900XT (as well as a 3090) I use Gelid GC Extreme...it is thicker, lasts a long time (I've got some in year 4+) and performs very well.

...thermal pad thickness (ie. for VRAM) varies from card to card and even within, potentially (ie. VRAM > VRM), so best to check with the vendor / manual...I used TG-10 thermal putty instead of pads, but that particular type is no longer available. Google 'thermal putty' for other sources. Thermal putty not only works great, it 'conforms' easily to the available space.


----------



## Acegr

J7SC said:


> ...fyi - the temp differential shown in your earlier post between 'general' GPU and Hotspot '''might''' hint at uneven seating or thermal paste. As to pastes, I use quite a few different ones per below, but for the 6900XT (as well as a 3090) I use Gelid GC Extreme...it is thicker, lasts a long time (I've got some in year 4+) and performs very well.
> 
> ...thermal pad thickness (ie. for VRAM) varies from card to card and even within, potentially (ie. VRAM > VRM), so best to check with the vendor / manual...I used TG-10 thermal putty instead of pads, but that particular type is no longer available. Google 'thermal putty' for other sources. Thermal putty not only works great, it 'conforms' easily to the available space.
> View attachment 2567247


Thanks, I'll be using that paste. What are the best long-lasting, good pads, after putty?


----------



## J7SC

Acegr said:


> Thanks, will be using that paste. Best longlasting and good pads after putty?


...'best long-lasting' pads can easily degenerate into a 'brand fan' argument. Fujipoly pads are generally rated near the top, but for my latest two builds, I used the above-pictured Thermalright "Extreme Odyssey" pads for VRM components (VRAM on the 6900XT and 3090 got the thermal putty). The Thermalright pads are available in different thickness and typically are rated at 12.8 W/mk (which is quite good) though they are harder / a bit less squishy, and getting the right thickness is important.


----------



## tubs2x4

Does the gddr6 mem on the 6900/6950xt run much cooler than the gddr6x that say 3080ti runs? What’s the mem temp during a good gaming session on amd card that’s mildly overclocked? Thx


----------



## J7SC

tubs2x4 said:


> Does the gddr6 mem on the 6900/6950xt run much cooler than the gddr6x that say 3080ti runs? What’s the mem temp during a good gaming session on amd card that’s mildly overclocked? Thx


...only have temps for 6900XT (16 GB GDDR6) and 3090 (24 GB GDDR6X), the latter having double-sided VRAM. Still, both cards have an extra full-contact heatsink w/fan on the back. In general, 3090 GPU and Hotspot run cooler, but VRAM a bit hotter.


----------



## Acegr

J7SC said:


> ...'best long-lasting' pads can easily degenerate into a 'brand fan' argument. Fujipoly pads are generally rated near the top, but for my latest two builds, I used the above-pictured Thermalright "Extreme Odyssey" pads for VRM components (VRAM on the 6900XT and 3090 got the thermal putty). The Thermalright pads are available in different thickness and typically are rated at 12.8 W/mk (which is quite good) though they are harder / a bit less squishy, and getting the right thickness is important.


any opinion about gelid gc ultimate ones?


----------



## J7SC

Acegr said:


> any opinion about gelid gc ultimate ones?


...no opinion either way as I have never used them.


----------



## Enzarch

New driver (22.7.1) claims big OpenGL improvements, did a quick back-to-back with Superposition on OpenGL, and they weren't kidding. 
(for further reference, I get just under 19k on DirectX)


----------



## alceryes

Blist66 said:


> After doing some benchmarks in 3DMark I see that the max frequency I can hit is usually 50 MHz below my min value (i.e. min 2800, max 2900: clocks stay at 2750). I pushed up to 403 W and was at 90-97 degrees. At 1.2 V the clocks were stable at ~2750 MHz, but scores are really bad, in the low 20k, while with much more chilled watts, volts and clocks I could hit 22.5k!
> Seems weird. I've read a lot of this thread, but I don't know what to do


What's your graphics score up to in Time Spy?
NM, I missed your post. Some cards just have a hardware frequency sweetspot, I guess. I get 22k graphics score in Time Spy with the AMD reference 6900 XT undervolted and just using the Radeon software (no MPT).


----------



## alceryes

Enzarch said:


> New driver (22.7.1) claims big OpenGL improvements, did a quick back-to-back with Superposition on OpenGL, and they weren't kidding.
> (for further reference, I get just under 19k on DirectX)
> 
> View attachment 2567599
> View attachment 2567600


Wow!
This kinda makes me ask WTH was wrong with AMD's OpenGL support up to this point?? I mean, a 45% increase?!


----------



## J7SC

...some more good news, at least if you are an FS2020 fan: Nvidia DLSS and AMD FSR are in beta now and will be available when SU10 releases (soon, apparently).


----------



## Enzarch

Whelp, unfortunately it seems the new driver breaks HDR.
The LG C2 is my main display, so I'll be rolling back now.


----------



## MotomEniac

Enzarch said:


> Whelp, unfortunately it seems the new driver breaks HDR.
> LG C2 is my main display, so ill be rolling back now.


Elden ring HDR is running fine on my side, though


----------



## Azazil1190

Enzarch said:


> Whelp, unfortunately it seems the new driver breaks HDR.
> LG C2 is my main display, so ill be rolling back now.


Same on my OLED C1.

*Edit: just tested HDR in Elden Ring and yes, it works.
But in Cyberpunk it doesn't.
The driver has issues with Auto HDR on Win 11.


----------



## kratosatlante

Grindcore77 said:


> Ok im getting this....does this mean I have to use the -f command?


I got the same error and this worked for me:
sudo ./amdvbflash -p -f 0 filename
Beforehand, mark the amdvbflash binary as executable (right-click → Permissions → "execute this program").

PS: using the LC BIOS on an ASRock RX 6900 XT Formula, when the hotspot exceeds 90°C it drops the core to 1500-1800 MHz. Where in MPT should I modify the temperature limit, or is it not possible?
Core temp max 67, mem 63, hotspot 98-101 max.
Disabling TempVmin (set 1.250 V core, 1.175 V SoC) makes it run normally; I will try the 6950 XT Formula BIOS.



http://imgur.com/a/Uy8T33M


----------



## HyperC

The new drivers are doing something crazy: loading a YouTube video, the PC damn near locks up. Turning off hardware acceleration in Chrome does fix it; I have not tested another browser... EDIT: it also happens when playing games and changing my volume ***


----------



## Talon2016

Has anyone had the issue where MPT doesn't show a GPU in the drop-down box? I've loaded my vBIOS and it shows all the details, but the GPU isn't listed in the drop-down. Any idea how to fix this?

Found a workaround. Not a fix for the program, but I can finally use it to edit my power settings.


----------



## guskline

Just joined with an AMD reference 6950XT that I bought from the AMD store to upgrade from my 6800. I ordered another EK fullblock and back plate and installed them on it.

MSFS2020 says THANK you! Running a custom water-cooled loop (XSPC twin-D5 Revo, 480 + 360 thick rads; Optimus waterblock for my 5900X and EK waterblock + backplate for the 6950 XT).

I moved the RX6800 water cooled into my 3900x rig.


----------



## Michailov

Hello, maybe you have some ideas to resolve my problem. I bought a Red Devil Ultimate RX 6900 XT. When I flash the BIOS from a Red Devil RX 6950 XT, I can't see any image and my BIOS starts beeping strangely, but Windows loads and the core clock is stuck at 500 MHz. I've read that I should load a BIOS in MPT, but which BIOS: my original one, or the one I flashed? Is there also any chance of fixing the no-image-in-BIOS issue? My mobo is an MSI B450 Gaming Plus Max. Thanks for any ideas.


----------



## Maulet//*//

Michailov said:


> Hello, Maybe you have any ideas to resolve my problem. I bought red devil ultimate rx6900xt. When I flash bios from red devil rx6950xt I cannot see any images and my bios start beeping strange. But Windows is loading and core clock stuck at 500MHz. I already read that should load bios by MPT but what bios? My original bios or that what I flashed? Is there also any chance to fix that no image in bios? My mobo is MSI b450 Gaming max plus. Thanks for any ideas.


Did you find and follow successful instructions for your exact card? If not, return to the stock BIOS and forget it.


----------



## guskline

Stay with the stock bios


----------



## Azazil1190

Hi guys and have a great month!
One quick question...
Does anyone have issues on Win 11 with Dolby Atmos on a 6000-series card?
It's impossible to enable spatial sound for Dolby Atmos.
On my 3090 everything is fine.


----------



## 8800GT

Azazil1190 said:


> Hi guys and have a great month!
> One quick question...
> Does anyone have issues at win 11 with dolby atmos via 6000?
> Its impossible to enable the spartial sound for dolby Atmos.
> On my 3090 everything its fine.


I had issues on and off even with windows 10. Try using HeSuVi.


----------



## Azazil1190

8800GT said:


> I had issues on and off even with windows 10. Try using HeSuVi.


I'm gonna give it a try!
Thanks a lot!


----------



## St0RM53

I'll soon have on hand 4x reference AMD 6900 XTs, and I can keep one of them.
I won't have much time for deep benchmarking, only about 8 hours.
Which benchmarks and parameters should I watch in order to find the overall best one?
There will be differences between them:

- Vcore offset (positive or negative) vs frequency
- VRAM clock stability
- Temperature variations due to cooler / carbon thermal pad placement from factory assembly
- Age/degradation of the die and/or passive components, as they are all used cards

I own 3DMark, and with MPT I know I can push them with the stock cooler at max fans to 380-400 W for a benchmark run like Time Spy Extreme / Port Royal.
I also heard that with recent drivers you can unlock Vcore past 1.175 V and VRAM clock past 2150 MHz (can someone confirm?), which could open up some more performance if you replace the carbon pad with liquid metal and/or a new cooler/waterblock. I don't know how much power these VRMs can handle, however.
Also, if anyone has more advanced MPT settings, please feel free to post your profile.
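
For picking the keeper out of the four in ~8 hours, one workable approach: run the same short suite on every card at identical settings, normalize each benchmark to the best score across the cards (so Time Spy's ~20k scale doesn't drown out Port Royal's ~10k scale), and rank by the weighted average. A sketch in Python; the benchmark names and weights below are made up for illustration, and it assumes every card ran every benchmark:

```python
def rank_cards(results, weights=None):
    """results: {card_name: {benchmark: score}}. Normalize each benchmark
    to the best score across cards, then average (optionally weighted)."""
    benches = sorted({b for scores in results.values() for b in scores})
    weights = weights or {b: 1.0 for b in benches}
    best = {b: max(scores[b] for scores in results.values()) for b in benches}
    totals = {
        card: sum(weights[b] * scores[b] / best[b] for b in benches) / sum(weights.values())
        for card, scores in results.items()
    }
    # Best card first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

Usage would look like rank_cards({"card_1": {"timespy": 20000, "port_royal": 10000}, ...}), with the first tuple being the keeper.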


----------



## Blist66

St0RM53 said:


> I'll soon have on hand 4x reference AMD 6900xt's. I can keep one of them.
> I won't have much time to do deep benchmarking, only about 8 hours.
> Which benchmarks and parameters should i watch in order to find out the overall best one?
> There will be differences between them:
> 
> Vcore offset (positive or negative) vs frequency
> Vram clock stability
> Temperature variations due to cooler/carbon thermal pad placement from factory assembly
> Age/degrading of die and/or passive components as they are all used cards
> I own 3dmark and with MPT i know i can push them with the stock cooler with max fans to 380-400w for a benchmark run like time spy extreme/port royal.
> I also heard with the recent drivers you can unlock vcore past 1.175V and Vram clock past 2150mhz if someone can confirm, which can open some more performance if you replace the carbon pad with liquid metal and/or a new cooler/waterblock. I don't know how much power can these VRM's handle however.
> Also if someone has more advanced MPT settings please feel free to post your profile.


I think pushing 400 W on a reference 6900 on air is way too optimistic, but I hope you can do it if you get a really good sample.


----------



## ZealotKi11er

St0RM53 said:


> I'll soon have on hand 4x reference AMD 6900xt's. I can keep one of them.
> I won't have much time to do deep benchmarking, only about 8 hours.
> Which benchmarks and parameters should i watch in order to find out the overall best one?
> There will be differences between them:
> 
> Vcore offset (positive or negative) vs frequency
> Vram clock stability
> Temperature variations due to cooler/carbon thermal pad placement from factory assembly
> Age/degrading of die and/or passive components as they are all used cards
> I own 3dmark and with MPT i know i can push them with the stock cooler with max fans to 380-400w for a benchmark run like time spy extreme/port royal.
> I also heard with the recent drivers you can unlock vcore past 1.175V and Vram clock past 2150mhz if someone can confirm, which can open some more performance if you replace the carbon pad with liquid metal and/or a new cooler/waterblock. I don't know how much power can these VRM's handle however.
> Also if someone has more advanced MPT settings please feel free to post your profile.


Check the card with the highest graphics clock in AMD Settings > tuning tab. That should be the best for OC.


----------



## nordskov

ZealotKi11er said:


> Check the card with the highest graphics clock in AMD Settings > tuning tab. That should be the best for OC.


Not so sure about that. Mine is only 2606 with AMD OC; my brother's 6900 XT pushes 2642. But mine does 2900+ and his only 2820 or so. He's able to do 24600 in Time Spy vs my 25799 (mine a Sapphire Toxic Extreme, his an ASUS ROG Strix 6900 XT), both watercooled with fine temps, and both XTXH.


----------



## J7SC

nordskov said:


> Not so sure about that. Mine is only 2606 with amd oc. My brother 6900xt pushes 2642. But mine does 2900+ and his only 2820 or so. Hes able to do 24600 timespy vs mine 25799 (mine Sapphire toxic Extreme) (his Asus rog strix 6900xt) both watercooled fine temps and both xtxh


...the base value in AMD Settings > tuning tab is related to the old ASIC % quality rating GPU-Z used to offer years back (until too many folks RMAed their low-ASIC-value cards). It is based on the chip's rating at a fixed voltage by the factory. Another way to get it (and some extra info, i.e. slope) is to use MSI AB on a stock-settings GPU (no OC, no MPT) and push Ctrl+F.

...differences in vbios and other parts of the card and the rest of the system impact overall 'max clocks' as well, so your and your brother's results are not unheard of.


----------



## nordskov

J7SC said:


> ...the base value in AMD Settings > tuning tab is related to the old ASIC % quality rating GPUz used to offer years back (until too many folks RMAed their low ASIC value card). It is based on the chip's rating at a fixed voltage by the factory. Another way to to get it (and some extra info, ie. slope) is to use MSI AB on a stock setting GPU (no oc, no MPT) and push Ctrl/F.
> 
> ...differences in vbios and other parts of the card and the rest of the system impact overall 'max clocks' as well, so your and your brother's results are not unheard of.


Ohh, I thought it was just the max base value in the driver. If we press auto-OC he got like 2730, where mine was 2710-ish.


----------



## 99belle99

J7SC said:


> ...the base value in AMD Settings > tuning tab is related to the old ASIC % quality rating GPUz used to offer years back (until too many folks RMAed their low ASIC value card). It is based on the chip's rating at a fixed voltage by the factory. Another way to to get it (and some extra info, ie. slope) is to use MSI AB on a stock setting GPU (no oc, no MPT) and push Ctrl/F.
> 
> ...differences in vbios and other parts of the card and the rest of the system impact overall 'max clocks' as well, so your and your brother's results are not unheard of.


I just tried that on my reference card with the stock cooler: 2573 MHz, which is in and around the clocks I get even using MPT in TS. When I try to run a higher clock in TS it just crashes.


----------



## nordskov

99belle99 said:


> I just tried that on my reference card with stock cooler 2573MHz which is in and around the clocks I would get even using MPT in TS. When I try to get a higher clock in TS it will just crash.


Using MPT, in Time Spy I get 2872 set and around 2825 effective clock. In Fire Strike I'm able to go 2964, which is around 2900+ effective clock. But without MPT I'm only able to do around 2740 effective clock in Time Spy, otherwise it crashes. Yet the AMD driver base value is only 2606 MHz.


----------



## CfYz

Enzarch said:


> Whelp, unfortunately it seems the new driver breaks HDR.
> LG C2 is my main display, so ill be rolling back now.


I noticed it too, but this only happens when you set 10-bit in the display settings in the Adrenalin control panel. After changing to 8-bit, HDR is available in the Windows display settings again...


----------



## nordskov

How is performance with the new drivers in Time Spy?


----------



## jonRock1992

I went all the way back to driver 22.2.3 yesterday just so I could play a new HL: Alyx mod. Oculus Link is basically broken with newer drivers. Really wish AMD would pay more attention to VR. Currently, there is just no AMD driver that's good at everything.


----------



## ZealotKi11er

nordskov said:


> Using mpt and timespy i get 2872 and around 2825 effective clock. In firestrike im able to go 2964 thats around 2900+ effective clock. But without mpt im only able to do around 2740 effective clock in timespy otherwise it crash. But amd driver base value is only 2606mhz.


This with stock voltage?


----------



## nordskov

ZealotKi11er said:


> This with stock voltage?


Yes. But had to up voltage from Stock 1.2v to 1.250v to go 2900+


----------



## ZealotKi11er

nordskov said:


> Yes. But had to up voltage from Stock 1.2v to 1.250v to go 2900+


I see. I used to have a pretty high-clocking card which did 2870 MHz @ 1.2 V for Time Spy. I did not try to go higher with more voltage. I might try this winter to see how far I can go, but then there is Navi 31, which will probably be more fun to OC since it's new.


----------



## nordskov

ZealotKi11er said:


> I see. I used to have a pretty high clocking cards which did 2870MHz @ 1.2v for TimeSpy. Did not try to go higher with more voltage. Might try this winter to see how much I can go but then there is NV31 which probably will be more fun to OC since its new.


At stock 1.2 V I could do 2872 as well, but I used MPT to up the watt limit and gain higher scores. I tried taking it further, actually to 1.325 V, but there were no gains past 1.287 V. At 1.250 V it's stable at 2900+ MHz effective in games and Fire Strike; Time Spy is only stable at 2872 and averages around 2830-ish. I put liquid metal on the GPU, and that keeps temps down at 65°C core / 78°C junction @ 1.287 V and a 500 W limit.


----------



## NiteNinja

Good evening.

I just got a Gigabyte Waterforce 6900XT. I'm having some interesting issues with it, and here is what I've been spending most of the night on.

It works for some games, but crashes immediately or black screens on others. FurMark can run it full tilt with no issues, temps are all good, and everything is stable, stock and overclocked. Some games like Doom 2016 and Genshin Impact run fine, but others like Fallout 76 or Doom Eternal just crash out. 3DMark won't start Time Spy or Fire Strike either; it just kicks me out and says an error occurred. Quake II RTX freezes too.

I was on Windows 11 first; I ran DDU and installed both the WHQL and Optional drivers, no luck. I rolled my system back to Windows 10, no luck. Now I'm trying older drivers, with no luck either. I've even seen some people disconnect their internet, go into safe mode, run DDU, install the new driver and go, but that doesn't work either. I've tried underclocking, undervolting, everything.

I'm using an EVGA 1,000W Platinum power supply with a 5900X CPU and 32GB of HP V10 3200MHz CL14 memory (Samsung B-die) on an ASRock X470 Gaming K4. I've installed and reinstalled all the chipset drivers, the motherboard BIOS is updated, and the VBIOS is re-flashed to factory in case it was a miner card (it was bought open-box in new condition, still with the plastic peel on it). The most my system pulls is about 840W per the readout on my UPS, but it crashes in these games no matter how much or how little power it pulls.

I'm chalking it up to bad drivers, but I really hope it isn't something on the card itself. I know different workloads can hit the core differently, and I can't seem to find anything in common. Doom Eternal doesn't work in either DirectX or Vulkan.

Any help or advice will be highly appreciated and thanks in advance!

Edit: Couple more things I've done. I relocated the GPU from vertical mount to horizontal standard position, both PCI-E slots. Rolled all the way back to even 22.2.4. Disabled every overlay possible. GTA5 seems to work fine. I bought this card for VR but haven't tested any VR titles yet.


----------



## MotomEniac

NiteNinja said:


> Good evening.
> 
> I just got a Gigabyte Waterforce 6900XT. I'm having some interesting issues with it, and here is what I've been spending most of the night on.
> 
> It works for some games, but crashes immediately or black screens on others. FurMark can run it full tilt with no issues, temps are all good, and everything is stable, stock and overclocked. Some games like Doom 2016 and Genshin Impact run fine, but others like Fallout 76 or Doom Eternal just crash out. 3DMark won't start Time Spy or Fire Strike either; it just kicks me out and says an error occurred. Quake II RTX freezes too.
> 
> I was on Windows 11 first; I ran DDU and installed both the WHQL and Optional drivers, no luck. I rolled my system back to Windows 10, no luck. Now I'm trying older drivers, with no luck either. I've even seen some people disconnect their internet, go into safe mode, run DDU, install the new driver and go, but that doesn't work either. I've tried underclocking, undervolting, everything.
> 
> I'm using an EVGA 1,000W Platinum power supply with a 5900X CPU and 32GB of HP V10 3200MHz CL14 memory (Samsung B-die) on an ASRock X470 Gaming K4. I've installed and reinstalled all the chipset drivers, the motherboard BIOS is updated, and the VBIOS is re-flashed to factory in case it was a miner card (it was bought open-box in new condition, still with the plastic peel on it). The most my system pulls is about 840W per the readout on my UPS, but it crashes in these games no matter how much or how little power it pulls.
> 
> I'm chalking it up to bad drivers, but I really hope it isn't something on the card itself. I know different workloads can hit the core differently, and I can't seem to find anything in common. Doom Eternal doesn't work in either DirectX or Vulkan.
> 
> Any help or advice will be highly appreciated and thanks in advance!


I had the same issues when I bought my 6900 XT. It turned out to be Afterburner installed on my system causing them, even without any OC applied.


----------



## alceryes

St0RM53 said:


> I'll soon have on hand 4x reference AMD 6900xt's. I can keep one of them.
> I won't have much time to do deep benchmarking, only about 8 hours.
> Which benchmarks and parameters should i watch in order to find out the overall best one?
> There will be differences between them:
> 
> Vcore offset (positive or negative) vs frequency
> Vram clock stability
> Temperature variations due to cooler/carbon thermal pad placement from factory assembly
> Age/degrading of die and/or passive components as they are all used cards
> I own 3dmark and with MPT i know i can push them with the stock cooler with max fans to 380-400w for a benchmark run like time spy extreme/port royal.
> I also heard that with the recent drivers you can unlock vcore past 1.175V and VRAM clock past 2150MHz, if someone can confirm, which can open up some more performance if you replace the carbon pad with liquid metal and/or a new cooler/waterblock. I don't know how much power these VRMs can handle, however.
> Also if someone has more advanced MPT settings please feel free to post your profile.


What's your goal? Get the best one? The most stable one at low vcore...?

Start with a static OC of something like 2600MHz/2100MHz (with fast timings) at a voltage of only 1125mV. You want to test the hardware more than the cooling; cooling can be replaced later, right?
With those settings, loop Time Spy or something similar for 15 minutes with each card, logging the runs with HWiNFO64. See which one comes out on top!

For comparison, mine is an AMD reference RX 6900 XT. I get over 22k graphics Time Spy score with the settings above.
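The comparison loop above can be boiled down to a small log-parsing script. This is a minimal sketch, assuming each card's run is exported to CSV from HWiNFO64; the column names used here are assumptions and will differ depending on your sensor layout, so match them to your own export headers.

```python
import csv
import io
import statistics

def summarize_log(csv_text,
                  clock_col="GPU Clock (Effective) [MHz]",
                  hot_col="GPU Hot Spot Temperature [C]"):
    """Average effective clock and peak hotspot from one HWiNFO CSV export."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    clocks = [float(r[clock_col]) for r in rows]
    hots = [float(r[hot_col]) for r in rows]
    return statistics.mean(clocks), max(hots)

# Hypothetical exports for two cards looped at the same static OC.
card_a = """GPU Clock (Effective) [MHz],GPU Hot Spot Temperature [C]
2588,78
2592,80
2590,79
"""
card_b = """GPU Clock (Effective) [MHz],GPU Hot Spot Temperature [C]
2570,84
2575,86
2572,85
"""

for name, log in [("card A", card_a), ("card B", card_b)]:
    avg_clk, peak_hot = summarize_log(log)
    print(f"{name}: avg effective {avg_clk:.0f} MHz, peak hotspot {peak_hot:.0f} C")
```

At a fixed voltage and clock, the card holding the higher average effective clock with the lower peak hotspot is the better bin.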


----------



## jonRock1992

22.7.1 is not a good driver. It breaks HDR if 10-bit pixel format is enabled. Also, Death Stranding is STILL broken in this driver. Also, anything past 22.2.3 basically breaks Oculus Link. I've had enough of AMD's drivers. Probably going Nvidia for my next GPU. I love my 6900 XT, but damn these drivers have issues.


----------



## FLukawa90

I finally repasted my 6900 XT Toxic EE using Kingpin KPX after several tries. What do you guys think? Idle temps: 35C core, 39C hotspot. The throttling is gone now.
Maybe it's just me, but I think the motherboard affects this too. This result is with a Gigabyte B450M Gaming; with an Asus Dark Hero it runs a bit hotter and the core clock is lower, around 2250.


----------



## Sufferage

FLukawa90 said:


> I finally repasted my 6900 XT Toxic EE using Kingpin KPX after several tries. What do you guys think? Idle temps: 35C core, 39C hotspot. The throttling is gone now.
> Maybe it's just me, but I think the motherboard affects this too. This result is with a Gigabyte B450M Gaming; with an Asus Dark Hero it runs a bit hotter and the core clock is lower, around 2250.
> View attachment 2568522


What's your ambient temperature ?
I didn't see such high temps even when my Toxic Air Cooled was still on air and running @2500MHz 🤔🤔


----------



## FLukawa90

Sufferage said:


> What's your ambient temperature ?
> I didn't see such high temps even when my Toxic Air Cooled was still on air and running @2500MHz 🤔🤔


About 30C. When I first bought it, it stayed just under 60C in stress tests, but after 4 months, boom: junction temp throttling within 2 minutes.


----------



## LtMatt

jonRock1992 said:


> 22.7.1 is not a good driver. It breaks HDR if 10-bit pixel format is enabled. Also, Death Stranding is STILL broken in this driver. Also, anything past 22.2.3 basically breaks Oculus Link. I've had enough of AMD's drivers. Probably going Nvidia for my next GPU. I love my 6900 XT, but damn these drivers have issues.


Turn 10-bit pixel format off and use only the 10 bpc option under the display tab. 10-bit pixel format is for 10-bit support in apps like Photoshop. You don't need to enable it for games; the 10 bpc option will give you 10-bit color.


----------



## 8800GT

FLukawa90 said:


> About 30C. When I first bought it, it stayed just under 60C in stress tests, but after 4 months, boom: junction temp throttling within 2 minutes.


Funny, I have the same problem with my waterblocked 6900. It seems every couple of weeks I have to repaste and remount it, or eventually the junction temps start ballooning to 30 or 40 above edge. Washer mods and multiple pastes didn't work. Having said that, try a repaste and remount if you haven't, and make sure it's nice and tight. When mine was on air, the screws came pretty loose from the factory. Always something to check.


----------



## nordskov

FLukawa90 said:


> I finally repasted my 6900 XT Toxic EE using Kingpin KPX after several tries. What do you guys think? Idle temps: 35C core, 39C hotspot. The throttling is gone now.
> Maybe it's just me, but I think the motherboard affects this too. This result is with a Gigabyte B450M Gaming; with an Asus Dark Hero it runs a bit hotter and the core clock is lower, around 2250.
> View attachment 2568522


Extremely hot for only 332W. I get like 57C and a 72C hotspot at 432W with my 6900 XT Toxic EE, with fan speed maxing at only 1050rpm. And that's at the stock 1.2V, not only 1.0V, running a 2825MHz boost daily. I get a 24200 graphics score with only a higher watt limit: core set to 2825, memory at 2140MHz fast timings, stock 1.2V.

I had similar temps before repasting. After a repaste with Thermal Grizzly Conductonaut (liquid metal) it dropped a massive 30-35C on the hotspot, and the difference between edge temp and junction dropped from 32C to about 17C. It made a huge difference to overall noise and performance: fans dropped from 2000rpm to 1000, and it ran way colder at lower fan speeds than it did before at higher speeds. My brother's 6900 XT Asus Strix 240 AIO was at the same temps; he repasted with Thermal Grizzly as well and dropped temps by around 30C. He too is able to do a 24000+ graphics score daily with max junction in the 70C area, with low fan settings. Can't recommend it enough.

I've been running this for 6 months now with no temp increases. Heck, I'm even running it on my CPU; it keeps temps under 80C on my 5800X and lets it do a 12900-13200 CPU score 👌


----------



## guskline

On the AMD 22.7.1 driver I was failing the AMD stress test regardless of default settings etc. I reverted to the recommended 22.5.1 driver and passed with flying colors.

I have an MSI Unify X570 motherboard with a custom-watercooled 5900X (Optimus WB) and an EK full block and backplate on my reference AMD 6950 XT. I'm using a twin-D5 Revo pump (two D5s in series) with a Corsair X9-480 thick rad and a HW Labs 360 thick rad, an EVGA 1000W Platinum PSU, and Win 11.

Drivers have become SO important with these perceived crashes. I'm resisting the latest drivers unless they're recommended by the vendor.

BTW, despite the high-end hardware I do NOT OC; no need to.


----------



## thomasck

What are you guys thinking about 22.7.1?
It gave me better scores in all 3dmark benchmarks.
All tests were full stock, ddu > remove old driver > install new driver > reboot > run all


----------



## jonRock1992

thomasck said:


> What are you guys thinking about 22.7.1?
> It gave me better scores in all 3dmark benchmarks.
> All tests were full stock, ddu > remove old driver > install new driver > reboot > run all
> 
> View attachment 2568578


Not a fan of that driver. I'm using NimeZ 22.5.1 with the DXNavi and OpenGL beta options. This fixes the issues I was having with FreeSync, and it gives the greater OpenGL performance from the newer drivers.


----------



## nordskov

thomasck said:


> What are you guys thinking about 22.7.1?
> It gave me better scores in all 3dmark benchmarks.
> All tests were full stock, ddu > remove old driver > install new driver > reboot > run all
> 
> View attachment 2568578


Nice work. Might try it out 🤩


----------



## ZealotKi11er

thomasck said:


> What are you guys thinking about 22.7.1?
> It gave me better scores in all 3dmark benchmarks.
> All tests were full stock, ddu > remove old driver > install new driver > reboot > run all
> 
> View attachment 2568578


That TimeSpy score might get me to try to break my PR.

My 6900 XT: Result


----------



## nordskov

ZealotKi11er said:


> That TimeSpy score might get me to try to break my PR.
> 
> My 6900 XT: Result


What's the difference in graphics score if you run the same clocks?


----------



## alceryes

thomasck said:


> What are you guys thinking about 22.7.1?
> It gave me better scores in all 3dmark benchmarks.
> All tests were full stock, ddu > remove old driver > install new driver > reboot > run all
> 
> View attachment 2568578


Is this just the graphics score or overall score?


----------



## nyk20z3

Has anyone put a block on a Strix 6900 XT LC? Phanteks and EK both make a block for it, but I don't see anything anywhere about it actually being done. I'm most likely going to block it to cut down on the bulk, because the stock tubing is just too long. I was curious whether it's been done before, just for the visuals.


----------



## MotomEniac

nyk20z3 said:


> Has anyone put a block on a Strix 6900 XT LC? Phanteks and EK both make a block for it, but I don't see anything anywhere about it actually being done. I'm most likely going to block it to cut down on the bulk, because the stock tubing is just too long. I was curious whether it's been done before, just for the visuals.


I've just checked EKWB's site, and their configurator references these parts: 3831109836781 and 3831109836774 in their catalog.
If you don't want that brand, you can cross-reference by checking which cards EKWB's water block supports, and then you'll know...


----------



## guskline

I think that the EKWB for the 6900/6950 will only work on the reference PCB unless EK says otherwise.


----------



## BOBKOC

J7SC said:


> ASIC %


OCCT & HWiNFO


----------



## MotomEniac

guskline said:


> I think that the EKWB for the 6900/6950 will only work on the reference PCB unless EK says otherwise.


Just checked, and their description of the block says: "EK-Quantum Vector Strix RX 6800/6900 is a 2nd generation Vector GPU water block from the EK® Quantum Line. It is made for ROG Strix RX 6800, 6800XT, 6900XT graphics cards based on the latest AMD® RDNA2™ architecture."
So I think it is compatible.


----------



## x7007

CS9K said:


> Radeon 6000 GPU's do not support HDMI Forum VRR, only freesync. AMD has not made comment as to why this is so, unfortunately.
> 
> The b/f and I both have an LG CX, I with my RX 6900 XT LC, he with a 3080Ti FE. AMD requires freesync to be enabled to get VRR on the LG OLEDs


So there's still no fix for that? I have an HDFury VRROOM, and it can't work with FreeSync, so I can't use this $550 device with my TV and projector both connected the way I want. First AMD doesn't put two HDMI ports on their GPUs (mine is the Sapphire), and then they invent technology that doesn't pass through correctly with their drivers, even though Nvidia's does!

It's so annoying that I'll need to get an Nvidia 4xxx series card, which I don't want.


----------



## Aristeidis

Does anyone have the Gigabyte RX 6900 XT Gaming OC? I have an issue (don't know if it is one): when it's powered on, even during stress testing, only 2 of the 3 power LEDs are lit. The only time I see all 3 on is during boot, for a couple of seconds. Is this normal?


----------



## Scorpion667

Think I figured out why my Toxic EE has hotspot temp degradation between repastes. I noticed a 4-5C hotspot increase after fiddling with a vertical GPU mount, which I ended up reverting, but the hotspot temp didn't drop back. I ended up just tightening the 4 screws around the GPU and the temp is back to normal. HOWEVER, the screw was a bit loose on the GPU corner that always shows bad TIM coverage. I suspect it loosens over time, as I was super OCD about tightening in an X pattern with appropriate torque when I repasted a month ago. I used a permanent marker to draw a line across the backplate and screw, so next time I repaste I'll see whether the screw came loose. If so, I'll just use some Loctite on the screws. EZ.


----------



## NiteNinja

Aristeidis said:


> Does anyone have the Gigabyte RX 6900 XT Gaming OC? I have an issue (don't know if it is one): when it's powered on, even during stress testing, only 2 of the 3 power LEDs are lit. The only time I see all 3 on is during boot, for a couple of seconds. Is this normal?
> View attachment 2568774


Try switching the center and left cables and see if the light moves. If the off light moves to the middle one, then you have a bad cable.

When I had my waterforce card it had all three lit.


----------



## earphonelnwshop

Today I tried flashing the LC BIOS onto my XFX 6900XT ZERO. It can be done and it works; waiting on long-term results.

Test results


----------



## Acegr

earphonelnwshop said:


> Today I tried flashing the LC BIOS onto my XFX 6900XT ZERO. It can be done and it works; waiting on long-term results.
> 
> Test results
> View attachment 2568850
> View attachment 2568851
> View attachment 2568852
> View attachment 2568853
> View attachment 2568854


Nice, I've got the same card. Which BIOS did you flash? Can I have a link? What are your GPU and junction temps? I'm going to open mine for a repaste and different pads, since it reaches a 97C junction.


----------



## nordskov

earphonelnwshop said:


> Today I tried flashing the LC BIOS onto my XFX 6900XT ZERO. It can be done and it works; waiting on long-term results.
> 
> Test results
> 
> 
> View attachment 2568867
> View attachment 2568851
> View attachment 2568852
> View attachment 2568853
> View attachment 2568854


Don't get it 😅 What's the point, when that's about what you can get, or even better, with the normal 6900 XT BIOS? 🤷‍♂️


----------



## MotomEniac

earphonelnwshop said:


> Test results


Insane RIG!


----------



## thomasck

alceryes said:


> Is this just the graphics score or overall score?


Just graphics!


----------



## earphonelnwshop

Acegr said:


> Nice, I've got the same card. Which BIOS did you flash? Can I have a link? What are your GPU and junction temps? I'm going to open mine for a repaste and different pads, since it reaches a 97C junction.


6900 XT LC reference BIOS (memory speed 18.5Gbps / memory bandwidth 591GB/s), Navi 21, hosted on www.mediafire.com.

You must use the Linux version of AMDVBFLASH and flash from Linux only (download an Ubuntu ISO, burn it to a flash drive, then boot into the Ubuntu OS).

Then rock 'n' roll.
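For reference, the backup-then-flash sequence can be sketched as the commands it builds. This is a minimal sketch only: the `-s` (save) and `-p` (program) flags are the commonly cited ones for amdvbflash, and the adapter index and ROM filenames here are hypothetical, so verify everything against `amdvbflash -i` output on your own system before flashing anything.

```python
# Build the amdvbflash command lines for a backup-then-flash sequence.
# Flag names (-s to save, -p to program) are the commonly cited ones for
# the Linux amdvbflash tool -- verify against your build before running.
ADAPTER = 0  # first GPU as listed by `amdvbflash -i` (assumption)

def backup_cmd(adapter, out_file):
    """Command to save the current VBIOS to a file."""
    return ["amdvbflash", "-s", str(adapter), out_file]

def flash_cmd(adapter, rom_file):
    """Command to program a new VBIOS onto the adapter."""
    return ["amdvbflash", "-p", str(adapter), rom_file]

steps = [
    backup_cmd(ADAPTER, "stock_6900xt_zero.rom"),  # always keep a backup first
    flash_cmd(ADAPTER, "6900xt_lc.rom"),           # then program the LC BIOS
]
for cmd in steps:
    print(" ".join(cmd))
```

Keeping the backup ROM means you can reverse the flash with the same `-p` command if the card misbehaves long-term.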


----------



## kairi_zeroblade

Aristeidis said:


> Does anyone have the Gigabyte RX 6900 XT Gaming OC? I have an issue (don't know if it is one): when it's powered on, even during stress testing, only 2 of the 3 power LEDs are lit. The only time I see all 3 on is during boot, for a couple of seconds. Is this normal?
> View attachment 2568774


Short answer: no, not normal. Are your cables on a piggyback config?


----------



## Acegr

I'm so freaking tired, but I guess the result was worth it... This is about the 6900 XT EKWB ZERO. Take a look especially at the temps.

Results before changing pads and paste, at stock settings.

After changing pads and paste.

Let's see how it was; since I had it open, I took some photos.

Cleany cleany.

Putting on the Gelid Ultimate pads.

Let's see some temps with a basic OC, before and after pads and repaste. The OC was 15% power limit, 1200mV, 2100 fast-timing DRAM, 2500 min, 2650 max.

Before

After

What do you think, guys? Everything seems ok?


----------



## nordskov

Acegr said:


> Ιm so freaking tired but I guess the result was worth it... This is about 6900 XT EKWB ZERO. Take a look especially at the temps.
> 
> Results before changing pads and paste on stock settings.
> View attachment 2568929
> 
> 
> After changing pads and paste.
> View attachment 2568930
> 
> 
> Let's see how it was since I opened it I took some photos.
> 
> 
> View attachment 2568932
> 
> View attachment 2568933
> 
> 
> Cleany cleany
> 
> View attachment 2568934
> 
> View attachment 2568935
> 
> 
> Putting Gelid Ultimate Pads
> 
> View attachment 2568937
> 
> View attachment 2568936
> 
> 
> Lets see some temps with some basic oc before and after pads and repaste. Oc was 15% power limit, 1200mv, 2100 fast timing dram, 2500 min, 2650 max.
> 
> Before
> View attachment 2568938
> 
> 
> After
> 
> View attachment 2568939
> 
> 
> What do you think guys? Everything seems ok?


The actual temps seem okay, but I personally think the hotspot is high for that power level: you've got a 25C difference between edge temp and hotspot. After I repasted my Toxic and went to a max power limit of 440W (it usually peaks at 432W max, with a 2840MHz boost at 1.2V and RAM at 2140MHz fast timings), I get like 58C and a 73C hotspot with fan speed maxing at 1050rpm. So without knowing how fast your fans go, I'd say it could be better. But if it's not liquid metal, I guess these are pretty normal temps. I used Thermal Grizzly Conductonaut LM.


----------



## Acegr

nordskov said:


> The actual temps seem okay, but I personally think the hotspot is high for that power level: you've got a 25C difference between edge temp and hotspot. After I repasted my Toxic and went to a max power limit of 440W (it usually peaks at 432W max, with a 2840MHz boost at 1.2V and RAM at 2140MHz fast timings), I get like 58C and a 73C hotspot with fan speed maxing at 1050rpm. So without knowing how fast your fans go, I'd say it could be better. But if it's not liquid metal, I guess these are pretty normal temps. I used Thermal Grizzly Conductonaut LM.


It's Gelid Extreme, not LM, and my fans are only in push, no push-pull. I'd guess my temps can't get any better with that setup for now, or maybe I did a poor job with the paste or pads, I don't know. I did an X of paste and added a really small blob in the middle too. It's still 19C down, and I can now stably push up to 2700 without issues and without touching MPT.


----------



## Acegr

Question: the PCB got scratched when I cleaned off the paste. The scratch is small, though, with no dent; you can only see it from the side, and the PC works fine. Is it possible this affects temps? I've obviously lost my warranty now, since it's scratched and I can't prove I didn't do it.


----------



## alceryes

Acegr said:


> Ιm so freaking tired but I guess the result was worth it... This is about 6900 XT EKWB ZERO. Take a look especially at the temps.
> 
> Results before changing pads and paste on stock settings.
> View attachment 2568929
> 
> 
> After changing pads and paste.
> View attachment 2568930
> 
> 
> Let's see how it was since I opened it I took some photos.
> 
> 
> View attachment 2568932
> 
> View attachment 2568933
> 
> 
> Cleany cleany
> 
> View attachment 2568934
> 
> View attachment 2568935
> 
> 
> Putting Gelid Ultimate Pads
> 
> View attachment 2568937
> View attachment 2568936
> 
> View attachment 2568936
> 
> 
> Lets see some temps with some basic oc before and after pads and repaste. Oc was 15% power limit, 1200mv, 2100 fast timing dram, 2500 min, 2650 max.
> 
> Before
> View attachment 2568938
> 
> 
> After
> 
> View attachment 2568940
> 
> 
> What do you think guys? Everything seems ok?


Just a little note on the thermal pads.

You should always keep the plastic protection on thermal pads right up until you are ready to apply. Never press down on them with your fingers (or other objects) and never apply them and then remove them to 'see how it looks'. (basically the same rules as thermal paste)
The reason for never pressing down with your finger is that you are transferring oils/dirt to the pads, causing them to lose thermal conductivity. These pads are meant to transfer heat not hold onto it. ANY oils/dirt will cause them to hold heat more. Yes, even freshly washed and dried hands will transfer oils/dirt.
The reason for not removing pads after initial placement (or pressing on them with objects other than your finger) is that the initial placement compresses the pads. Pulling them off and placing them again will create gaps where contact may not be as good as the original placement. I guarantee you can't place them in the exact same spot as your first placement; you will be off, maybe by only 0.01mm or so, but you will be off. The best option is to always fully replace thermal pads after use.

[Sidebar on thermal paste application]
I always shake my head when I see videos of people pulling the CPU/GPU heatsink off after applying TIM to 'see how it spread' and then just putting it back on. NEVER do this. Once you paste (very thin layer buttered toast method) and apply the heatsink you're done.

I know this may be a bit of a nitpick but we're all chasing numbers here.  Lowering temp by another 1-2ºC may get you a new record!


----------



## Acegr

alceryes said:


> Just a little note on the thermal pads.
> 
> You should always keep the plastic protection on thermal pads right up until you are ready to apply. Never press down on them with your fingers (or other objects) and never apply them and then remove them to 'see how it looks'. (basically the same rules as thermal paste)
> The reason for never pressing down with your finger is that you are transferring oils/dirt to the pads, causing them to lose thermal conductivity. These pads are meant to transfer heat not hold onto it. ANY oils/dirt will cause them to hold heat more. Yes, even freshly washed and dried hands will transfer oils/dirt.
> The reason for not removing pads after initial placement (or pressing on them with objects other than your finger) is that the initial placement compresses the pads. Pulling them off and placing them again will create gaps where contact may not be as good as the original placement. I guarantee you can't place them in the exact same spot as your first placement; you will be off, maybe by only 0.01mm or so, but you will be off. The best option is to always fully replace thermal pads after use.
> 
> [Sidebar on thermal paste application]
> I always shake my head when I see videos of people pulling the CPU/GPU heatsink off after applying TIM to 'see how it spread' and then just putting it back on. NEVER do this. Once you paste (very thin layer buttered toast method) and apply the heatsink you're done.
> 
> I know this may be a bit of a nitpick but we're all chasing numbers here.  Lowering temp by another 1-2ºC may get you a new record!


Didn't do either of them.


----------



## deadfelllow

Acegr said:


> Didn't do either of them.


By the way, there is a power difference between the tests, though not that much.

At stock the GPU draws about 440W, and after repasting it draws about 400W (I marked the power draw with a red rectangle). I know 40W of core power isn't much, but I'd guess it should affect the hotspot temp by roughly 5C?
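As a sanity check on that guess, the expected hotspot change is roughly ΔT ≈ ΔP × R_th. This is a back-of-envelope sketch: the junction-to-coolant thermal resistance value used here is an assumed ballpark for a full-cover water block, not a measured figure for this card.

```python
# Rough junction-to-coolant thermal resistance for a full-cover block;
# an assumed ballpark figure (C per watt), not a measurement.
R_TH_C_PER_W = 0.12

def hotspot_delta(power_delta_w, r_th=R_TH_C_PER_W):
    """Estimated hotspot temperature change from a change in core power draw."""
    return power_delta_w * r_th

# 40 W less core power at an assumed 0.12 C/W:
print(f"{hotspot_delta(40):.1f} C")  # prints "4.8 C"
```

So a ~5C hotspot difference from 40W less draw is entirely plausible; the two results aren't really comparing the same thermal load.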


----------



## alceryes

Acegr said:


> Didn't do either of them.


Got it.


----------



## nordskov

Acegr said:


> Question. The pcb was scratched after i cleaned the paste. The scratch was small though, no dent either. You could only see it from a side. Pc works fine. Is it possible this affects temps? I obviously lost my warranty too now since its scratched and I cant prove I didn't do it.


Nah, it's actually fine when it's not LM. You only reach an 80C hotspot, where mine at stock easily hit 95 at those speeds and was throttling. I'd say up the watt limit a little and push 2775-2800MHz and you'll be getting into 24000-score territory for daily usage. My gaming temps are way lower compared to Time Spy; Time Spy makes it pretty hot versus gaming. So I'd say if you can get to 2800MHz at a max 90C hotspot, you're able to play on it daily.


----------



## Acegr

nordskov said:


> Nah, it's actually fine when it's not LM. You only reach an 80C hotspot, where mine at stock easily hit 95 at those speeds and was throttling. I'd say up the watt limit a little and push 2775-2800MHz and you'll be getting into 24000-score territory for daily usage. My gaming temps are way lower compared to Time Spy; Time Spy makes it pretty hot versus gaming. So I'd say if you can get to 2800MHz at a max 90C hotspot, you're able to play on it daily.


So my temps are actually fine? Any idea why the temp is slowly going up? I wonder if it will stop or keep climbing...
Any settings for MPT and the AMD software for 24/7 daily use that I could try? With the previous pads and paste I'd seen up to a 100C junction temp with just 2650 max and 2500 min.

Thanks


----------



## Acegr

nordskov said:


> Nah, it's actually fine when it's not LM. You only reach an 80C hotspot, where mine at stock easily hit 95 at those speeds and was throttling. I'd say up the watt limit a little and push 2775-2800MHz and you'll be getting into 24000-score territory for daily usage. My gaming temps are way lower compared to Time Spy; Time Spy makes it pretty hot versus gaming. So I'd say if you can get to 2800MHz at a max 90C hotspot, you're able to play on it daily.


I think I'll leave it at that. I only changed my watt limit from 332W to 365W and set those clocks. I'm content with an 80C junction; it's not like I'll ever see that in gaming. Time for some CPU clocks now.

Edit with my final benches and temps from everything. I'll stay at this for now and make it my daily; I doubt I'll see those temps in games, so it should be fine.
The PC, for anyone interested, is:
12700K @ 4.1-5.4GHz
XFX 6900 XT EKWB ZERO @ 2550 min - 2700 max - 2100 fast - 365W
DDR5 Kingston 32GB @ 6400 30-38-38-28
be quiet 1200W Toughpower 11
3 Nemesis radiators and a D5, with push-only fans; no LM anywhere.


----------



## nordskov

Acegr said:


> So my temps are actually fine? Any idea why the temp is slowly going up? I wonder if it will stop or keep climbing...
> Any settings for MPT and the AMD software for 24/7 daily use that I could try? With the previous pads and paste I'd seen up to a 100C junction temp with just 2650 max and 2500 min.
> 
> Thanks


Hmm, I can't tell for normal paste, as mine at stock with normal paste easily hit the throttle point, but after the repaste with LM its temps have been stable for 6-7 months now. My guess is that these power-hungry chips dry out normal paste pretty quick; that's why I went with LM. I use the stock AMD software, 1.2V, 2140 fast-timing RAM and a 2776MHz boost I believe it is, and get approx 24200 for daily usage. The only thing I changed in MPT was raising the max PL to 380W, plus 15% power limit in the AMD software 👍 I get good temps on my Toxic with this, with a max power draw of 432W. My temps in gaming sessions don't go past a 65C hotspot; normally it's like 50C and a 61C hotspot, and in really heavy games more like 54-66 maybe. I think you should consider LM if temps keep rising. My brother did the same to his Asus Strix 240 6900 XT and his temps dropped 30C+ versus normal Noctua NT-H1 paste. He repasted 5 times and couldn't get better temps with normal paste, went with Thermal Grizzly like me, and saw temps drop an insane 30C.


----------



## Acegr

nordskov said:


> Hmm, I can't tell for normal paste, as mine at stock with normal paste easily hit the throttle point, but after the repaste with LM its temps have been stable for 6-7 months now. My guess is that these power-hungry chips dry out normal paste pretty quick; that's why I went with LM. I use the stock AMD software, 1.2V, 2140 fast-timing RAM and a 2776MHz boost I believe it is, and get approx 24200 for daily usage. The only thing I changed in MPT was raising the max PL to 380W, plus 15% power limit in the AMD software 👍 I get good temps on my Toxic with this, with a max power draw of 432W. My temps in gaming sessions don't go past a 65C hotspot; normally it's like 50C and a 61C hotspot, and in really heavy games more like 54-66 maybe. I think you should consider LM if temps keep rising. My brother did the same to his Asus Strix 240 6900 XT and his temps dropped 30C+ versus normal Noctua NT-H1 paste. He repasted 5 times and couldn't get better temps with normal paste, went with Thermal Grizzly like me, and saw temps drop an insane 30C.


Maybe I worried for nothing. It seems temps have stabilized. Maybe as there are fewer and fewer bubbles in the loop, until it clears, it will get even better. The paste probably needed to settle in. I've read good things about that one keeping temps stable even 2 years later. We'll see.


----------



## Rajncajn26

New to the forum. My name is John and I'm from Cut Off in southeast Louisiana. My setup is as follows:
• Lian Li O11 Dynamic
• AMD Ryzen 7 5800X with Corsair 360 AIO, OC'd to 4.6 through BIOS
• MSI Tomahawk X570 WiFi MOBO
• G.Skill 32GB (2x16) 3600 RAM
• AMD 6900XT converted to water with EKWB water block and 360 rad, OC'd to 2650 daily
• EVGA 1000W Titanium PSU
• 10x Corsair ML120 fans

Looking forward to talking with everyone.

Will post some time spy and fire strike results when i get a chance.

This one is short-lived. I'll be building an AM5 system when available.


----------



## alceryes

Rajncajn26 said:


> View attachment 2569015
> 
> New to the forum. My name is John and I'm from Cut Off in southeast Louisiana. My setup is as follows:
> • Lian Li O11 Dynamic
> • AMD Ryzen 7 5800X with Corsair 360 AIO, OC'd to 4.6 through BIOS
> • MSI Tomahawk X570 WiFi MOBO
> • G.Skill 32GB (2x16) 3600 RAM
> • AMD 6900XT converted to water with EKWB water block and 360 rad, OC'd to 2650 daily
> • EVGA 1000W Titanium PSU
> • 10x Corsair ML120 fans
> 
> Looking forward to talking with everyone.
> 
> Will post some time spy and fire strike results when i get a chance.
> 
> This one is short-lived. I'll be building an AM5 system when available.


Welcome!
Question for you: which fans are intake and which are exhaust in this pic?


----------



## Rajncajn26

alceryes said:


> Welcome!
> Question for you: which fans are intake and which are exhaust in this pic?


The back three are intake and the top and bottom rads are exhaust. Steady supply of cold air. The one in the middle is just some air to blow on the VRMs.


----------



## alceryes

Rajncajn26 said:


> The back three are intake and the top and bottom rads are exhaust. Steady supply of cold air. The one in the middle is just some air to blow on the VRMs.


Do you also have 3 intake fans at the front of the case, to the right? Nice setup!

I'll be building an AM5 as well, but I don't think there are any 2-memory-slot boards coming out at launch. I'll never need more than 32GB for this build and want the best possible overclocking, so I'm holding off till the enthusiast 2-slot boards are available.


----------



## Rajncajn26



alceryes said:


> Do you also have 3 intake at the front of the case to the right? Nice set up!
> 
> I'll be building an AM5 as well but I don't think there are any 2 memory slot boards coming out at launch. I'll never need more than 32GBs for this build and want the best possible overclocking so I'm holding off till the enthusiast 2 slot boards are available.


This case doesn't have fans on the front, only a sheet of glass. The three intake fans pull air from behind the motherboard, where the PSU sits, through perforations in the back panel.


----------



## Acegr

Is there a way to get the AMD custom profile to load on startup? It defaults back to normal when I restart or shut down.

Also, why is this happening? Why doesn't it follow what I've set?


----------



## MotomEniac

I noticed that you're trying to set up a profile for a specific game; try choosing Global Settings instead of Spider-Man.


----------



## Acegr

MotomEniac said:


> I noticed that you're trying to set up a profile for a specific game; try choosing Global Settings instead of Spider-Man.


Still the same. It reaches 1200mV.


----------



## Rajncajn26

Try saving a profile, and if it reverts back, just load the profile back up.


----------



## MotomEniac

Acegr said:


> Still the same. It reaches 1200mV.


Can you check what voltage is on the GPU die using HWiNFO64? AMD WattMan works awkwardly from time to time. Also try deleting the SPPT in MPT, so you can be sure it's not what's causing the problem.


----------



## Acegr

MotomEniac said:


> Can you check what voltage is on the GPU die using HWiNFO64? AMD WattMan works awkwardly from time to time. Also try deleting the SPPT in MPT, so you can be sure it's not what's causing the problem.


here


----------



## ViruS001

Hello everyone,

Do any of you have the best settings for the RX 6900 XT Ultimate? I'm mostly after the best gaming results with less noise and quieter fans,
but I also wouldn't like it to run too hot. Is that achievable?


----------



## nordskov

ViruS001 said:


> Hello everyone,
> 
> Do any of you have the best settings for the RX 6900 XT Ultimate? I'm mostly after the best gaming results with less noise and quieter fans,
> but I also wouldn't like it to run too hot. Is that achievable?


You mean the Sapphire Toxic Extreme 6900XT? XTXH chip?
If so, then yes, easily achievable. Mine can do 2900+ MHz daily and not go higher than 64C edge / 82C hotspot, and that's with a fan speed of only 1050 RPM (approx. 31-33% fan speed).

The only things needed to achieve that are a repaste with Thermal Grizzly Conductonaut (liquid metal) and using MorePowerTool to set 1.250V and a higher watt limit.

Here's a video of the 2900MHz daily gaming settings I use, where you can watch how the temps go. It's at 2560 × 1440, everything maxed except ray tracing.


----------



## Acegr

nordskov said:


> You mean the Sapphire Toxic Extreme 6900XT? XTXH chip?
> If so, then yes, easily achievable. Mine can do 2900+ MHz daily and not go higher than 64C edge / 82C hotspot, and that's with a fan speed of only 1050 RPM (approx. 31-33% fan speed).
> 
> The only things needed to achieve that are a repaste with Thermal Grizzly Conductonaut (liquid metal) and using MorePowerTool to set 1.250V and a higher watt limit.
> 
> Here's a video of the 2900MHz daily gaming settings I use, where you can watch how the temps go. It's at 2560 × 1440, everything maxed except ray tracing.


I almost got a heart attack until you said you used LM. I thought this was with regular paste...


----------



## nordskov

Acegr said:


> I almost got a heart attack until you said you used LM. I thought this was with regular paste...


Oh dear, no 😂. With regular paste this would already give throttling issues at 2700+.


----------



## Acegr

nordskov said:


> Oh dear, no 😂. With regular paste this would already give throttling issues at 2700+.


I guess I should be happy with my 2700 max and 83C max junction temp with just normal paste...


----------



## nordskov

Acegr said:


> I guess I should be happy with my 2700 max and 83C max junction temp with just normal paste...


Yeah, it's not bad, as mentioned. But liquid metal is the **** when it comes to high clocks, low fan speeds and low temps. I've just been at a LAN with 12 hours of straight gaming, the GPU pushed at 2900+ MHz, and never saw temps above 70 junction. Usually it was in the 60-68 range, and edge temp was 48-53C.


----------



## deadfelllow

nordskov said:


> Yeah, it's not bad, as mentioned. But liquid metal is the **** when it comes to high clocks, low fan speeds and low temps. I've just been at a LAN with 12 hours of straight gaming, the GPU pushed at 2900+ MHz, and never saw temps above 70 junction. Usually it was in the 60-68 range, and edge temp was 48-53C.


What are your temps under full load, I mean above 400W? Also room temp and cooling solution?

It would be better if you ran a FurMark test and posted a HWiNFO screenshot here.

Thank you.


----------



## LtMatt

deadfelllow said:


> What are your temps under full load, I mean above 400W? Also room temp and cooling solution?
> 
> It would be better if you ran a FurMark test and posted a HWiNFO screenshot here.
> 
> Thank you.


How about a 20-run stress test of Fire Strike Ultra? We could all do that for comparison.
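If several of us run the same 20-loop comparison, a small script makes the logs easier to compare. This is just a hypothetical helper, not an official tool: it assumes a HWiNFO-style CSV export, and the column names below are placeholders that vary between HWiNFO versions, so adjust them to your own log.

```python
# Summarise max edge/junction temps across a logged stress run.
# Assumes a HWiNFO-style CSV export; column names are placeholders.
import csv
import io

def max_temps(csv_text, edge_col="GPU Temperature [°C]",
              junction_col="GPU Hot Spot Temperature [°C]"):
    """Return (max edge temp, max junction temp) from a CSV log."""
    max_edge = max_junction = float("-inf")
    for row in csv.DictReader(io.StringIO(csv_text)):
        max_edge = max(max_edge, float(row[edge_col]))
        max_junction = max(max_junction, float(row[junction_col]))
    return max_edge, max_junction

# Tiny fake log standing in for a real HWiNFO export
sample = (
    "GPU Temperature [°C],GPU Hot Spot Temperature [°C]\n"
    "62.0,79.0\n"
    "65.0,84.0\n"
    "64.0,82.0\n"
)
print(max_temps(sample))  # (65.0, 84.0)
```

For a real run you'd read the exported file with `open(...)` instead of the inline sample and post the two numbers here.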


----------



## nordskov

deadfelllow said:


> What are your temps under full load, I mean above 400W? Also room temp and cooling solution?
> 
> It would be better if you ran a FurMark test and posted a HWiNFO screenshot here.
> 
> Thank you.


Haven't tested FurMark. Ran Time Spy around 10x in a row to heat it up; it peaks at around 480W in GT2, with junction in the 84C range and edge temp around 65C. Room temp at the moment is around 25C. Stock Sapphire Toxic Extreme 6900XT XTXH cooling solution, just repasted with Thermal Grizzly Conductonaut (liquid metal).


----------



## nordskov

LtMatt said:


> How about a 20-run stress test of Fire Strike Ultra? We could all do that for comparison.


Fire Strike normal (dunno about Ultra) isn't getting as hot, and I can run like 100MHz higher than Time Spy 🤔 Is that normal? I can go as high as 2996 boost (that's like 2955 avg or so), whereas with Time Spy I can't go much higher than 2878MHz (2830MHz avg). If I go higher than 2878MHz in the driver it crashes even though temps are fine. I even tried pushing 1.350V and 580W just as a test; didn't work 🤷‍♂️
This is my best Fire Strike run:









I scored 46 354 in Fire Strike

AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 65536 MB, 64-bit Windows 11

www.3dmark.com


----------



## deadfelllow

nordskov said:


> Haven't tested FurMark. Ran Time Spy around 10x in a row to heat it up; it peaks at around 480W in GT2, with junction in the 84C range and edge temp around 65C. Room temp at the moment is around 25C. Stock Sapphire Toxic Extreme 6900XT XTXH cooling solution, just repasted with Thermal Grizzly Conductonaut (liquid metal).


I mean, my card with stock paste was like 83C @ 487W, a 5x loop I guess. After 3 months of use I noticed temps were like +5 higher compared to stock, and decided to repaste it. These f***** Sapphire cards have a huge contact issue.

After repasting like 8546845 times it's 93C at 470W with MX-4 right now. I'm not sure if I need to apply LM...


Stock paste temp below


----------



## DrzkaCZ

Hi,
I have an RX 6900 XT Merc 319. When I compare the values from the RX 6950 in MPT with my 6900, there are a lot of differences.

Which frequencies is it worth (and possible) to overclock with MPT to the RX 6950 values?

My MPT values: 6900-XT-MPT


----------



## LtMatt

deadfelllow said:


> I mean, my card with stock paste was like 83C @ 487W, a 5x loop I guess. After 3 months of use I noticed temps were like +5 higher compared to stock, and decided to repaste it. These f***** Sapphire cards have a huge contact issue.
> 
> After repasting like 8546845 times it's 93C at 470W with MX-4 right now. I'm not sure if I need to apply LM...
> 
> 
> Stock paste temp below
> View attachment 2569161


For the stock cooler these temps seem okay to me. It's basically 500W or more going through the cooler.


----------



## deadfelllow

LtMatt said:


> For the stock cooler these temps seem okay to me. It's basically 500W or more going through the cooler.


Currently with MX-4 paste, temps are like this (2 min FurMark). I'm thinking LM. I don't want to see junction above 90C XD.


----------



## LtMatt

But how often in games does 482W actually go through the card? For me, I barely see over 400W, and temps mid-70s at worst. I'm going to put up a video of a Fire Strike stress test that we can all copy to compare temps.


----------



## nordskov

deadfelllow said:


> I mean, my card with stock paste was like 83C @ 487W, a 5x loop I guess. After 3 months of use I noticed temps were like +5 higher compared to stock, and decided to repaste it. These f***** Sapphire cards have a huge contact issue.
> 
> After repasting like 8546845 times it's 93C at 470W with MX-4 right now. I'm not sure if I need to apply LM...
> 
> 
> Stock paste temp below
> View attachment 2569161


Also depends on how fast your fans are spinning. Mine are damn slow because I hate noise, so they're locked from 800 RPM to 1050 max. With 480W max including the 15% PL, I'm about 63-82 or something. If I ramp up the fans it does go lower, like 58-76, but I prefer a stable, fast, low-noise system. Didn't buy such an expensive card to hear a jet engine take off every day 😂👌


----------



## nordskov

DrzkaCZ said:


> Hi,
> I have an RX 6900 XT Merc 319. When I compare the values from the RX 6950 in MPT with my 6900, there are a lot of differences.
> 
> Which frequencies is it worth (and possible) to overclock with MPT to the RX 6950 values?
> 
> My MPT values: 6900-XT-MPT


From your pictures it's only running 286W or so max. I would go at least 330W. If temps are still fine, raise the power limit toward +15% in the driver and up the boost. Aim for 2650-2750 if you're on air; watercooled you might be able to do 2800+. That might require you to tick "temperature dependent Vmin" (or whatever it's called) under Features, then go to the power settings where you put 330W, find the fields named Vmin High/Low, and put 1200/1200 (if your card is an XTX) or 1225/1225 (if your card is an XTXH). Then you'll be able to boost higher.


----------



## ZealotKi11er

nordskov said:


> What's the difference in graphics score if you run the same clocks?


They are at the same clocks.


----------



## FLukawa90

Guys, I wanna ask: can I replace the Toxic 6900XT's 5-pin fans with other 4-pin ones? I think replacing them with Arctic P12 fans would reduce the temps better.


----------



## chrisde

FLukawa90 said:


> Guys, I wanna ask: can I replace the Toxic 6900XT's 5-pin fans with other 4-pin ones? I think replacing them with Arctic P12 fans would reduce the temps better.


I have done it with an XFX Merc and 3 P12s. The temps are pretty much the same, but the noise is all gone. Highly recommended because of that.


----------



## thomasck

Can some of you guys please post your stock 3DMark scores along with what CPU was used? Much appreciated.


----------



## alceryes

thomasck said:


> Can some of you guys please post your stock 3DMark scores along with what CPU was used? Much appreciated.


Specs in sig.
My very first TS score with new 6900 XT - 17490 total (19313 graphics, 11395 CPU)
Current TS score tweaked - 19474 total (22049 graphics, 11719 CPU)

I'm not doing anything crazy with my reference 6900 XT. Just the backside PCB thermal pad mod and a little OCing in Radeon software. Haven't loaded up MPT yet.


----------



## thomasck

@alceryes Are these scores with stock clocks? I also have a reference card. I haven't even tried MPT this time; back when it launched along with the Radeon VII I spent many, many hours tweaking, testing and benching to find the sweet spot instead of enjoying the gaming potential of the card. Although I still want to bump the TDP to 350W.
What about this PCB thermal pad, any good results? What pad thickness are you using?


----------



## Erazor0

thomasck said:


> Can some of you guys please post your stock 3DMark scores along with what CPU was used? Much appreciated.


Just upgraded my system to an AORUS 6900 XT MASTER today. Here are my results with the card undervolted to 1150mV. Default was 1200mV.


----------



## Acegr

thomasck said:


> Can some of you guys please post your stock 3DMark scores along with what CPU was used? Much appreciated.


Stock:

Overclocked:


----------



## nordskov

Graphics card: Toxic Extreme, repasted with liquid metal, OC BIOS. With only the +15% PL it's a 22900-23200 graphics score at stock (stock boost on this thing is 2626MHz).

Overclocked with a 475W limit, +15% PL, 2872MHz boost, 2140MHz RAM with fast timings and SLOW fan speed; the max the fans hit during a run is only 1250 RPM here. If I let them go to like 1500 in this summer heat (28C), junction drops to around 82 even at 500+ W.


----------



## alceryes

thomasck said:


> @alceryes Are these scores with stock clocks? I also have a reference card. I haven't even tried MPT this time; back when it launched along with the Radeon VII I spent many, many hours tweaking, testing and benching to find the sweet spot instead of enjoying the gaming potential of the card. Although I still want to bump the TDP to 350W.
> What about this PCB thermal pad, any good results? What pad thickness are you using?


First one is with stock clocks.
Second is with an OC of 2600 core, 2100 mem (w/FT), PL +15%, and an undervolt to 1125mV. Note that undervolting with Radeon software just lowers the overall range of the voltage fluctuation; it doesn't set a static voltage. My TDP still maxes at 300W.
The PCB backside thermal pad mod definitely helped -








[Official] AMD Radeon RX 6900 XT Owner's Club

I've been looking into it. The Merc 319 Limited Black is basically the same PCB as a Red Devil Ultimate, but with slightly better input filtering. The MSI Gaming Trio Z has the second-best PCB, but the cooler allows rather high junction temps and is rather noisy. The ASRock Formula OC is the best...

www.overclock.net


----------



## nordskov

Swenzzon86 said:


> Ran a little bench yesterday with liquid metal and cold weather; the GPU maxed out at 2980MHz, average about 2900MHz. Very happy with the overclocking, but...
> I don't understand why my score is so low with the relatively high overclock. I've tried lower clocks but get lower scores, and tried different FCLK settings etc. Posting a comparison with someone that has the same CPU; more than 1000 points difference. Result. Thoughts?


Damnnn, that's an insane score. How much voltage etc.? Can you share the settings used? I can't seem to go much higher than 2878 even at 1.3V, and maxed out at a 25300 graphics score.


----------



## alceryes

Swenzzon86 said:


> Ran a little bench yesterday with liquid metal and cold weather; the GPU maxed out at 2980MHz, average about 2900MHz. Very happy with the overclocking, but...
> I don't understand why my score is so low with the relatively high overclock. I've tried lower clocks but get lower scores, and tried different FCLK settings etc. Posting a comparison with someone that has the same CPU; more than 1000 points difference. Result. Thoughts?


Your CPU score is lowering your overall score.
As far as your GPU score goes, it's excellent! You've probably just hit the silicon limit of your card.


----------



## nordskov

alceryes said:


> Your CPU score is lowering your overall score.
> As far as your GPU score goes, it's excellent! You've probably just hit the silicon limit of your card.


Yeah, the CPU score could be better on that one; it's around the same I get with a 5800X.


----------



## thomasck

Thanks @Acegr, @Erazor0 and @alceryes. Your scores are 1K-2K higher than mine; I'm always around the 20K mark for the graphics score. But I also noticed that your power draw is way higher than mine. Throughout the bench mine always tops out at 250W; if I move the power tuning slider to +15%, then I get scores like you guys, so it's all normal here.
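Since the slider is percentage-based, the numbers roughly check out. A quick sanity check, assuming a ~255 W stock software power limit as on a reference 6900 XT (that figure is an assumption here; your card's actual limit may differ, so check it in MPT):

```python
# Rough sketch of how the +15% slider scales the software power limit.
# 255 W is an assumed stock limit for a reference 6900 XT.
stock_limit_w = 255
raised_limit_w = stock_limit_w * 1.15
print(round(raised_limit_w))  # 293
```

So a card that tops out around 250 W at stock landing in the ~290 W range with the slider maxed is expected behaviour, not a fault.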


----------



## alceryes

thomasck said:


> Thanks @Acegr, @Erazor0 and @alceryes. Your scores are 1K-2K higher than mine; I'm always around the 20K mark for the graphics score. But I also noticed that your power draw is way higher than mine. Throughout the bench mine always tops out at 250W; if I move the power tuning slider to +15%, then I get scores like you guys, so it's all normal here.


Yup. That makes sense.


----------



## MotomEniac

Guys, I recently found a way to raise the voltage over stock without babysitting it all the time. I just select the temperature dependent Vmin flag in MPT but don't disable any of the power-saving flags. The resulting voltage (under load) will be whatever you set in Vmin Low & Vmin High minus your Vdroop. In my case, on a regular XTX, I get 1.206 under full load in Time Spy.

That's a pretty neat solution to me, because I don't lose dynamic voltage: without load the GPU still drops to very low voltages, and this allows getting past the voltage ceiling without any hardware mods 😎
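The relationship described above is just a subtraction, so you can sanity-check a target before committing it in MPT. The numbers below are illustrative only, not MotomEniac's exact settings:

```python
# Effective load voltage = Vmin High/Low value set in MPT minus Vdroop.
# Both inputs here are made-up example values in volts.
def effective_load_voltage(vmin_set_v, vdroop_v):
    return vmin_set_v - vdroop_v

print(round(effective_load_voltage(1.225, 0.019), 3))  # 1.206
```

In other words, if you want a specific voltage under load, set Vmin a droop's worth higher than the target and verify the result in HWiNFO.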


----------



## thomasck

Something weird happened today, I could not help much but I tried troubleshooting by area.
Problem: I could navigate between windows no problem, open new tabs no problem, alt-tab between windows no problem. But as soon as I played/resumed a video on YouTube, everything would lag for a few seconds; the YT video would stop and then catch up to the point it should have been at if the system hadn't been freezing. The cursor also felt like it was running at around 5 FPS. Alt-tabbing had a very long input lag just to switch between the window previews. I could play a game like Warzone without issues, as long as I didn't alt-tab or have a video playing. Weird.

I disabled hardware acceleration within chrome, did not help. "Oh, must be a chrome issue", nope, all browsers.
So then I started by clearing bios and reloading a known profile. No dice, issue persists.
Then I jumped to the ram, ran karhu ramtest, TestMem5 with anta's profile, nothing, same thing.
Stress tested the cpu, nothing.
Stress tested the gpu, nothing.
Reseated ram and gpu, nothing.
At this point I was starting to get worried.
Then the only thing left: drivers. Removed the chipset driver, DDU'd the GPU drivers, installed the previous version 22.7.1, and the issue was gone. But... but... how? Yesterday was just another day where I played a few matches of Warzone and Hell Let Loose, no drama; switched the PC off and went to bed. Today I turned it on and got this surprise.
I really hope it was just the driver. I don't really believe that, but I hope...


----------



## alceryes

Hmmm, before you did the driver reinstall, did you test just taking your GPU down to stock speeds/voltages?
Overclocking can cause all sorts of weirdness. It doesn't matter that you can run the 'fuzzy donut' all day; it could be that it's 99.99% stable but not 100% stable.

Have you started using MPT?


----------



## darwinz2000

deadfelllow said:


> This one is for the backplate. I need retention bracket screws :/ (image below).


Do you know where to get those things? I need them. Thank you.


----------



## DJGoodMusic

Does anyone know what settings are best to use for a Phantom Gaming overclocked 16GB ASRock card in the AMD Adrenalin software for overclocking? I've been super hesitant about changing anything in these settings. Thanks, and sorry about the noob question. I've had this GPU for a year.


----------



## 99belle99

DJGoodMusic said:


> Does anyone know what settings are best to use for a Phantom Gaming overclocked 16GB ASRock card in the AMD Adrenalin software for overclocking? I've been super hesitant about changing anything in these settings. Thanks, and sorry about the noob question. I've had this GPU for a year.


Increase the power limit, then the memory (use fast timings), then increase the core to a figure that is stable.


----------



## thomasck

alceryes said:


> Hmmm, before you did the driver reinstall, did you test just taking your GPU down to stock speeds/voltages?
> Overclocking can cause all sorts of weirdness. It doesn't matter that you can run the 'fuzzy donut' all day; it could be that it's 99.99% stable but not 100% stable.
> 
> Have you started using MPT?


Yes, I started using MPT. I only changed the TDP to 350W and that's it. But the day I was trying out new clocks and all, it was all fine. Turned it off, and the next day I got the issue. Before troubleshooting I did remove all MPT settings and brought Adrenalin's settings back to stock too. MPT, or the new limits, or the new clocks with the new TDP (2650MHz max, 2550MHz min, 350W, VRAM stock, 1.155V) might have corrupted the driver. Who knows...


----------



## ptt1982

Got a new problem with my waterblocked 6900XT Red Devil. Everything hardware-accelerated slows my computer to a halt and also starts producing visual artifacts. It can be the browser, Steam etc., but if I run Steam as admin and disable HW acceleration in the browser, they work fine. Games and Time Spy run fine; haven't done long-term testing yet.

What happened: I moved to a new place and took my loop apart for the move, including taking out the GPU, radiators etc. One thing I did was tighten the CPU block screws (they have washers) quite a bit, hoping for better temps once reinstalled. Maybe I cracked the board while at it?

I googled a bit, and once your board is cracked there's no going back; it will go bad over time and that's it. I'll still ask though: would it help at all if I were to loosen the screws?

I've done the usual: Prime95 Small FFTs, Memtest86, reseated RAM, reseated GPU, ran Time Spy, took temps, and all looks good. Tried a couple of games; all work flawlessly (at least for the first few minutes).

Any assistance on this? Should I just prepare myself for the loss of a 1200€ card + 200€ block, and simply get ready to buy a new GPU?


----------



## J7SC

ptt1982 said:


> Got a new problem with my waterblocked 6900XT Red Devil. Everything hardware-accelerated slows my computer to a halt and also starts producing visual artifacts. It can be the browser, Steam etc., but if I run Steam as admin and disable HW acceleration in the browser, they work fine. Games and Time Spy run fine; haven't done long-term testing yet.
> 
> What happened: I moved to a new place and took my loop apart for the move, including taking out the GPU, radiators etc. One thing I did was tighten the CPU block screws (they have washers) quite a bit, hoping for better temps once reinstalled. Maybe I cracked the board while at it?
> 
> I googled a bit, and once your board is cracked there's no going back; it will go bad over time and that's it. I'll still ask though: would it help at all if I were to loosen the screws?
> 
> I've done the usual: Prime95 Small FFTs, Memtest86, reseated RAM, reseated GPU, ran Time Spy, took temps, and all looks good. Tried a couple of games; all work flawlessly (at least for the first few minutes).
> 
> Any assistance on this? Should I just prepare myself for the loss of a 1200€ card + 200€ block, and simply get ready to buy a new GPU?


...I don't quite understand the part about 'Games and Timespy runs fine, haven't done long-term testing yet' in the above summary and context (end of the first paragraph). You mean they work as long as hardware acceleration is turned off?

In any case, reseating the GPU block and carefully tightening (but not overtightening) the screws again is worth it. If that doesn't work, google 'baking your GPU' to reflow the solder; it can work for cracked traces. But this has to be done carefully re. max temps in the oven (even a toaster oven), and make sure there are no plastic parts exposed. A bit more on that > here. There are also folks who repair GPUs professionally for a fee. Check > here.

Finally, I would also check CPU and RAM seating along with cables, since you moved the computer recently. Maybe even reload Windows (stranger things have happened).


----------



## alceryes

ptt1982 said:


> Got a new problem on my waterblocked 6900xt Red Devil. Everything hardware accelerated slows my computer to halt, and also start producing visual artifacts. Can be browser, Steam etc. but if I run Steam as an Admin and unable HW Acceleration on browser, they work fine. Games and Timespy runs fine, haven't done long-term testing yet.
> 
> What happened: I moved to a new place, and took my loop apart for the move, including GPU out and radiators etc. One thing I did was tightening the CPU block screws (they have washers) quite a bit, hoping to get better temps once reinstalled. Maybe I cracked the board while at it?
> 
> I googled a bit and once your board is cracked, then there's no going back, it will go bad over time, and that is it. I'll still ask though: Would it help anything if I were to loosen the screws?
> 
> I've done the usual: Prime95 Small FTs, Memtest86, reseated RAM, reseated GPU, ran Timespy, took temps and all looks good. Tried a couple of games, all work flawlessly (at least the first few minutes).
> 
> Any assistance on this? Should I just prepare myself for the loss of 1200€ + 200€ block, and simply get ready to buy a new GPU?


Reset MB BIOS - take everything back to stock. Reset your GPU overclocks/overvolts to stock. (if you want take pics of your various settings so you can reproduce)
How does everything perform at stock?


----------



## ptt1982

J7SC said:


> ...I don't quite understand the part about 'Games and Timespy runs fine, haven't done long-term testing yet' in the above summary and context (end of the first paragraph). You mean they work as long as hardware acceleration is turned off?
> 
> In any case, reseating the GPU block and carefully tightening (but not overtightening) the screws again is worth it. If that doesn't work, google 'baking your GPU' to reflow the solder; it can work for cracked traces. But this has to be done carefully re. max temps in the oven (even a toaster oven), and make sure there are no plastic parts exposed. A bit more on that > here. There are also folks who repair GPUs professionally for a fee. Check > here.
> 
> Finally, I would also check CPU and RAM seating along with cables, since you moved the computer recently. Maybe even reload Windows (stranger things have happened).


Thanks for the replies.

(SOLVED !!!)

UPDATE FINAL: DDU'd the drivers and did a factory reset, went back to the recommended 22.5.1, and now everything works perfectly. Warning for you guys: the latest two drivers are crap!

Problems start, however, when I launch the Steam app itself: the moment it is online and gets to the place where the GPU is needed (the store page), my computer grinds to a halt, the mouse and keyboard become unresponsive, and some visual artifacts are displayed. The same happens when browsing Facebook in Opera, or any other website that uses hardware acceleration. For example, on FB when I scroll the feed, it works fine until a video with a play sign shows up; then it nearly crashes the computer, and the display driver crashes as well. If I scroll down really fast, it stops at the part needing HW acceleration and crashes the driver, but starts working again once I'm somehow able to scroll past it (takes 5-10 seconds, as the mouse is not responsive). The same happened with the previous driver I had installed before moving, and with the new ones after moving (tested the last two drivers). Also tested stock settings for both CPU and GPU, and UV and OC settings. Same symptoms.

EDIT 1: I will also try to reinstall Windows; it has certainly become a bit bloated, and maybe something weird happened when I took all the parts out and reinstalled them.

EDIT 2: Also, now that I think about it, I actually had problems with YouTube playback and HW acceleration before this. The graphics driver would crash randomly when, for example, putting YouTube in full screen. The screen would flash and go black for a second. I also hear the Windows sound when it crashes now, the same as when you plug in a new USB stick (recognizing a new device). I'm going to do a full GPU driver wipe and reinstall.

(I'll leave the issue description below in case someone has the same problems. It all started from simply reinstalling the loop, and this is just a driver problem.)
To clarify: when running games and Time Spy normally, they work just fine. Temps are normal as well. I didn't have time to test them for too long, just a few runs of Time Spy, and I ran around here and there in Elden Ring with unlocked FPS. Tested Cyberpunk with RT on, and no problems occurred at all; temps are normal as well.


----------



## nordskov

Question: my 6900XT seems to be giving me a headache. Time Spy daily GPU score 25000, Fire Strike 70000, but for some reason I only get 11800 in Port Royal. Are there some magic tricks for that benchmark, or isn't 2800 avg boost enough to get past 12000? Tried going to 2850 boost, but it just crashes even though Time Spy and Fire Strike run just fine.


----------



## Sufferage

nordskov said:


> Question: my 6900XT seems to be giving me a headache. Time Spy daily GPU score 25000, Fire Strike 70000, but for some reason I only get 11800 in Port Royal. Are there some magic tricks for that benchmark, or isn't 2800 avg boost enough to get past 12000? Tried going to 2850 boost, but it just crashes even though Time Spy and Fire Strike run just fine.


Port Royal had a pretty hefty performance loss with the latest drivers for me, ~300 points less compared to 22.3.1, so 12000 should be doable for you with 22.3.1


----------



## nordskov

Sufferage said:


> Port Royal had a pretty hefty performance loss with the latest drivers for me, ~300 points less compared to 22.3.1, so 12000 should be doable for you with 22.3.1


Ahh okay, I'm using 22.8.1 so it might be that. Hope they fix it in new drivers, and then I might try again. Even so, 11800 seems to be ahead of most 3080s.


----------



## joyzao

Hello everyone,

Can you help me? I recently got an RX 6900 XT ASUS TUF OC; I haven't had this brand in a long time, which is why I have doubts.

I downloaded MorePowerTool and I'm running tests to get the best performance while consuming less power and keeping good temperatures. I can't get anything beyond 2700 MHz even with an overclock. Right now I'm at about 2500 MHz stable with the voltage at 1.1 V. Is that good, or could it be improved? Any valuable tips?

Is it worth flashing the BIOS from another card? Thank you all.


----------



## deadfelllow

joyzao said:


> Hello everyone,
> 
> Can you help me? I recently got an RX 6900 XT ASUS TUF OC; I haven't had this brand in a long time, which is why I have doubts.
> 
> I downloaded MorePowerTool and I'm running tests to get the best performance while consuming less power and keeping good temperatures. I can't get anything beyond 2700 MHz even with an overclock. Right now I'm at about 2500 MHz stable with the voltage at 1.1 V. Is that good, or could it be improved? Any valuable tips?
> 
> Is it worth flashing the BIOS from another card? Thank you all.


It's an XTX chip, that's why you can't go over 2700. Even if you change your VBIOS it doesn't matter; you can't flash an LC or XTXH BIOS to your card, because the XTX chips kinda suck.




ASUS TUF RX 6900 XT GAMING OC Specs


AMD Navi 21, 2310 MHz, 5120 Cores, 320 TMUs, 128 ROPs, 16384 MB GDDR6, 2000 MHz, 256 bit




www.techpowerup.com


----------



## nordskov

joyzao said:


> Hello everyone,
> 
> Can you help me? I recently got an RX 6900 XT ASUS TUF OC; I haven't had this brand in a long time, which is why I have doubts.
> 
> I downloaded MorePowerTool and I'm running tests to get the best performance while consuming less power and keeping good temperatures. I can't get anything beyond 2700 MHz even with an overclock. Right now I'm at about 2500 MHz stable with the voltage at 1.1 V. Is that good, or could it be improved? Any valuable tips?
> 
> Is it worth flashing the BIOS from another card? Thank you all.


You just need to set a 400 W limit in MPT and set the voltage to 1.2 V default, like the XTXH cards. Then you should easily be able to go 2800 stable all day long 👍


----------



## LtMatt

Since I moved from the 5950X to the 5800X3D, I was able to overcome the CPU bottleneck in Firestrike.

Previously, albeit with a 6900 XT Toxic (which clocked 100 MHz higher on the core than my 6950 XT), I was stuck at around a 62K graphics score.

That has now climbed to 72K, with slightly slower core clocks but higher memory clocks. 
AMD Radeon RX 6950 XT video card benchmark result - AMD Ryzen 7 5800X3D,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## joyzao

deadfelllow said:


> It's an XTX chip, that's why you can't go over 2700. Even if you change your VBIOS it doesn't matter; you can't flash an LC or XTXH BIOS to your card, because the XTX chips kinda suck.
> 
> 
> View attachment 2569901
> 
> 
> 
> 
> 
> 
> 
> 
> 
> ASUS TUF RX 6900 XT GAMING OC Specs
> 
> 
> AMD Navi 21, 2310 MHz, 5120 Cores, 320 TMUs, 128 ROPs, 16384 MB GDDR6, 2000 MHz, 256 bit
> 
> 
> 
> 
> www.techpowerup.com





nordskov said:


> You just need to set a 400 W limit in MPT and set the voltage to 1.2 V default, like the XTXH cards. Then you should easily be able to go 2800 stable all day long 👍


I'm testing at 2500-2550 stable with 1.08 V, would that be good?

Time Spy with these settings: Graphics Score 22 227.

Can I improve something? I have memory at 2150 too.

ASIC Quality 83%


----------



## nordskov

If the temps aren't too high you can raise the voltage and watt limit further and set the boost even higher. My 6800 XT on air with a repaste did 21950, just shy of 22000, and that was with the junction only around 94C, so there was actually 16C of headroom left before throttling. If temps allow it, you should hit around 23000 pretty easily. But it might be unstable at higher clock speeds with such low voltage (if that's what you put into MPT); if you're thinking of the driver slider, that's more like an offset.
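The headroom arithmetic above can be sketched in a couple of lines; note the 110 °C junction throttle point is my assumption, implied by the quoted 94 °C reading with 16 °C to spare:

```python
# Junction-temperature headroom sketch. THROTTLE_TJ_C is an assumption
# (94 C observed + 16 C quoted headroom implies a 110 C throttle point).
THROTTLE_TJ_C = 110.0

def headroom_c(junction_c: float) -> float:
    """Degrees of junction headroom left before thermal throttling."""
    return THROTTLE_TJ_C - junction_c

print(headroom_c(94.0))  # 16.0
```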


----------



## 99belle99

joyzao said:


> Hello everyone,
> 
> Can you help me? I recently got an RX 6900 XT ASUS TUF OC; I haven't had this brand in a long time, which is why I have doubts.
> 
> I downloaded MorePowerTool and I'm running tests to get the best performance while consuming less power and keeping good temperatures. I can't get anything beyond 2700 MHz even with an overclock. Right now I'm at about 2500 MHz stable with the voltage at 1.1 V. Is that good, or could it be improved? Any valuable tips?
> 
> Is it worth flashing the BIOS from another card? Thank you all.


You actually have a decent XTX card. I have a reference model and can't do anywhere near what you get, but I'm on stock cooling so...


----------



## LtMatt

Speaking of ASIC quality, I've tried a fair few different 6950 XT Toxics now. They all have lower ASIC quality (sub-84%) compared to the 6900 XTXH, and all of them have clocked worse than the various 6900 XTXHs I've had.

The fastest combination would be 6900 XTXH, flashed with the Liquid Cooled MBA 6900 XTXH BIOS for increased memory frequency.

6950 XT has a higher default power limit and faster memory vs the 6900 XTXH, but once you flash that Liquid Cooled MBA 6900 XTXH BIOS, increase the power limit via MPT, it should become faster overall.


----------



## 99belle99

LtMatt said:


> Speaking of Asic quality. I've tried a fair few different 6950 XT Toxics now. They are all lower asic (sub 84%) compared to 6900 XTXH. All of them have clocked worse than the various 6900 XTXHs I've had.
> 
> The fastest combination would be 6900 XTXH, flashed with the Liquid Cooled MBA 6900 XTXH BIOS for increased memory frequency.
> 
> 6950 XT has a higher default power limit and faster memory vs the 6900 XTXH, but once you flash that Liquid Cooled MBA 6900 XTXH BIOS, increase the power limit via MPT, it should become faster overall.


So the XTXH die is actually better binned than a 6950 die, and the 6950 just has the faster memory. I wonder if they'd release a 6950 with an XTXH die, so the faster die along with the better memory, but it's probably far too late for that now with new cards due out sometime in the future.


----------



## LXP-F

LtMatt said:


> The fastest combination would be 6900 XTXH, flashed with the Liquid Cooled MBA 6900 XTXH BIOS for increased memory frequency.


What VBIOS is this, and can I flash it onto my XFX Zero?


----------



## nordskov

Just tried COD MW with Radeon Super Resolution, 1080p upscaled to 1440p, high settings, for the sake of fun. Take a look. BTW, don't judge my gameplay, I suck xD, I'm more of a CS:GO guy, you know.

But I was surprised to see it run mostly above 300 FPS at 1440p high, upscaled.


----------



## guskline

Interesting findings on the 6900xt vs the 6950xt.

I have been flying MSFS 2020 with a 5900X/RX 6800 combo and had almost leaped to the 6900 XT when the 6950 XT was released. I was able to snag an AMD reference 6950 XT from their store at retail and fitted an EK waterblock to it (same as on the 6800).

Very solid card and no need for me to overclock as it is very powerful even for MSFS 2020. I am using a Samsung 49" widescreen 5120x1440. Can run on ultra settings without stuttering.


----------



## Azazil1190

Here is my best one!
It's an old score.
Stock voltage 1.2 V and 450 W via MPT.


I scored 23 531 in Time Spy


Intel Core i9-10900K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## nordskov

Azazil1190 said:


> Here is my best one!
> Its an old score
> Stock voltage 1.2v and 450w via mptool
> View attachment 2570178
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 23 531 in Time Spy
> 
> 
> Intel Core i9-10900K Processor, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


How do you get that much with only a 2769 MHz average? Any special driver? Any MPT tweaks etc? I only get 25400, and that at the cost of 1.250 V, a 500 W limit, all deep sleep off, and 2140 MHz memory with fast timings. You score even higher at lower voltage and lower clock speed. My average was 2830 with boost set to 2882 MHz.
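Since these comparisons keep trading score against power limit, here's a rough points-per-watt sketch using the figures quoted in the thread (caveat: one number is a graphics score and the other an overall Time Spy score, so treat this as an illustration of the efficiency idea, not a strict comparison):

```python
def points_per_watt(score: float, watts: float) -> float:
    """Benchmark score divided by the MPT power limit; a crude efficiency metric."""
    return score / watts

# Figures quoted in the thread (not strictly comparable, see note above):
print(f"{points_per_watt(25400, 500):.1f}")  # 50.8 - 25400 gfx score at a 500 W limit
print(f"{points_per_watt(23531, 450):.1f}")  # 52.3 - 23531 Time Spy score at 450 W
```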


----------



## Azazil1190

nordskov said:


> How do you get that much with only a 2769 MHz average? Any special driver? Any MPT tweaks etc? I only get 25400, and that at the cost of 1.250 V, a 500 W limit, all deep sleep off, and 2140 MHz memory with fast timings. You score even higher at lower voltage and lower clock speed. My average was 2830 with boost set to 2882 MHz.


It was official drivers, I don't remember the version.
The run is about one year old, on Win10.
Win10 is better for benchmarks vs Win11.
I don't know if Intel CPUs push TS more, like they do FS.









Those are the settings, plus FCLK 2176 min and max, and all the deep sleep settings.
I have all the settings on my PC; I can send you everything later.
I'm at work now.


----------



## Azazil1190

I found the old drivers.


https://www.amd.com/en/support/kb/release-notes/rn-rad-win-21-8-1


----------



## lestatdk

My best score is similar to that. I haven't been OCing much since, and it looks like either the drivers have regressed a bit with regard to the TS score or it was a bug.









I scored 20 404 in Time Spy


AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com





Nevermind, just saw you had 25k+ gfx, I'm only at 23.5 lol


----------



## nyk20z3

Ordered a Phanteks Glacier G6000 Strix block & backplate for my Strix LC. I will update the thread after I install it; still deciding on a 360 rad to pair with it.


----------



## nordskov

Azazil1190 said:


> Was official drivers i dont remember the version.
> The run is about one year old at win10 .
> Win10 are better for benchmark vs win11.
> I dont know if intel CPUs are pushing more to ts like the fs
> 
> View attachment 2570189
> 
> Those are the settings and fclk 2176 min and max.and all deep sleep settings.
> I have all the settings in my pc i can send you later all.
> Im at work now


Would be cool. Can't figure out what's making yours so much faster at lower speed and voltage. Would be nice to save some electricity and get the same results 😁. I'll try later to up mine a bit; my GFX and SOC limits are only at 380 A and 63 A, so maybe it's that?


----------



## Azazil1190

nordskov said:


> Would be cool. Can't figure out what's making yours so much faster at lower speed and voltage. Would be nice to save some electricity and get the same results 😁. I'll try later to up mine a bit; my GFX and SOC limits are only at 380 A and 63 A, so maybe it's that?











I scored 23 653 in Time Spy


Intel Core i9-10900K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com




Check out this guy as well.
Lower clocks than mine but a better score.
I think it's the Intel CPU and the RAM that push the GPU scores.


----------



## J7SC

I realize the item below is based on rumours (add salt), even though some things have been solidifying... I have been expecting an mGPU design (appearing as a single GPU to the driver, also per an AMD patent).

One thing is for sure: get a big PSU, whether for next-gen Nvidia or AMD.


----------



## nordskov

Azazil1190 said:


> I scored 23 653 in Time Spy
> 
> 
> Intel Core i9-10900K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> Check also this guy.
> Lower clocks than mine but better score.
> I think is the intel cpu and the rams that push the gpu scores.


Maybe you're right. Otherwise I guess it's Windows 10 vs 11, because he seems to be on a different driver than you but hit high as well, and he's on Windows 10 like you, whereas I'm on Windows 11. I guess I should be satisfied enough with a 25100 graphics score as a daily driver on Windows 11.


----------



## MotomEniac

nordskov said:


> Maybe you're right. Otherwise I guess it's Windows 10 vs 11, because he seems to be on a different driver than you but hit high as well, and he's on Windows 10 like you, whereas I'm on Windows 11. I guess I should be satisfied enough with a 25100 graphics score as a daily driver on Windows 11.


Back on the 21.8 driver I was 1k higher in score on the same system (Win11) than I am now on 22.7.1.


----------



## nordskov

MotomEniac said:


> Back on the 21.8 driver I was 1k higher in score on the same system (Win11) than I am now on 22.7.1.


Hmm, I'm using 22.8.1 right now. Dunno if it's worth going back to that driver; I think the new Radeon Super Resolution etc. is worth more IMO than that extra 1k 🙈. This is my system as it runs daily: 1.250 V, 450 W limit, GPU 450 A, SOC 75 A. I now tried the settings mentioned and they gave me 100 more points, so I think I'll leave it here as it is. Works flawlessly. Any ideas to improve the CPU score even more? The CPU is a 5800X with CO -30 on the four worst cores, -26 on the two second-best and -22 on the two best, +175 MHz override, liquid metal, and an X63 RGB. My RAM is 4x16 GB 3600 MHz G.Skill Trident Z Neo CL16-16-16-16 1T, tightened to CL14-15-12-12 1T still at 3600 MHz (higher won't boot with 4 sticks in).


----------



## lestatdk

Can someone explain what's going on here? It took upping the voltage and power beyond anything I've ever tried just to match my old GPU score; my average frequency is almost 100 MHz higher now for the same score.
Seriously considering rolling back to that 21.8.1 driver again, this is really weird. Higher in all but GT2, and that was only by half a percent.


----------



## nyk20z3




----------



## Henry Owens

Still no way to increase voltage to 1.2 on a reference card?


----------



## 99belle99

Henry Owens said:


> Still no way to increase voltage to 1.2 on a reference card?


You can with MPT.


----------



## MotomEniac

Just in case anyone faces the same issue in the future. The story: I started getting a *black screen with a complete system hang*, where a hard power-off was the only option, when playing games. So I started testing with different stability tests and found that Time Spy Extreme in stress-test mode was the best at reproducing the issue; my PC black-screened after 1-4 runs of it. That was a scary time, as I clearly understood it was a GPU issue and I'd already tried everything driver-related (changed a few versions, DDU installs, etc.) and overclocking-related (erased the SPPT via MPT, reset all overclocking in AMD Software). I'd almost prepared myself for a GPU replacement...😠 And then I happened to close *RivaTuner Statistics Server*, which I use for the OSD, and BOOM, *all the symptoms were gone*: 20 successful runs of TSE on every overclock I'd previously used.


----------



## MotomEniac

Henry Owens said:


> Still no way to increase voltage to 1.2 on a reference card?


Check my post about it a few pages back; there's a method and it works very well.


----------



## Azazil1190

Has anyone tried the 22.8.2 driver?
It made my stable UV/OC unstable;
now I'm 50 MHz lower than before.
Or my Strix is starting to degrade.


----------



## LtMatt

I've tried 22.8.2 but haven't noticed any instability with my typical gaming overclock.

I'll run the Time Spy stress test with my max stable overclock and will let you know if it passes, mate.

Previously, if I went 10 MHz higher on the core clock it would fail.


----------



## LtMatt

So far so good, 10 of 20 runs passed.


----------



## LtMatt

Azazil1190 said:


> Has anyone tried the 22.8.2 driver?
> It made my stable UV/OC unstable;
> now I'm 50 MHz lower than before.
> Or my Strix is starting to degrade.


Seems fine for me at max previous stable overclock settings. 

24/7 Max Gaming Overclock Stock Voltage
2698/2798Mhz - 2394Mhz Fast Timings 
22.8.2
AMD Radeon RX 6950 XT video card benchmark result - AMD Ryzen 7 5800X3D,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## Azazil1190

Thanks Matt for your time!
I'll do some tests again, because I only have the issue in two games, The Ascent and Spider-Man. Before that I changed some voltages on my 5950X (VDDG and those things); I'm going to set them back like before. Hope that's the reason for my unstable OC and not the GPU. The funny thing is the Strix sees very low usage daily; I run it daily only undervolted, 380-400 W max.


----------



## Azazil1190

Update...
After about 1.5 hours of COD Black Ops zombies, 4K ultra uncapped, everything works fine (2775/2875, 1.2 V, 390 W max), but I also set the voltage values on my 5950X back to what I had before.
Tomorrow more tests to find out what is going on.


----------



## ZealotKi11er

Everyone talking about overclock and high power draw. Beat my idle power.


----------



## Reaper29

edit


----------



## LtMatt

Azazil1190 said:


> Update...
> After about 1.5 hours of COD Black Ops zombies, 4K ultra uncapped, everything works fine (2775/2875, 1.2 V, 390 W max), but I also set the voltage values on my 5950X back to what I had before.
> Tomorrow more tests to find out what is going on.


What's the highest clocks you can pass 20 runs of Timespy stress test successfully with out of curiosity?


----------



## Azazil1190

LtMatt said:


> What's the highest clocks you can pass 20 runs of Timespy stress test successfully with out of curiosity?


Who knows?
I never tried it on the Strix.
But in every game it was stable at 2775/2900, 1.2 V and 380-400 W. Never had a crash or anything.
For the Toxic I only know 2725/2825, 2150 memory fast timings, stock voltage and watts.

That's probably the only way to find out my real stable clocks on the Strix, but I can't tell right now whether it's degraded.


----------



## Sufferage

ZealotKi11er said:


> Everyone talking about overclock and high power draw. Beat my idle power.


Done


----------



## deadfelllow

Sufferage said:


> Done
> 
> View attachment 2570407


Which settings forces to downclock vram clock?

Are you doing that via MPT or smth else?


----------



## Sufferage

deadfelllow said:


> Which settings forces to downclock vram clock?
> 
> Are you doing that via MPT or smth else?


No, this is normal behaviour for the card, but clocks won't go as low with more than one screen connected, I think.


----------



## nyk20z3

I think y'all overclock more than actually game lol


----------



## PJVol

nyk20z3 said:


> I think y'all overclock more than actually game lol


Those who mostly game usually don't post on OCN, so yeah...


----------



## nordskov

nyk20z3 said:


> I think y'all overclock more than actually game lol





PJVol said:


> Those who mostly game usually don't post on OCN, so yeah...


Well I do, but all I mostly play is CS:GO, COD MW and D2R 😂 and I sure need a 6900 XT, 64 GB of RAM and a 5800X doing stock 5900X speeds for that lol 😂😂😂


----------



## LtMatt

Here is COD at 4K running on my overclocked 6950 XT with a 450W power limit.
The 5800X3D and the 6950 XT averaging 216 FPS with 1% low of 127 in a 30 minute Caldera benchmark at 4K competitive settings. 
Warzone Caldera | 5800X3D + RX 6950 XT Toxic | 2160P Competitive Settings - YouTube

Here is COD at 4K running on my overclocked 3090 TI with a 600W BIOS.
The 5800X3D and the 3090 TI averaging 161 FPS with 1% low of 129 in a 25 minute Caldera benchmark at 4K competitive settings.
Warzone Caldera 5800X3D + 3090 TI OC | 2160P Competitive Settings - YouTube


----------



## nordskov

LtMatt said:


> Here is COD at 4K running on my overclocked 6950 XT with a 450W power limit.
> The 5800X3D and the 6950 XT averaging 216 FPS with 1% low of 127 in a 30 minute Caldera benchmark.
> Warzone Caldera | 5800X3D + RX 6950 XT Toxic | 2160P Competitive Settings - YouTube
> 
> Here is COD at 4K running on my overclocked 3090 TI with a 600W BIOS.
> The 5800X3D and the 3090 TI averaging 161 FPS with 1% low of 129 in a 25 minute Caldera benchmark.
> Warzone Caldera 5800X3D + 3090 TI OC | 2160P Competitive Settings - YouTube


How does it do with high settings (the low competitive settings look like **** tbh) and Super Resolution on? I'm getting around 300+ avg at 1440p high with super res on (1080p upscaled to 1440p); at 1440p high with no super res I'm getting around 220-240 avg.


----------



## LtMatt

nordskov said:


> How does it do with high settings (the low competitive settings look like **** tbh) and Super Resolution on? I'm getting around 300+ avg at 1440p high with super res on (1080p upscaled to 1440p); at 1440p high with no super res I'm getting around 220-240 avg.


Warzone Fortunes Keep runs at 146 average FPS, with 1% lows of 90 on the RX 6950 XT and the 5800X3D at 4K maximum settings. 
5800X3D Benchmark Warzone Fortunes Keep RX 6950 XT Toxic 2160P Maximum Settings - YouTube

Warzone Fortunes Keep runs at 122 average FPS, with 1% lows of 79 on the RTX 3090 TI and the 5800X3D at 4K maximum settings. 
5800X3D Benchmark Warzone Fortunes Keep RTX 3090 TI 2160P Maximum Settings - YouTube


----------



## nordskov

LtMatt said:


> Warzone Fortunes Keep runs at 146 average FPS, with 1% lows of 90 on the RX 6950 XT and the 5800X3D at 4K maximum settings.
> 5800X3D Benchmark Warzone Fortunes Keep RX 6950 XT Toxic 2160P Maximum Settings - YouTube
> 
> Warzone Fortunes Keep runs at 122 average FPS, with 1% lows of 79 on the RTX 3090 TI and the 5800X3D at 4K maximum settings.
> 5800X3D Benchmark Warzone Fortunes Keep RTX 3090 TI 2160P Maximum Settings - YouTube


You don't have COD MW maxed at 1440p upscaled with RSR to 4K? Or 1080p to 1440p?


----------



## ZealotKi11er

LtMatt said:


> Here is COD at 4K running on my overclocked 6950 XT with a 450W power limit.
> The 5800X3D and the 6950 XT averaging 216 FPS with 1% low of 127 in a 30 minute Caldera benchmark at 4K competitive settings.
> Warzone Caldera | 5800X3D + RX 6950 XT Toxic | 2160P Competitive Settings - YouTube
> 
> Here is COD at 4K running on my overclocked 3090 TI with a 600W BIOS.
> The 5800X3D and the 3090 TI averaging 161 FPS with 1% low of 129 in a 25 minute Caldera benchmark at 4K competitive settings.
> Warzone Caldera 5800X3D + 3090 TI OC | 2160P Competitive Settings - YouTube


But Frame Chasers says AMD has way lower mins than Nvidia. All the AMD COD gamers are idiots.


----------



## LtMatt

nordskov said:


> You don't have COD MW maxed at 1440p upscaled with RSR to 4K? Or 1080p to 1440p?


I don't have any benchmarks using RSR, but have other COD videos at 1080P/1440P on my channel.



ZealotKi11er said:


> But Frame Chasers says AMD has way lower mins than Nvidia. All the AMD COD gamers are idiots.


I disagree with most things he says on anything AMD tbh. His tuning knowledge of AMD CPUs/GPUs is questionable too.


----------



## nordskov

LtMatt said:


> I don't have any benchmarks using RSR, but have other COD videos at 1080P/1440P on my channel.
> 
> 
> I disagree with most things he says on anything AMD tbh. His tuning knowledge of AMD CPUs/GPUs is questionable too.


Couldn't find any 4K videos, or 1440p with or without RSR at high settings, among your videos. Would love to see how the 6950 does 😅✌; the comp-settings ones are at low and look crap 🙈


----------



## LtMatt

nordskov said:


> Couldn't find any 4K videos, or 1440p with or without RSR at high settings, among your videos. Would love to see how the 6950 does 😅✌; the comp-settings ones are at low and look crap 🙈


Other than a few videos I've uploaded at maximum settings (a couple linked above), I don't have any using high settings. It's all about optimising settings for the maximum possible FPS, which highlights just how well your GPU, CPU and system RAM are tuned. Moving everything to high settings or above just makes everything GPU-limited and lowers FPS. In a competitive shooter, higher FPS = better: Warzone's latency drops as FPS rises, so the higher the FPS, the lower the latency and the better the performance. Image quality is not really important in this scenario.
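To put numbers on the FPS-vs-latency point: frame time is just the reciprocal of frame rate, so the render-time gap between the two cards benchmarked above is easy to see (a simplified model; real input latency includes more than render time):

```python
def frame_time_ms(fps: float) -> float:
    """Milliseconds spent on one frame at a given average frame rate."""
    return 1000.0 / fps

# Averages quoted earlier in the thread:
print(f"{frame_time_ms(216):.1f} ms")  # ~4.6 ms per frame at 216 FPS
print(f"{frame_time_ms(161):.1f} ms")  # ~6.2 ms per frame at 161 FPS
```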


----------



## nordskov

LtMatt said:


> Other than a few videos I've uploaded at maximum settings (couple linked above) I don't have any using High Settings. It's all about optimising settings for maximum possible FPS. This will highlight just how well your GPU, CPU and system RAM are tuned. Moving everything to High settings or above just makes everything GPU limited and lowers FPS. As a competitive shooter, higher FPS = better. Lower latency in Warzone at higher FPS, so the higher FPS the lower latency and better performance. Image quality is not really important in this scenario.


True. But I figured out I was able to run everything on high except ray tracing, with 1080p upscaled to 1440p via RSR, at 300+ FPS, so I was like hell yea 😂👌


----------



## MotomEniac

ZealotKi11er said:


> But Frame Chasers says AMD has way lower mins than Nvidia. All the AMD COD gamers are idiots.


I was literally fuming today when I saw his video about yesterday's AMD presentation, when he started yelling about an unfair comparison by Lisa because they turned SAM on, claiming this gives a Ryzen+Radeon system a 15-20% gain over Intel+Radeon. What a troll 😠


----------



## ZealotKi11er

MotomEniac said:


> I was literally fuming today when I saw his video about yesterday's AMD presentation, when he started yelling about an unfair comparison by Lisa because they turned SAM on, claiming this gives a Ryzen+Radeon system a 15-20% gain over Intel+Radeon. What a troll 😠


Really? I have tested SAM with Intel and it's all on the GPU, not the platform.


----------



## LtMatt

ZealotKi11er said:


> Really? I have tested SAM with Intel and its all on the GPU not the platform.


That's because there are optimisations that go directly into the GPU driver for SAM on various games, which means to benefit from those you'll need an AMD CPU and GPU.


----------



## gtz

LtMatt said:


> That's because there are optimisations that go directly into the GPU driver for SAM on various games, which means to benefit from those you'll need an AMD CPU and GPU.


I don't think this is true. I've had a 5800X, two 5900Xs and various X299 platforms (and a 12900KF), and SAM worked similarly on all of them.


----------



## LtMatt

gtz said:


> I don't think this is true. I've had a 5800X, two 5900Xs and various X299 platforms (and a 12900KF), and SAM worked similarly on all of them.


I know it's true that specific optimisations go into SAM via driver updates; I don't think that applies to ReBAR though. If you want to test ReBAR vs SAM with a 6900 XT, Death Stranding (very high settings, 1440p) or Watch Dogs: Legion (high settings, 1440p) should be suitable examples to see if the performance increases are the same when enabled.


----------



## MotomEniac

Regarding SAM vs ReBAR, that research shows there's nothing even close to 15-20%. I would say the gains are very close: Benchmarking AMD Smart Access Memory on Intel's Z490 Chipset - Legit Reviews


----------



## LtMatt

MotomEniac said:


> Regarding SAM vs ReBAR, that research shows there's nothing even close to 15-20%. I would say the gains are very close: Benchmarking AMD Smart Access Memory on Intel's Z490 Chipset - Legit Reviews


That article was written December 27, 2020. There have been lots of driver updates since then, with lots of added optimisations.

Up to 12% increase in performance with new SAM optimizations in Death Stranding™ @ 1440p very high settings, and up to 24% increase in performance with new SAM optimizations in Watch Dogs™: Legion, using AMD Software: Adrenalin Edition 22.5.2 on the Radeon RX 6750 XT, versus AMD Software: Adrenalin Edition 22.5.1.
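For concreteness, here's what those release-note percentages mean applied to a baseline frame rate; the 100 FPS baseline is purely hypothetical, only the uplift percentages come from the notes:

```python
def with_uplift(baseline_fps: float, uplift_pct: float) -> float:
    """Apply a percentage performance uplift to a baseline frame rate."""
    return baseline_fps * (1 + uplift_pct / 100)

# Hypothetical 100 FPS baseline with the quoted "up to" uplifts:
print(f"{with_uplift(100, 12):.0f}")  # 112 - Death Stranding, up to 12%
print(f"{with_uplift(100, 24):.0f}")  # 124 - Watch Dogs: Legion, up to 24%
```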


----------



## MotomEniac

LtMatt said:


> That article was written December 27, 2020. There have been lots of driver updates since then, with lots of added optimisations.
> 
> Up to 12% increase in performance with new SAM optimizations in Death Stranding™ @ 1440p very high settings, and up to 24% increase in performance with new SAM optimizations in Watch Dogs™: Legion, using AMD Software: Adrenalin Edition 22.5.2 on the Radeon RX 6750 XT, versus AMD Software: Adrenalin Edition 22.5.1.


Even if such an uplift is declared in the driver release notes, that doesn't mean it won't be the same on Alder Lake + Radeon with ReBAR enabled, though.


----------



## LtMatt

MotomEniac said:


> Even if such an uplift is declared in the driver release notes, that doesn't mean it won't be the same on Alder Lake + Radeon with ReBAR enabled, though.


I don't believe Intel will have access to the AMD specific driver optimisations, so if you can try and test those games to see if Intel get the same performance gains. I doubt they will.


----------



## Azazil1190

LtMatt said:


> I don't believe Intel will have access to the AMD specific driver optimisations, so if you can try and test those games to see if Intel get the same performance gains. I doubt they will.


Matt, I think for some games the performance is the same or close enough (1-3 FPS lower at most for the Intel system) at 4K and 1440p.
I see it on my second system, a 10900K + 6900 Toxic.

*I'm talking about the average FPS, because for the lows the AMD system (CPU+GPU) is way better, maybe 15-20 FPS higher for AMD in some games.

A strong example is SOTTR.


----------



## MotomEniac

LtMatt said:


> I don't believe Intel will have access to the AMD specific driver optimisations, so if you can try and test those games to see if Intel get the same performance gains. I doubt they will.


I don't have the ability to test it on my own, but a quick search shows that Watch Dogs gets a pretty significant FPS increase on Intel + Radeon going from 22.5.1 to 22.5.2. I'm assuming it's not relevant which processor sits behind the optimized memory-access approach. From my understanding, the driver optimizes certain memory-access routines, so whenever a game scene requires access to a huge memory chunk AND the platform is capable of addressing it between CPU and GPU, it just works.


----------



## LtMatt

It would be nice to see some direct comparison testing using the latest drivers, but it does look like the gains are closer than I thought, and similar. The optimisations are done on the GPU driver side, and the CPU used is less important as long as either ReBAR or SAM is supported by it. That would also explain why ReBAR does largely nothing on Nvidia apart from in half a dozen whitelisted games.


----------



## alceryes

ZealotKi11er said:


> Everyone talking about overclock and high power draw. Beat my idle power.


...and done.  Bonus = this is my 24x7x365 setting. Undervolted and great performance!


----------



## alceryes

You know, now that the gauntlet has been tossed, I can already see peeps massively undervolting and unplugging monitors just so they can post a 4 W screenie!


At least mine is my everyday driver setting.


----------



## Alexxxx#€

Hi, how are things? How can I get the BIOS of the XFX Speedster MERC319 RX 6900 XT Limited? It comes with 2490 MHz stock, so I wouldn't have to apply an OC myself. Greetings!


----------



## J7SC

LtMatt said:


> It would be nice to see some direct comparison testing using the latest drivers, but it does look like the gains are closer than i thought and similar. The optimisations are done on the GPU driver side, and the CPU used is less important as long as either ReBar or SAM is supported by said CPU. This would then explain why ReBar largely does nothing on Nvidia apart from in half a dozen whitelisted games.


...on my 3090 Strix, I 'force' ReBAR via NVInspector in the general tab; significant gains in some apps such as Port Royal and also FS2020.


----------



## Alexxxx#€

Does anyone know where there's a tutorial, or could you do me the favor of modifying my BIOS to run at 2500 MHz?


----------



## guskline

Alexxxx#€ said:


> Does anyone know where there's a tutorial, or could you do me the favor of modifying my BIOS to run at 2500 MHz?


Before trying to hard-modify the BIOS, did you try these settings in the performance section of the AMD Adrenalin software?

I would not suggest tinkering with the BIOS until you are sure the new settings pass a stress test.


----------



## Alexxxx#€

guskline said:


> Before trying to hard-modify the BIOS, did you try these settings in the performance section of the AMD Adrenalin software?
> 
> I would not suggest tinkering with the BIOS until you are sure the new settings pass a stress test.


I already ran several tests at 2550MHz and everything was perfect. That's why I would like to bake it into the BIOS and forget about the driver settings, but I can't find out how to modify the BIOS to add those MHz.


----------



## 99belle99

Unless things have changed and I haven't heard about it you cannot modify the bios.


----------



## Alexxxx#€

99belle99 said:


> Unless things have changed and I haven't heard about it you cannot modify the bios.


And how could I get the XFX Speedster MERC319 RX 6900 XT Limited BIOS? It is compatible with the XTX, which is the card I currently have, but I can't find that BIOS :-(


----------



## Sufferage

Alexxxx#€ said:


> And how could I get the XFX Speedster MERC319 RX 6900 XT Limited bios? because it is compatible with the xtx which is the one I currently have, but I can't find that bios :-(


You should be able to find the bios here.


----------



## Alexxxx#€

Sufferage said:


> You should be able to find the bios here.


I already looked there a long time ago and it is not there. There are only 2 BIOSes, one at 2200MHz and the other at 2360MHz; the 2490MHz one I am looking for is missing :-(


----------



## toxick

When I watch clips on YouTube, Telegram, or anywhere else, I get a gray screen, and it has been happening more and more often lately. Does anyone else have this?


----------



## thomasck

I don't get what is going on here.
System specs: 5900X, 2x8GB at 3800MHz with good timings (flat 15, 30, 45, tRFC 280), around 54ns and stable, and the rest of the system on reasonable voltages for SoC, VDDG, VDDP etc. An RM850x pushes the rig. Cooling is 3x360 rads in the case plus a 1260mm external rad. My 6900XT is a reference model on the latest drivers.

When installing a new driver I always DDU the previous one and run Time Spy, getting roughly 20.5K in the graphics score and 15K in the CPU score. That is with no OC on the GPU, no CPU tweaks, and no MPT changes.

If I increase the power limit to the max, I start getting 21.5K in the graphics score. However, if I set clocks of, say, 2700 max and 2600 min, I get a much lower score, around 18K. If I set 2700 as max and 2200 as min, I get the same 21.5K as with just the power limit increased. It feels like I cannot break 21.5K no matter what, and the best option seems to be maxing the power limit without setting any clocks.

If I set MPT to 300W max plus the power slider all the way up (that is, 350W total), I get 21.9K.

Then I tried raising the max voltage limit to 1200mV in MPT along with the 350W max power draw, but the card would only bench at 500MHz.

What's the matter? I've googled this for quite a while, and searched here too, but I don't know what I'm missing.

It feels like the card has a limit I cannot break. With auto clocks and the power limit all the way up, the card pushes everything it can; if I keep the power slider maxed but dial in a higher clock than the card reaches on auto, it scores lower in Time Spy. Makes sense?

EDIT

Some clocks and scores:
2650/2550 1175mV 350W total: 21495
2650/2400 1175mV 350W total: 21930
2550/2400 1175mV 350W total: 21920
2550/2300 1175mV 350W total: 21917
2550/2300 1150mV 350W total: 22002
2550/2300 1135mV 350W total: 22013
2650/2300 1150mV 350W total: 22090
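For keeping track of sweeps like this, it can help to drop the runs into a tiny script and sort by score. The tuples below are copied straight from the list above; the ranking helper itself is just a sketch:

```python
# Time Spy graphics scores from the sweep above, copied verbatim.
# Each tuple: (max_clock_mhz, min_clock_mhz, voltage_mv, power_w, graphics_score)
runs = [
    (2650, 2550, 1175, 350, 21495),
    (2650, 2400, 1175, 350, 21930),
    (2550, 2400, 1175, 350, 21920),
    (2550, 2300, 1175, 350, 21917),
    (2550, 2300, 1150, 350, 22002),
    (2550, 2300, 1135, 350, 22013),
    (2650, 2300, 1150, 350, 22090),
]

# Rank by graphics score, best first.
for run in sorted(runs, key=lambda r: r[4], reverse=True):
    print(run)
```

Notably, the top three runs all use a lowered voltage, which lines up with the card being power limited: less voltage leaves more clock headroom inside the same 350W budget.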


----------



## nyk20z3




----------



## Azazil1190

nyk20z3 said:


> View attachment 2570885
> 
> View attachment 2570886
> 
> View attachment 2570884


Very nice block!!


----------



## Sufferage

thomasck said:


> I don't get what is going on in here.
> System specs are 5900X, 2x8gb 3800mhz good timings, flat 15, 30, 45, tRFC 280, around 54ns and stable, as well the rest of system with reasonable voltages for soc, vddg, vddp etc. A rm850x pushing the rig. Cooled by 3x360 in the case plus a 1260mm external rad. Mine 6900xt is a reference model, latest drivers.
> 
> When installing a new driver I always DDU the previous driver and run timespy getting 20.5K in the graphics score give or take, a 15K in the cpu score. That is, no OC in the gpu or tweaks in the cpu, no changes using MPT as well.
> 
> If I increase the power limit to the max, I start getting 21.5K in the graphics score. However, if I set a clock for example of 2700 and 2600 I get a lower score, way lower, around 18K. If I set 2700 as max, and 2200 as min, I get the same score as when having just the power limit increased that is, 21.5K. It feels like I can not breach 21.5K no matter what, and seems the best option is just to have the power limit all the way up without setting any clock.
> 
> If I set MPT to 300W max + power slicer all the way up (that is, 350W total), I get 21.9K.
> 
> Then I've tried to up the limit of the max voltage to 1200mV using MPT along with 350W max power draw but the card would bench at 500mhz only.
> 
> What's the matter? I've googled about this for quite a while now, and even here, but I don't know what I am missing.
> 
> Feels like the card has a limit, which I can not break. If the card is with auto clock but power limit all the way up, the card manages to push everything it can, and, if I keep the power slider all the way up and I dial a higher clock than the card would do when on auto clock, it would score less in timespy. Makes sense?\
> 
> EDIT
> 
> Some clocks and scores:
> 2650/2550 1175mV 350W total: 21495
> 2650/2400 1175mV 350W total: 21930
> 2550/2400 1175mV 350W total: 21920
> 2550/2300 1175mV 350W total: 21917
> 2550/2300 1150mV 350W total: 22002
> 2550/2300 1135mV 350W total: 22013
> 2650/2300 1150mV 350W total: 22090


You are power limited; 350W is way too low to max the card out in Time Spy. I'm benching Time Spy with a 450W power limit @ 2700MHz, 1.193V.


----------



## MotomEniac

thomasck said:


> I don't get what is going on in here.
> ...
> Some clocks and scores:
> 2650/2550 1175mV 350W total: 21495
> 2650/2400 1175mV 350W total: 21930
> 2550/2400 1175mV 350W total: 21920
> 2550/2300 1175mV 350W total: 21917
> 2550/2300 1150mV 350W total: 22002
> 2550/2300 1135mV 350W total: 22013
> 2650/2300 1150mV 350W total: 22090


What is your actual under-load voltage though, i.e. VDDCR_GFX in HWiNFO64? Maybe your card's voltage regulation is too gentle and you have a very large voltage drop under load. Also, is your card on stock cooling or watercooled?


----------



## J7SC

nyk20z3 said:


> View attachment 2570885
> 
> View attachment 2570886
> 
> View attachment 2570884


Nice ! I've been using a similar block on my Strix 3090, no complaints after a year+, looks pristine internally and provides really good cooling even at 520W and beyond...I really like Phanteks blocks and apart from the Ampere one, I also use two of their Glacier series for X570 CPUs (3950x and 5950X). My custom 3x8 pin 6900XT uses the Bykski block, though - also a very good product.


----------



## thomasck

Sufferage said:


> You are power limited, 350W is way too low to max the card out in timespy, i'm benching timespy with a 450W power limit @2700MHz, 1.193v.


So what I wrote kinda makes sense. I'm gonna try a higher power limit of 450W to see how that goes; the RM850x should handle that just fine, at least for benching.



MotomEniac said:


> What is your actual underload voltage though, i.e. VDDCR_GFX in HWInfo64. Maybe your cards voltage regulation is to gentle and you have very massive V drop under load. Also your card has stock cooling or watercooled?


It's never even close to 1175mV; it's around 1000-1090mV while benching, but if I set 1060mV for gaming it crashes. I will try a higher power limit as mentioned above. The card is watercooled, a bit overkill.


----------



## POMAHCHRONOS

So I did GC Extreme but full cover method on my 6900XT Toxic Air-cooled

Should I tear it apart again and do this line method plus heating up the paste and GPU with a hair dryer?



https://www.tomshardware.com/news/sausage-style-gpu-thermal-paste-application-results-in-lowset-temps


----------



## Benjoo



Sufferage said:


> You are power limited, 350W is way too low to max the card out in timespy, i'm benching timespy with a 450W power limit @2700MHz, 1.193v.


Hey Sufferage, can you help me regarding the 6900 XT?


----------



## MotomEniac

thomasck said:


> It's never even close to 1175mV, it's around 1000-1090mV while benching, but if I set 1060mV to game it would crash. I will try a higher power limit as said above. The card is watercooled, a bit overkill.


This is kind of what I predicted; even 1090mV under a 380W load is a very big Vdrop. So unfortunately I suspect your card will not go any higher until it holds at least 1120-1130mV under load. You can raise your voltage using the Temp Dependent Vmin settings in MPT.
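The sag being described here follows a simple load-line model, V_load ≈ V_set − I·R_LL. A toy Python sketch; the current and load-line values are made-up illustrations, not measurements from any real card:

```python
def v_under_load(v_set_mv: float, current_a: float, loadline_mohm: float) -> float:
    """Idealized load line: the core voltage sags linearly with current draw."""
    return v_set_mv - current_a * loadline_mohm

# Hypothetical numbers: 1175 mV requested, ~300 A core current, 0.25 mOhm
# effective load line -> about 75 mV of droop under load.
print(v_under_load(1175, 300, 0.25))  # 1100.0
```

This is also why raising the set voltage (or Temp Dependent Vmin in MPT) can restore under-load voltage: the droop stays roughly the same, but the starting point moves up.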


----------



## MotomEniac

MotomEniac said:


> Just in case anyone will face the same issue in future. So the story is I've started facing the *black screen with complete system hang*, only hard power off is the option when playing games. So I started to test with different stability tests and found that Time Spy Extreme in stress test mode is the best to encounter an issue. So my PC black screen after 1-4 runs of it. That was scary time to me, as I clearly understand that it is a GPU issue and I've already tried everything driver related(changed few versions, DDU installs, etc) and overclocking related(erase SPPT via MPT, reset all overclocking in AMD Software). So I almost prepared myself to GPU change...😠And then I occasionally close *Riva Tuner Statistics* Server which I'm using for OSD - and BOOM *all the symptoms gone*, 20 successful runs of TSE on every overclock I previously use.


Continuing my saga with the black screen issue, which started when I switched back and forth between the *22.8.2* and *22.7.1* drivers. Previously I thought *Riva Tuner Statistics Server* was causing it, but then I realized that was just a coincidence: resetting AMD Software to defaults had also disabled *Re-BAR*. So now I'm sure that *Re-BAR is the reason for my black screen problem*, because this time I did back-to-back tests twice. The result: when I activate Re-BAR my system black-screens in Time Spy Extreme and some specific games (Anno 1800), and with it disabled it completes 20 runs.


----------



## Frosted racquet

MotomEniac said:


> So the result is when I activate Re-BAR my system goes to black screen in Time Spy Extreme and just some specific games(Anno 1800) and with it disabled completes 20 runs of it.


In my experience this indicates that the VRAM OC isn't stable.


----------



## Sufferage

Benjoo said:


> Hey Sufferage, can you help me regarding the 6900 XT?


Most likely, but keep it in English, as there are lots of helpful people around here who may also be able to help you but won't understand German.


----------



## MotomEniac

Frosted racquet said:


> In my experience this indicates that the VRAM OC isn't stable.


Thanks for the advice 🍻, will check that. I'm pretty sure it black-screens even at default clocks, but I will recheck.


----------



## Nechana

Hi guys, I have a big request: is there someone here with a Sapphire RX 6900 XT Nitro+ SE edition? I need the VGA BIOS for this card. I suspect my card was used for mining with a different BIOS, and because the original SE BIOS is nowhere to be found, someone slapped some other BIOS on it and the card behaves weirdly. Also, what is your default core MHz with this card? Thank you very much.


----------



## MotomEniac

Frosted racquet said:


> In my experience this indicates that the VRAM OC isn't stable.


Ok, I rechecked and unfortunately my system reproduces the black screen 100% of the time in TSE with re-BAR enabled, even at default memory clocks. On the other hand, it does 20 successful runs of TSE at 2120MHz with re-BAR disabled. I suspect a software issue rather than hardware here.


----------



## Blameless

MotomEniac said:


> Ok, I rechecked and unfortunately my system is doing 100% repro black screen on TSE with re-BAR enabled on default memory clocks. On other hand it is doing 20 successful runs of TSE on 2120 MHz with re-BAR disabled. I suspect software issue rather than hardware here.


Problems with ReBAR enabled can be VRAM, but they can also be too high of an SoC or LCLK on the GPU, or an unstable main memory subsystem or PCI-E controller on the CPU. In your case it sounds like the issue is not the card itself, but still in the realm of hardware or hardware configuration.

Can't rule it out at this point, but it's generally not a software issue if other people using the same software lack the issue.


----------



## Benjoo

Sufferage said:


> Most likely, but keep it in english as there are lots of helpful people around here that may as well be able to help you, but won't understand german


haha ok, but my English is not so good.
But can you help me on Discord to overclock the CPU/GPU to improve my performance?


----------



## Sufferage

Benjoo said:


> haha ok but my english Is Not so good
> But can you help ne on discord to overclock the CPU/gpu Too improve my Performance ?


Well, there are tons of guides out there explaining in great detail what to do; I suggest you start here.
It's a pretty good guide that covers the basics as well as advanced overclocking using MPT, and it's in German too.


----------



## MotomEniac

Blameless said:


> Problems with ReBAR enabled can be VRAM, but they can also be too high of an SoC or LCLK on the GPU, or an unstable main memory subsystem or PCI-E controller on the CPU. In your case it sounds like the issue is not the card itself, but still in the realm of hardware or hardware configuration.
> 
> Can't rule it out at this point, but it's generally not a software issue if other people using the same software lack the issue.


Yes, I agree with your reasoning as a theoretical cause. But first, the memory subsystem is completely stable, tested via multiple tools (Karhu, MemTest Pro, etc.); moreover, the issue happens with completely stock CPU and RAM settings in the BIOS. Second, even if this doesn't happen to others, that doesn't completely rule out a software issue; maybe something is broken at the registry level, who knows? The reason I'm still leaning towards software rather than hardware is HOW the issue happens: with re-BAR disabled it is rock solid, and with it enabled it crashes very fast, with the same time-to-crash on completely stock settings as with a pretty tight OC. The only thing I still need to double-check is deleting all the MPT registry settings and retesting.


----------



## sry.fat.irl

Is it feasible for a 6900 XT to come with a 6950 XT BIOS? My air-cooled Sapphire Toxic 6900 XT comes with the 6950 XT BIOS and reads back as a KXTX.


----------



## NiteNinja

Well, it's been a month. I sent my 6900XT Waterforce to Gigabyte under warranty, and even though I bought it second hand off eBay and found it had a crypto-miner-modified BIOS on it that prevented it from running fullscreen games, they graciously repaired it and I got it back.

Everything works great; I'm just having issues with SteamVR, but it seems that's common with 6XXX series cards.

Heck, this rescued mining card is near the top of its class with just a click of the auto-overclock GPU button. Glad I didn't need to file an eBay claim.


----------



## guskline

NiteNinja, thanks for the post, and kudos to Gigabyte for the repair!


----------



## NiteNinja

Fixed my SteamVR issues: it turned out I had the Virtual Desktop streamer set to HEVC encoding instead of H264. Which is bizarre, because my 5700 XT would do HEVC no problem, but it seems the 6900 XT does not.

Still a wonderful VR experience; everything's cranked to the max at 120 FPS in everything I tried.

I also played Quake II RTX; I was getting about 50 FPS at 3440x1440, and admiring some of the ray tracing details was pretty nice. Minecraft RTX was a little less enjoyable, as I used to play with Sonic Ether's path-traced global illumination, which I think does the ray tracing much better. Either way, for the people who say this card does badly at ray tracing: sure, compared to Nvidia's newest offerings, but it's not a bad experience at all in my hands.


----------



## MustangIsBoss

Does anyone know if there's a way to overcome the VRAM clock limitation? I have an Asrock RX 6900 XT Phantom Gaming D and it happily does 2150mhz with performance scaling to 2150mhz with Fast timings. I've tried Fast Timings Level 2, but that seems to not apply properly most of the time, and when it does, it produces heavy artifacting and then a system lockup. Today I tried raising the max VRAM clock limit, and the driver does the failsafe even at 2152mhz set VRAM clock. Card didn't freak out at 2250mhz VRAM set for desktop usage, which I guess is a little bit promising even though the core was restricted to 500mhz. Interestingly it goes back to normal if I set <=2150mhz VRAM and it remembers the core clock setting I had before it limits the core to 500mhz. 

I've found the LC VBIOS, and am now considering trying to flash that. I have a GTX 1660 I can use for a re-flash in the probable scenario of the RX 6900 XT getting soft bricked. I've looked at the hardwareluxx threads a decent amount already, gonna try increasing the FCLK in the meantime.

I'm focusing on the VRAM for now because it's a low-power (and therefore heat) increase for the performance boost, which is desirable when trying to cool 400W constant draw in Metro Exodus on air cooling. I do have a G12 and AIO to use, but I'm concerned that the VRAM and VRMs may need more substantial heatsinks than the ones I bought for running my RX 5700 XT with the G12. (400W software draw + little 1cm^3 heatsinks per mosfet laughed at even my 118CFM 120mm x 38mm fan's attempt to keep the VRM cool)

The core does around 2700mhz or so max clock stable, at least from my testing in Metro Exodus. Going higher would require the temp_dependent_vmin trick and increased voltages, which aren't of use on the air cooler. I do have liquid metal from using it on my 5700 XT (on both the reference cooler and the G12), so that may be beneficial even on the stock air cooler.


----------



## Justye95

Hi everyone, I hope you can help me; I am having a strange problem with my 6900 XT. When I am playing, or even just surfing the internet, the PC freezes completely. I have to manually restart it, and after the restart the GPU drivers are gone; to restore them I have to reinstall them. What can it be?
I have tried the 22.8.2 drivers and it happens often; on 22.5.1 it happened only once.
I am on Windows 11.
To uninstall the drivers I use DDU (Display Driver Uninstaller).


----------



## alceryes

Justye95 said:


> Hi everyone, I hope you can help me, I am having a strange problem on my 6900 XT. When I am playing or even just surfing the internet the PC freezes completely, I have to manually restart the PC and when I restart the gpu drivers are gone, to restore them I have to reinstall them. What can it be?
> I have tried the 22.8.2 drivers and it happens to me often, on 22.5.1 it happened to me only once
> I have windows 11
> to uninstall the drivers use ddu unistaller


This can be caused by GPU, CPU, RAM, motherboard, or even drives failing.
Best thing to do is start testing each component separately. Test RAM by booting from a Memtest86+ USB. Test CPU by seeing if it can get through at least 15 mins of a Prime95 blended test (all AVX off). Test GPU by running the fuzzy donut (furmark). Run each drive's manufacturer diagnostic test.


----------



## sry.fat.irl

Justye95 said:


> Hi everyone, I hope you can help me, I am having a strange problem on my 6900 XT. When I am playing or even just surfing the internet the PC freezes completely, I have to manually restart the PC and when I restart the gpu drivers are gone, to restore them I have to reinstall them. What can it be?
> I have tried the 22.8.2 drivers and it happens to me often, on 22.5.1 it happened to me only once
> I have windows 11
> to uninstall the drivers use ddu unistaller


Any chance you played with the SoC or FCLK settings within MPT? It sounds like what was happening to me a few months ago, and it turned out my PSU fan was no longer kicking on. If you have the option, have your PSU checked out by a repair shop; but if you can afford one, you might as well just pick one up to be on the sure side that it's not the issue.

Also, recheck and reseat your GPU; sometimes it gets wonky even if you don't move it. Reseat the RAM sticks, just reseat everything that may have moved. Computer hardware does weird **** when we go to sleep lol. Check the PSU-to-PCIe connections too. Hope that helps a little as a checklist.


----------



## Justye95

sry.fat.irl said:


> Any chance you played with the SoC or FCLK settings within MPT? It sounds like what was happening to me a few months ago, and it turned out my PSU fan was no longer kicking on. If you have the option, have your PSU checked out by a repair shop; but if you can afford one, you might as well just pick one up to be sure it's not the issue.
> 
> Also, recheck and reseat your GPU; sometimes it gets wonky even if you don't move it. Reseat the RAM sticks, just reseat everything that may have moved. Computer hardware does weird **** when we go to sleep lol. Check the PSU-to-PCIe connections too. Hope that helps a little as a checklist.


The power supply appears to be working correctly. The PC does not turn off by itself, it only freezes. Could it be a driver problem? I'm noticing it only does this with the 22.8.2 drivers.


----------



## sry.fat.irl

Justye95 said:


> The power supply appears to be working correctly. The PC does not turn off by itself, it only freezes. Could it be a driver problem? I'm noticing it only does this with the 22.8.2 drivers.


Oh. I tried 22.8.1 and I was getting stop errors, so I went back to using the AmerNime modded drivers and I'm running the 22.7.1 drivers; a lot of people are still on the 22.6.x drivers too. I can't remember where I read it, but I believe the 22.8.x drivers for the 6800 and 6900 XT have a PCIe bug, so that may be your issue. You could try the modded version; the installation package is really easy, with a step-by-step version or a CLI version you do yourself. Or just get AMD 22.6.2. It can also be a Microsoft issue, because some of the updated AMD drivers have problems with the version of Windows you're running, since MS Defender likes to get in the way. Hope you figure it out, because nothing sucks more than being mid-game when you freeze.


----------



## Justye95

I hope it is a software and not a hardware problem, especially since the video card otherwise works very well.
If it's a software problem I don't mind; they will fix it.


----------



## Justye95

sry.fat.irl said:


> oh. i tried the 22.8.1 and i was getting stop errors. so i went back to using the amernime modded drivers. running the 22.7.1 drivers, a lot of people are still using the 22.6.X drivers too. i cant remember where i read it but the 22.8.X drivers for the 6800 and 6900xt have a pcie bug i believe so it may be your issue. you could try the modded version the installation package is really easy theres a step by step version or the cli version which you do yourself. or just get the amd 22.6.2 but it can also be a microsoft issue because a lot of the amd updated drivers some have issues with the version of windows youre running because ms defender is short bus special. hope you figure it out. cuz nothing sucks more than being mid game and you freeze


I hope it is a software and not a hardware problem, especially since the video card otherwise works very well.
If it's a software problem I don't mind; they will fix it.


----------



## MotomEniac

sry.fat.irl said:


> oh. i tried the 22.8.1 and i was getting stop errors. so i went back to using the amernime modded drivers. running the 22.7.1 drivers, a lot of people are still using the 22.6.X drivers too. i cant remember where i read it but the 22.8.X drivers for the 6800 and 6900xt have a pcie bug i believe so it may be your issue. you could try the modded version the installation package is really easy theres a step by step version or the cli version which you do yourself. or just get the amd 22.6.2 but it can also be a microsoft issue because a lot of the amd updated drivers some have issues with the version of windows youre running because ms defender is short bus special. hope you figure it out. cuz nothing sucks more than being mid game and you freeze


That "PCI bug" is interesting, as my black-screen problem with reBAR enabled started when I installed 22.8.2. It could help find some clues here. I've seen users reporting various problems with the latest drivers, so if this "PCI bug" is real it would explain everything.


----------



## sry.fat.irl

MotomEniac said:


> That "PCI bug" is something interesting, as my problem with blackscreen when reBAR enabled started when I install 22.8.2. It can help to find some clues here. I saw different problems users are reporting related to latest dirvers, so if this "PCI bug" is real it will explain everything


It affects certain GPU and mobo combinations; primarily a lot of Asus boards and Gigabyte/Aorus boards get the issue. It's on Guru3D if you look up the AmerNime drivers; the forums go into detail on what triggers it. It's kind of like a C-state/PCIe issue, and you need to download updated PCIe GPIO drivers or something. I can't remember what steps I took because it was a few months ago (they're included in the AmerNime modded zip file). I'll try to look for the initial post in the developer forums.


----------



## sry.fat.irl

MotomEniac said:


> That "PCI bug" is something interesting, as my problem with blackscreen when reBAR enabled started when I install 22.8.2. It can help to find some clues here. I saw different problems users are reporting related to latest dirvers, so if this "PCI bug" is real it will explain everything


It's a driver issue on your end. Anything to do with Smart Access Memory is, 98% of the time, a compatibility mismatch between the driver's hardware ID and the AGESA list. A lot of boards have not received any BIOS updates, which are also what update the list of compatible hardware IDs. That's one of the reasons it takes forever, and sometimes it's safer to stay on the last working driver. For our cards, the safe last update was 22.6.2. You can always get updated features separately; for example, stay on an older driver and only update the OpenGL or Vulkan drivers. You don't always have to grab the freshest drivers, especially if the release notes say nothing about benefiting your card or if the hotfix targets a specific game you don't play.


----------



## MotomEniac

sry.fat.irl said:


> it affects certain gpu and mobo combinations. primarily a lot of asus boards and gigabyte/aorus boards get the issue. its on guru3d if you look up the amernime drivers. it goes into detail on the forums what triggers it. kind of like a cstate / pcie issue. you need to download updated pcie gpio drivers or something. i cant remember what steps i took cuz it was a few months ago. (included in the amernime modded zip file.) i ll try and look for the initial post in the developer forums.


Wow, that is very insightful; I will dig into this info, thank you very much. I was already pretty sure it was something with the driver, because the system is completely stable without Smart Access. BTW, I tried rolling back to 22.5.1, which is the version AMD currently recommends, and unfortunately it had no effect. I suppose DDU/AMD Cleanup does not clean the PCI-related stuff, so whatever 22.8.2 installed is still there. As a remark, I have an Asus board; hope I find a solution on the Guru3D forum.


----------



## sry.fat.irl

MotomEniac said:


> Wow that is very insightful, I will dig around this info, thank you very much. Because I was pretty shure it is something with driver, cos complete stability without smart access. BTW i tried to rollback to 22.5.1 which are recommended version by AMD right now, and it had no effect unfortunately. I suppose DDU\AMD cleanup does not clean PCI related stuff, and after 22.8.2 was installed - it is there. As a remark I have Asus board, hope I will find a solution on Guru3D forum


Here ya go, you can DL it from my GDrive link: the PCIe drivers. Just read the included .txt file for directions.

AMDGPIO_PciBus.rar

Right now I'm on the 22.7.1 drivers with the Vulkan and OpenGL updates; haven't had any major issues.


----------



## sry.fat.irl

I want to ask you guys for your thoughts on something that's not really a problem, but I'm curious whether my 6900XT is secretly a 6950XT that missed the binning requirement lol.

So the 6950XT runs BIOS version 020.001.000.071.000000 according to the TechPowerUp database, and the 6900XT's is different, but my 6900XT has the same BIOS as the 6950XT. And when you set the BIOS switch between the notches (not really in a position), run GPU-Z, pull the BIOS, and run it through a hex BIOS reader, it clearly shows that I have the KXTX board (again, that's the unlocked XTX).







Just curious what your thoughts are.
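The hex-reader trick described above can be reproduced in a few lines of Python: VBIOS images carry their board ID as plain ASCII, so scanning a dump for printable runs surfaces markers like KXTX. The file name and the marker bytes below are only illustrative, not taken from a real dump:

```python
import re

def ascii_strings(blob: bytes, min_len: int = 4):
    """Return printable-ASCII runs of at least min_len bytes from a binary blob."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, blob)]

# Stand-in for a GPU-Z dump; a real one would be open("vbios.rom", "rb").read().
fake_rom = b"\x00\x55\xaaNAVI21 KXTX A1\x00\xff113-XXXXXXX-XXX\x00"
print(ascii_strings(fake_rom))  # ['NAVI21 KXTX A1', '113-XXXXXXX-XXX']
```

The same scan on a genuine dump is essentially what a hex viewer shows you, minus the manual scrolling.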


----------



## NiteNinja

Justye95 said:


> Hi everyone, I hope you can help me, I am having a strange problem on my 6900 XT. When I am playing or even just surfing the internet the PC freezes completely, I have to manually restart the PC and when I restart the gpu drivers are gone, to restore them I have to reinstall them. What can it be?
> I have tried the 22.8.2 drivers and it happens to me often, on 22.5.1 it happened to me only once
> I have windows 11
> to uninstall the drivers use ddu unistaller


From experience, this could be caused by a corrupted v-bios. My first line of troubleshooting I tell people on reddit, is if your card has a toggle bios selector switch on it, switch to your silent bios and see if it stabilizes.

If it's fine after that, then you have a corrupted performance V-bios.

Windows 11 is also notorious for installing its own drivers after you install the drivers from the website. When this happens, you can go into your device manager, right click on your graphics card, go to properties, click on the driver tab, and then press the button that says rollback driver. This will roll it back to your previously installed driver, and tell Windows update to knock it off when it comes to auto updating your graphics drivers for that particular version.



sry.fat.irl said:


> i want to ask you guys on your thoughts of my not really a problem but kind of curious if my 6900xt is secretly a 6950xt but missed the binning requirement lol.
> 
> so the 6950xt runs the 020.001.000.071.000000 bios version. according to techpowerup database. and the 6900xt is different. but my 6900xt has the same bios as the 6950xt.. and when you set the bios switch to not really a position but in between the notches. and run gpuz and pull the bios and run it through a biosreader (hex) i clearly shows that i have the kxtx board (again which is the unlocked xtx.)
> View attachment 2571764
> >>
> View attachment 2571760
> View attachment 2571765
> 
> 
> just curious to what your thoughts are.


My guess is that 6900XT production became so refined that they were cranking out cores that would qualify as XTXH, so they called those the 6950XT instead. My Gigabyte Waterforce 6900XT, which uses the XTXH core, is on par with any 6950XT.


----------



## colourcode

Can someone explain this to me...
I have the 6900xt Nitro+ SE. (NITRO+ AMD Radeon RX 6900 XT SE 16G GDDR6 (sapphiretech.com) )

Supposedly it's running at these clocks: 

GPU: Boost Clock: Up to 2365 MHz
GPU: Game Clock: Up to 2135 MHz

However, when I run it without changing any settings it boosts to around 2500 MHz when gaming - and dips constantly, due to overheating I'd guess.
In the AMD tuning page the default value for max clock is set at ~2527 or so.

I'm running it undervolted with the max frequency set to 2365, which seems to work; instead of running itself into the ground it's steady around 80 degrees.
Why is it running so much higher at stock?

Another question - did anyone change the paste on this model?
Looks like the parts with cooling pads are separate from the main cooler, but I haven't really found any definitive information.


----------



## sry.fat.irl

colourcode said:


> Can someone explain this to me...
> I have the 6900xt Nitro+ SE. (NITRO+ AMD Radeon RX 6900 XT SE 16G GDDR6 (sapphiretech.com) )
> 
> Supposedly it's running at these clocks:
> 
> GPU: Boost Clock: Up to 2365 MHz
> GPU: Game Clock: Up to 2135 MHz
> 
> However when I run it without any settings changed it boosts around 2500mhz when gaming - and dips constantly due to overheating (my guess).
> In the AMD tuning page the default value for max clock is set at ~2527 or so.
> 
> I'm running it undervolted and set the max freq to 2365 which seems to work, and instead of running itself into the ground it's steady around 80 degrees.
> Why is it running so much higher at stock?
> 
> Another question - did anyone change the paste on this model?
> Looks like the parts with cooling pads are separate from the main cooler but I havent really found any defintive information.


sapphire is my go-to brand. i've always had really good luck with binning, and their customer service is one of the best. as for the boost settings: if you leave boost off (trixx software) and use the custom setting in radeon, it won't come anywhere close to its thermal limit as long as you leave the memory timings at default. just remember that when you set memory to fast, it automatically raises the memory voltage to its maximum, which leads into your second question about paste. it's not so much the paste that's the limitation, but the vrm cooling itself. whenever i get a new card, regardless of brand but specifically for sapphire, i change the thermal pads. what i usually do is use thermal putty (thm-50, just google it, can't remember the brand) plus gelid thermal pads - the new ones are rated 15.1 W/mK. i put the putty more around the outer edges, ensuring heat-transfer contact is more efficient, not just from the top. (newer sapphire cards have this black shiny goopy paste.. can't remember what it's called, but it's a thermally conductive sealant of some sort.) i just recently did this to my toxic 6900xt, which is also air cooled.

with sapphire cards, at least in my experience, especially the nitro and toxic lines, the stock thermal pads come somewhat dry even brand new. so that's why i replace them right off the bat. for thermal paste i use thermaltake tx9, which is rated 14.9 W/mK.


----------



## sry.fat.irl

colourcode said:


> Can someone explain this to me...
> I have the 6900xt Nitro+ SE. (NITRO+ AMD Radeon RX 6900 XT SE 16G GDDR6 (sapphiretech.com) )
> 
> Supposedly it's running at these clocks:
> 
> GPU: Boost Clock: Up to 2365 MHz
> GPU: Game Clock: Up to 2135 MHz
> 
> However when I run it without any settings changed it boosts around 2500mhz when gaming - and dips constantly due to overheating (my guess).
> In the AMD tuning page the default value for max clock is set at ~2527 or so.
> 
> I'm running it undervolted and set the max freq to 2365 which seems to work, and instead of running itself into the ground it's steady around 80 degrees.
> Why is it running so much higher at stock?
> 
> Another question - did anyone change the paste on this model?
> Looks like the parts with cooling pads are separate from the main cooler but I havent really found any defintive information.


oh, and also: the default settings are usually not efficient, since it's AMD's one-size-fits-all type of program. Undervolting your card, although great for thermals, usually puts stress on the memory controller. You'll have to be comfortable adjusting settings. While undervolting is typically safer than overclocking, it does require some knowledge and tinkering to get right. you need to know the limitations of your card: if you drop the voltage, the power curve changes, so the vrm will draw power differently to achieve the new settings, and so on. each adjustment has a correlating positive or negative effect on another component.

best way i can describe it (goes for overclocking or undervolting): say you have a race car with 400hp. all the parts in that car, i.e. fuel pump, injectors, pistons, cooling, are specifically set up for that 400hp. the race car from the factory has a turbo and needs, say, 10psi to maintain that level of performance. now you decrease the psi to 5... the car (its computer) is now trying to compensate, hunting for a different group of settings to adjust for the loss of 5psi. so theoretically it's easier for the gpu to adjust for more power than for less power, because with less power each component that was pulling from that 1175mv is now pulling from, say, a lower 1100mv.

hope that kind of makes sense haha.
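The analogy maps onto the usual first-order model: dynamic power scales roughly with voltage squared times clock (P ≈ C·V²·f). A minimal sketch under that assumption - the constant C and the voltage/clock figures below are illustrative placeholders, not measured 6900 XT values:

```python
# Rough dynamic-power model: P ~ C * V^2 * f. The constant C and the
# voltage/clock figures are illustrative placeholders, not measured values.

def dynamic_power(voltage_v: float, clock_mhz: float, c: float = 0.1) -> float:
    """First-order dynamic power estimate in watts."""
    return c * voltage_v ** 2 * clock_mhz

stock = dynamic_power(1.175, 2500)        # stock-ish voltage at a 2500 MHz boost
undervolted = dynamic_power(1.100, 2500)  # same clock, 75 mV lower

print(f"stock:       {stock:.1f} W")        # -> 345.2 W
print(f"undervolted: {undervolted:.1f} W")  # -> 302.5 W
print(f"saved:       {100 * (1 - undervolted / stock):.1f} %")  # -> 12.4 %
```

In this toy model a ~6% voltage drop cuts dynamic power by roughly 12% at the same clock, which is why a mild undervolt barely costs performance while dropping temperatures noticeably.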


----------



## colourcode

sry.fat.irl said:


> sapphire is my go to brand. always have had really good luck with binning and overall customer service is one of the best. pertaining to their boost settings if you leave boost off (trixx software) and use [...]


The VRMs are not so hot according to HWiNFO. Running without touching any of the tuning settings, the hotspot usually goes 20+ degrees higher than the normal core temp.
Guess I'll see what this Trixx software can do. My undervolt is pretty mild: running it at 1150 instead of 1175, with 2365 on the memory.

I would give up all of this if the card were quiet, but since it gets hot the GPU fans are quite annoying.


----------



## 99belle99

colourcode said:


> The VRMS are not so hot according to HWinfo. Running without touching any of the tuning settings the hot spot usually go 20+ degrees higher than normal core temp.
> Guess i'll see what this Trixx software can do. My undervolting is pretty low. Running it at 1150 instead of 1175 with 2365 on the memory.
> 
> I would give up all of this if the card was quiet but since it gets hot the gpu fans are quite annoying.


It's normal for the hotspot to be 20 degrees higher than the core temp, especially on stock air cooling. When I game I turn the fans up above 75% on the curve in Radeon settings and just turn up the sound on the TV. Or even better, when I used to game with headphones you didn't hear the GPU fans at all.


----------



## colourcode

99belle99 said:


> That's normal about hotspot 20 degrees higher then core temp especially on stock air cooling. When I game I turn the fans up above 75% on the chart in radeon settings and just turn the sound up on Tv. Or even better when I used to game with headphones you do not hear the fans on GPU at all then.


I mean, there's almost no performance drop from undervolting slightly, with much lower temps and fan noise. But reading that previous reply makes me reconsider.

Downloaded Trixx now, but there aren't even any buttons to turn boost off 😵

I do get better benchmark scores with my undervolt than with stock; maybe I'll just keep it this way.


----------



## J7SC

...I prefer watercooling for 'always quiet performance', whether undervolted (or overvolted, per below)


----------



## Sufferage

colourcode said:


> I mean there is almost no performance drop for undervolting slightly woth mich lowe temp and fan noise. But reading that previous reply makes me reconsider.
> 
> Downloaded Trixx now but there arent even any buttons to turn boost off 😵
> 
> I do get better benchmark scores with my undervolt than with stock, maybe ill just keep it this way.


You can probably go a good deal lower with your undervolt, my Toxic can run most games @1062mv, just some older games may need 1081mv, boost clock @2504MHz.


----------



## colourcode

Sufferage said:


> You can probably go a good deal lower with your undervolt, my Toxic can run most games @1062mv, just some older games may need 1081mv, boost clock @2504MHz.


Toxic is the watercooled one, no? It has a better chip afaik.


----------



## Sufferage

colourcode said:


> Toxic is the watercooled, no? It has a better chip afaik.


No, it's an XTX chip just like on the Nitro+; mine (as always 😒) is no big winner in the silicon lottery.
I have the Air Cooled Edition but slapped a Bykski waterblock on the card. For the undervolt it's pretty meaningless whether the card is air or water cooled, but the motivation to keep voltages low is clearly higher on air: lower volts, lower temperatures, less noise.


----------



## sry.fat.irl

colourcode said:


> I mean there is almost no performance drop for undervolting slightly woth mich lowe temp and fan noise. But reading that previous reply makes me reconsider.
> 
> Downloaded Trixx now but there arent even any buttons to turn boost off 😵
> 
> I do get better benchmark scores with my undervolt than with stock, maybe ill just keep it this way.


you don't have the option?

i meant to click the trixx boost, but it's just a slider. also, if you have any of the options like fsr or rsr on in the amd software, it will disable the trixx boost option


----------



## sry.fat.irl

colourcode said:


> The VRMS are not so hot according to HWinfo. Running without touching any of the tuning settings the hot spot usually go 20+ degrees higher than normal core temp.
> Guess i'll see what this Trixx software can do. My undervolting is pretty low. Running it at 1150 instead of 1175 with 2365 on the memory.
> 
> I would give up all of this if the card was quiet but since it gets hot the gpu fans are quite annoying.


2365 on memory..? or on core clock? 

you can also check this page out, it has really informative info. more knowledge is always good, helps us all in the long run









Guide to GPU Core Clocks & Memory Clocks - Everything You Need To Know
Comparing GPUs by looking at their clock speeds can be confusing. There are a couple of things you have to keep in mind.
www.cgdirector.com


----------



## colourcode

sry.fat.irl said:


> 2365 on memory..? or on core clock?
> 
> you can also check this page out it has really informative info. more knowledge is always good. helps us all in the long run
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Guide to GPU Core Clocks & Memory Clocks - Everything You Need To Know
> 
> 
> Comparing GPUs by looking at their clock speeds can be confusing. There are a couple of things you have to keep in mind.
> 
> 
> 
> 
> www.cgdirector.com


----------



## MustangIsBoss

Spent most of today trying to get the AMD reference Liquid Cooled VBIOS flashed onto my RX 6900 XT Phantom Gaming D (an XTX card). Got there in the end, after needing to install Linux Mint to get AMDVBFLASH to recognize the card; it kept saying there was no adapter when I tried the list or flash commands. Also had to enable, then disable, Secure Boot, after which it successfully flashed the new VBIOS.

With this VBIOS, the card displays no output in BIOS or Safe Mode until you reach the Windows auto-login screen. My GTX 1660 does work, so I can use both for now and still access the BIOS, for example. I've DDU'd the drivers for both manufacturers in Safe Mode, then reinstalled the AMD driver (no display output on the monitors the RX 6900 XT is connected to until partway through the Adrenalin install). I was hoping it was my old MPT settings causing the 500 MHz core clock limit to kick in, but no such luck. I enabled temp_dependent_vmin and set the max/min voltages for core, SoC, and memory in MPT to the values of my original VBIOS, but no change (I tested temp_dependent_vmin by itself first to check). I'll likely have to reflash in a few days and just run 2150 MHz fast timings (level 2 hard-crashes even at 2000 MHz) and finally try increasing the FCLK/cache speed.

That's all I can think of right now, spent way too many hours on this today.

@*sry.fat.irl *
I played around with HxD and Atom Bios Reader early in the day, didn't really find anything of note.


----------



## lestatdk

If you try to go above 2150 on memory on an XTX card, it'll cap the core at 500 MHz max. No way around that, for now anyway.


----------



## Blameless

I just realized, after seeing a reference to spread spectrum in some Linux SMU documentation, that the memory clock differential between what is set and actual is completely explained by a -0.5% spread spectrum level.

Anyone know if spread spectrum is able to be disabled via the soft power play tables? MPT doesn't have any such option yet, but it has so many other obscure ones that almost anything accessible via the SMU driver might reasonably be changed via SPPTs.


----------



## hellm

Blameless said:


> I just realized, after seeing a reference to spread spectrum in some Linux SMU documentation, that the memory clock differential between what is set and actual is completely explained by a -0.5% spread spectrum level.
> 
> Anyone know if spread spectrum is able to be disabled via the soft power play tables? MPT doesn't have any such option yet, but it has so many other obscure ones that almost anything accessible via the SMU driver might reasonably be changed via SPPTs.


There is a lot about spread spectrum to be found in the (S)PPT, only no RDNA2 card has these values set; they are all zeroed out - everything in "MAJOR SECTION: BOARD PARAMETERS" is zeroed. So I never implemented these options in MPT.


Code:


  // SECTION: Clock Spread Spectrum

  // GFXCLK PLL Spread Spectrum
  uint8_t      PllGfxclkSpreadEnabled;   // on or off
  uint8_t      PllGfxclkSpreadPercent;   // Q4.4
  uint16_t     PllGfxclkSpreadFreq;      // kHz

  // GFXCLK DFLL Spread Spectrum
  uint8_t      DfllGfxclkSpreadEnabled;   // on or off
  uint8_t      DfllGfxclkSpreadPercent;   // Q4.4
  uint16_t     DfllGfxclkSpreadFreq;      // kHz

  // UCLK Spread Spectrum
  uint16_t     UclkSpreadPadding;
  uint16_t     UclkSpreadFreq;      // kHz

  // FCLK Spread Spectrum
  uint8_t      FclkSpreadEnabled;   // on or off
  uint8_t      FclkSpreadPercent;   // Q4.4
  uint16_t     FclkSpreadFreq;      // kHz

  ...

  // UCLK Spread Spectrum
  uint8_t      UclkSpreadPercent[16];

.. and I wonder what the 16 values for UCLK (= memory) are meant for. It's the only clock that still has discrete levels, and only 4 of them.

Maybe if I find the time - but right now I also have MCT, and some XOC people are waiting for the PerPart curve stuff I want to do this week, so... I don't know if I'll find the time before RDNA3. My guess is people would only want to deactivate it, and since it isn't activated through the PPT, we can't do anything about it anyway.

Edit:
..also, spread spectrum doesn't mean less than your set value. With this option enabled you would see the clock fluctuating, and the average would still be the value you set the clock rate to.


----------



## sry.fat.irl

@MustangIsBoss

you might want to check the registry. igor'sLAB has a program for this - either the program or the manual registry steps. (windows still doesn't always delete old drivers from the DriverStore or System32, and sometimes they can overlap. happened to me: MPT was showing I had 4 different 6900xt entries, and one day it rebooted and I got the dreaded d2, then a bunch of other codes)









igor’sLAB VGA Device-Manager Freeware - Detect and delete annoying graphics card duplicates in the registry | Practice | igor'sLAB
Who doesn't know them, the multiple entries of the supposedly same graphics card and the annoying consequences of this registry collecting mania? Sometimes you don't even consciously notice this and…
www.igorslab.de





it's good to have as well, because sometimes DDU misses things, especially when it's registry-related.

i only really saw a major difference when i hard-modded with the EVC2SX from Elmor Labs and a few shunts.



Blameless said:


> I just realized, after seeing a reference to spread spectrum in some Linux SMU documentation, that the memory clock differential between what is set and actual is completely explained by a -0.5% spread spectrum level.
> 
> Anyone know if spread spectrum is able to be disabled via the soft power play tables? MPT doesn't have any such option yet, but it has so many other obscure ones that almost anything accessible via the SMU driver might reasonably be changed via SPPTs.


there's a lot of information on this page pertaining to the adjustments and the other settings. i dunno if you've checked it out, but if not, here ya go

[Guide] - Navi 21 Max Overclocking Tutorial [6800 XT / 6900 XT] | Hardwareluxx


----------



## Blameless

hellm said:


> My guess is, people only would want to deactivate it, and since it isn't activated through PPT, we can't do anything about it anyway.


Even if it can't be disabled, tightening it might help.



hellm said:


> ..also. spread spectrum doesn't mean less than your set value. With this option enabled you would see the clock fluctuating, and the average would still be the value you set the clock rate to.


Down-spread spectrum clocking, which is typical in computing applications, does mean less than the set value (reducing performance), which would be the cap. Spread spectrum in general also tends to harm overclocking potential... it's purely for EMI regulation.
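To put a number on the down-spread case: with triangular modulation the clock sweeps between f_set and f_set·(1 - spread), so the time-averaged frequency sits spread/2 below the set value. A quick sketch, plain arithmetic, not tied to any particular card:

```python
# Average clock under triangular down-spread modulation: the clock sweeps
# between f_set and f_set * (1 - spread), so the mean is f_set * (1 - spread/2).

def downspread_average(f_set_mhz: float, spread: float) -> float:
    """Time-averaged frequency for a triangular down-spread profile."""
    return f_set_mhz * (1 - spread / 2)

# A 0.5% down-spread on a 2100 MHz memory clock:
print(downspread_average(2100.0, 0.005))  # ~2094.75 MHz
```

That 0.25% average deficit is small in itself, but comparable to some MPT-level tweaks, and the instantaneous excursions down to the full -0.5% are what tend to eat into overclocking margins.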


----------



## deadfelllow

Hello guys,

I've been suffering high temps with my Sapphire Toxic 6900XT EE. Tried 2394234 pastes and none of them worked. The only thing that worked is liquid metal. Comparisons below.

This is before, with Kryonaut Extreme, room temp 25C;










And this is now, with Conductonaut, room temp 25C












Before applying LM I polished all the SMDs.










Then I covered the SMDs with electrical tape










Then applied LM to both the GPU die and the contact plate.

















And the results are incredible.

I guess all that time I didn't have a contact problem after all - the problem was the pastes, even though I tried them all (MX-4, Kryonaut, Gelid, Thermalright).


----------



## hellm

Blameless said:


> Even if it can't be disabled, tightening it might help.
> 
> Down-spread spectrum clocking, which is typical in computing applications, does mean less than set value (reducing performance), which would be the cap. Spread spectrum in general also tends to harm overclocking potential...it's purely for EMI regulation.


If this is spread spectrum, which I doubt. Even if it were less, it should still be fluctuating; that's what spread spectrum or bit scrambling should do, afaik. But there is also no activate/deactivate byte for the UCLK memory stuff, so let's give it a try.

You can edit the setting in the SPPT yourself, if you want to. See what it does; I could implement it later in MPT if it actually works. It might just as well be completely ignored by the driver.
What you need is a hex editor like HxD or something similar. Save your SPPT as an .mpt file with MPT and open the file with the hex editor.
Here are the offsets:

0x0A16 1 byte DfllGfxclkSpreadEnabled 
0x0A17 1 byte DfllGfxclkSpreadPercent
0x0A18 2 bytes DfllGfxclkSpreadFreq

0x0A1c 2 bytes UclkSpreadFreq
0x0A4a - 0x0A59 1 byte UclkSpreadPercent [16]

The frequency is in kHz, and everything in the table is little endian. This means if you want to store 1000 kHz, use the Windows calculator (programmer mode) to see the hex number 0x3E8, and write the lower byte first: E8 03.
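The byte-twiddling can also be scripted rather than done by hand with the calculator. A small sketch: the 0x0A1C offset is taken from the list above, while the dummy table size and values are just for illustration:

```python
import struct

def khz_to_le_bytes(khz: int) -> bytes:
    """Encode a frequency in kHz as 2 bytes, little endian (low byte first)."""
    return struct.pack("<H", khz)

# The worked example above: 1000 kHz = 0x03E8 -> bytes E8 03
assert khz_to_le_bytes(1000) == b"\xe8\x03"

def patch_offset(data: bytearray, offset: int, value: bytes) -> None:
    """Overwrite len(value) bytes at the given offset, in place."""
    data[offset : offset + len(value)] = value

# Write UclkSpreadFreq (offset 0x0A1C) into a dummy table buffer.
table = bytearray(0x0B00)
patch_offset(table, 0x0A1C, khz_to_le_bytes(1000))
print(table[0x0A1C:0x0A1E].hex())  # -> e803
```

The same pattern works for the one-byte enable/percent fields; just pack with "<B" instead of "<H".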


----------



## deadfelllow

I just broke my personal record. I can push more, actually; this was a test run.









I scored 20 092 in Time Spy


AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## Blameless

hellm said:


> If this is spread spectrum, which i doubt. Even if it would be less, it should be still fluctuating, this is what spread spectrum or bit scrambling should do, afaik.


I have no real way of knowing if it's spread spectrum or not without testing it, but the offset does look suspiciously like down-spread.

Most everything expected to be sold in the US without modification ships with spread spectrum enabled by default in order to meet FCC Part 15 limits on EMI. I can't even disable spread spectrum on half of my motherboards without manually setting the reference clock, and I assumed it to be enabled on most GPUs and the like, though either with center-spread on the main performance-influencing clocks, or with an offset included to account for it.

You normally can't see the fluctuations without an oscilloscope either; without a sufficiently fast polling period, you'll just get the average frequency.



hellm said:


> But there is also no activate/deactivate byte for the UCLK memory stuff, so lets give it a try.
> 
> You can edit the setting in the SPPT by yourself, if you want to. See what it does, i could implement it later in MPT if it actually works. Might as well be completely ignored by the driver.
> What you need is a hex editor like HxD or something similar. Save your SPPT as .mpt-file with MPT and open the file with the hex editor.
> Here are the offsets:
> 
> 0x0A16 1 byte DfllGfxclkSpreadEnabled
> 0x0A17 1 byte DfllGfxclkSpreadPercent
> 0x0A18 2 bytes DfllGfxclkSpreadFreq
> 
> 0x0A1c 2 bytes UclkSpreadFreq
> 0x0A4a - 0x0A59 1 byte UclkSpreadPercent [16]
> 
> The frequency is in kHz, and everything in the table is in little endian, this means if you want to save 1000 kHz use windows calculator (programming) to see the hex number 0x3E8 and write the lower byte first: E8 03.


Thanks. I'll play with it a bit to see if it does anything.


----------



## carmack

Deadfelllow, why only 30 seconds? Push it for at least 3 minutes.

These are the temperatures (74C edge, 107C hotspot) I got with a Thermalright Archon SB-E with 2x 1500 rpm push-pull on a 6900 XT, also with liquid metal (460 W GPU power, 547 W TGP), at 27C ambient.


----------



## deadfelllow

carmack said:


> Deadfelllow, why only 30 seconds, push it at least for 3 minutes.
> 
> This is the temperatures (74 C edge, 107 C hotspot) i got with thermalright archon SB-e with 2x1500 rpm push-pull on 6900 xt also with liquid metal (460 gpu power, 547 W gpt) at 27 ambient temp.
> 
> View attachment 2572190
> View attachment 2572191


This is after 5 minutes of FurMark. 470 W core power, 550 W total. Istanbul is still hot though. Temp outside is 29C, room 25-26C.


----------



## Blameless

@hellm 

I started by setting a UclkSpreadPercent of "1" on all sixteen (channels?) to see if it did anything. It didn't change the reported memory clock in the slightest, but it definitely did something, as it made the memory on my card unstable. So I'm not sure what it's doing, and I don't have the tools to analyze spread spectrum directly, but it is being parsed by the driver.


----------



## MustangIsBoss

sry.fat.irl said:


> @ mustanglsboss
> 
> you might want to check the registry. igorslab has this program. either the program or the steps going into registry manually. (windows still doesnt know how to delete old drivers from the driverstore or system32 sometimes they can overlap. happened to me mpt was showing i had 4 different 6900xt and one day it rebooted and i got the dreaded d2 then a bunch of other codes)
> 
> 
> 
> 
> 
> 
> 
> 
> 
> igor’sLAB VGA Device-Manager Freeware - Detect and delete annoying graphics card duplicates in the registry | Practice | igor'sLAB
> 
> 
> Who doesn't know them, the multiple entries of the supposedly same graphics card and the annoying consequences of this registry collecting mania? Sometimes you don't even consciously notice this and…
> 
> 
> 
> 
> www.igorslab.de
> 
> 
> 
> 
> 
> its good to have as well cuz sometimes ddu misses things especially when its registry related.


It's funny that you posted that, I ended up doing exactly that yesterday after I flashed the original VBIOS.


----------



## PJVol

Blameless said:


> Down-spread spectrum clocking, which is typical in computing applications, does mean less than set value (reducing performance), which would be the cap


It should be drifting within -0.5%, up and down, per the PCIe spec, so I doubt it'd affect performance in any meaningful way. More so when the interconnect endpoints have their own clock source and both support SSC.

I think it's not necessarily a bad thing overall, despite the recent habit of turning it off wherever possible )
Afaik most of Zen's SerDes PLLs have built-in SSC modulators.


----------



## Blameless

PJVol said:


> It should be drifting within -0.5% up and down as per PCIe specs, so I doubt it'd affect performance in any meaningful way. More so when interconnect points have their own clock source and both support SSC.


The average clock rate would be a quarter of a percent less, which would be a larger performance differential in and of itself than a lot of the tweaks that can be done to these parts with MPT. For most though, it's not the direct impact on performance that is the problem, it's the loss in overclocking margins.



PJVol said:


> I think it's not necessarily a bad thing overall, despite the recent habit of turning it off wherever possible


If the EMI mitigation provides no advantages (all of these clocks are way above the point they'd impact audio,well below the point they'd harm a 5-6GHz network, and I'm not sure where else it would be an issue, for my setups) then the loss in overclocking potential makes turning off spread-spectrum worth it.

Enabling it (or increasing it to 1% presumably) on the UCLK of my 6800 XT took my maximum unconditionally stable memory clock (no errors and consistently higher benchmark results while looping Superposition 4k or benching large DAG sizes with gminer) from 2106 to less than stock (~2000).



PJVol said:


> Afaik most of Zen serdes pll's have built-in SSC modulators.


I'm sure they do; Navi certainly seems to. I also suspect they'd OC better if the spread spectrum feature could be disabled.


----------



## ilmazzo

Why is furmark still around ffs?


----------



## alceryes

ilmazzo said:


> Why is furmark still around ffs?


Because, at max settings, it can still cause powerful GPUs to struggle.
Also, the Furmark of today is not the same as the Furmark of 10 years ago. Furmark has become more and more demanding, in step with more and more powerful cards.


----------



## PJVol

Blameless said:


> If the EMI mitigation provides no advantages (all of these clocks are way above the point they'd impact audio,well below the point they'd harm a 5-6GHz network, and I'm not sure where else it would be an issue, for my setups) then the loss in overclocking potential makes turning off spread-spectrum worth it.


EMI may not be the only reason to implement SSC in a clock generator, outside of the bus-interconnect scope.
Here is a quote from the AMD patent (*Adaptive DCO VF curve slope control*), where an SS generator is implemented in an oscillator circuit whose purpose is to minimize the impact of voltage noise on clock stability.


> Deterministic noise sources may include noise such as, without limitation, crosstalk between adjacent signal traces, *electromagnetic interference radiation*, substrate noise, multiple gate switching, and simultaneously switching gates. Random noise sources may include noise such as, without limitation, thermal noise associated with electron flow, shot noise due to potential barriers in semiconductors, flicker noise associated with crystal surface defects in semiconductors.


May not be directly related, but probably you'll find something interesting here as well:

US20190319609A1 - Adaptive oscillator for clock generation - Google Patents


----------



## ilmazzo

alceryes said:


> Because, at max settings, it can still cause powerful GPUs to struggle.
> Also, the Furmark of today is not the same as the Furmark of 10 years ago. Furmark has become more and more demanding, in step with more and more powerful cards.


Why use a power virus as a real-world scenario? That's my concern...


----------



## Blameless

ilmazzo said:


> Why use a Power virus as a real case scenario, this is my concern...


It's not meant to be a real-world scenario. It's not even a good stability test as it will run fine with settings that would be highly unstable in many other apps. However, it is a great test of cooling and power delivery, able to push most GPUs to any arbitrary power limit set.

I wouldn't be without it in my list of test applications.


----------



## ilmazzo

So basically a boost verifier at the worst case for temp and power usage right now?
Well, I don't disagree if it sits in your OC workflow; I just think that it's validating a gaming workload you'll basically never have.

I haven't used it since, what, 2013-14... to each his own


----------



## Trevbev

I'm having weird Time Spy graphics scores and I can't work out why.
I've got a 5600X and a 6900 XT; in the past I could get 24k in a very cold room, and I'm sure I could do 21-22k at normal ambient temperature.
Now I can't get above the 19000s in TS graphics.
Also, the clock speed drops quite a lot at the beginning of GT2, to like 2500 MHz - is this normal?
Here's a recent run: I scored 15 711 in Time Spy
Here's an old run: I scored 18 683 in Time Spy
I wasn't using MPT for either result.
I noticed that old drivers aren't valid any more. Have they changed the way scoring works?
Maybe I'm just using bad drivers - 22.6.1?


----------



## MotomEniac

Trevbev said:


> I'm having weird time spy graphics scores and I can't work out why.
> I've got a 5600x and 6900xt and in the past I could get 24k in a very cold room and I'm sure I could 21-22k in normal amient temperature.
> Now I got can't above 1900s TS graphics.
> Also the clock speed drops quite a lot at the beginning of GT2 to like 2500Mhz is this normal?
> Here's a recent run I scored 15 711 in Time Spy
> Here's an old run I scored 18 683 in Time Spy
> I wasn't using MPT for either result.
> I noticed that old drivers aren't valid any more. Have they changed the way scoring works?
> Maybe I'm just using bad drivers - 22.6.1?


hm, that's strange. I'm wondering what kind of clocks you see in the detailed monitoring after the run. That's my daily OC with average core clock at ~2600 MHz, and it scores above 23k.







The difference is that I'm using MPT, and my power limit is set to 365 W +15% in Wattman = 418 W. And if I'm not wrong, to maintain that clock in some parts of TS scene 2 it draws ~400 W. So I'm curious to see your clock graph; it should drop to the low 2400s in scene 2 with the stock power limit...


----------



## 99belle99

@Trevbev That happens to me from time to time, but it's due to me locking the frame rate to 120 Hz (my 4K TV is 120 Hz), so when playing games I lock it and sometimes forget when I run a bench - but I know what it is straight away.


----------



## Trevbev

I'll double-check that there's no limit on. I also turned off FreeSync.
The GPU usage was 99% for the whole run, so I don't think that's the cause.

Average frequency was apparently 2695, but it was dropping to 2500 in GT2


----------



## 99belle99

Trevbev said:


> I’ll double check that there’s no limit on. I also turned off free sync.
> The gpu usage was 99% for the whole run so I don’t think that’s the cause.
> 
> Average frequency was apparently 2695 but was dropping to 2500 in gt2


Before you were having problems you were getting great scores - is that an XTXH model?

This is my best GPU score. Reference model with stock cooler: I scored 20 172 in Time Spy

This is my best overall score due to CPU scoring higher: I scored 20 223 in Time Spy


----------



## lestatdk

I don't get how it can average almost 2700 with that score?

My best is with a 2605 average, and I got a 23672 GPU score










I scored 20 958 in Time Spy


AMD Ryzen 7 5800X3D, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## lestatdk




----------



## deadfelllow

lestatdk said:


> I don't get how it can average almost 2700 with that score ?
> 
> My best is with 2605 average and I got 23672 GPU score
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 20 958 in Time Spy
> 
> 
> AMD Ryzen 7 5800X3D, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Maybe clock stretching?
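For context, clock stretching is when the silicon internally slows its pipeline under voltage droop while telemetry still reports the requested frequency, so the reported clock climbs but the score does not. A crude way to screen for it is to compare points per reported MHz across runs; a sketch using approximate figures from the posts above (mixing overall and GPU scores purely for illustration):

```python
def points_per_mhz(gpu_score: int, avg_clock_mhz: float) -> float:
    """Efficiency metric: benchmark score divided by average reported core clock.
    A run with a higher reported clock but much lower points/MHz hints that
    the chip may be clock stretching rather than actually running faster."""
    return gpu_score / avg_clock_mhz

# Approximate numbers from the discussion above:
healthy = points_per_mhz(23672, 2605)   # ~9.09 points per MHz
suspect = points_per_mhz(15711, 2695)   # ~5.83 points per MHz
print(f"{healthy:.2f} vs {suspect:.2f}")
```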


----------



## Trevbev

It’s an xtxh sapphire toxic extreme


99belle99 said:


> Before you were having problems you were getting great scores is that a XTXH model?
> 
> This is my best GPU score. Reference model with stock cooler: I scored 20 172 in Time Spy
> 
> This is my best overall score due to CPU scoring higher: I scored 20 223 in Time Spy


----------



## lestatdk

Your memory speed is 100MHz lower, and your average temp is also 13°C higher.


----------



## 99belle99

I was just browsing through 3DMark's scores and saw that I have the third-best CPU score in the world for a 3700X. I have no idea how I managed that score, as I can get nowhere near it now. I was benching two nights ago and kept getting bad CPU scores.


----------



## J7SC

99belle99 said:


> I was just browsing through 3dmarks scores and seen I have the third best CPU score in the world for a 3700X. I also have no idea how I managed that score as I can get no where near it now. I was benching two nights ago and kept getting bad CPU scores.
> 
> View attachment 2572633


...nice! I hadn't even checked that database, but I have two in the top 10 for my config (totally non-optimized re: CPU, btw). With fall/winter coming, I might revisit with a bench session or two.


----------



## Trevbev

lestatdk said:


> Your memory speed is 100 MHz lower, your average temp is also 13C higher


True, but I don't think it explains how much lower it is. The CPU is also not OC'd.


----------



## lestatdk

Maybe try and log temperatures. Your average core is almost the same, your memory is lower, and your temp is higher. Something is going on there.


----------



## Trevbev

99belle99 said:


> @Trevbev That happens me from time to time but it is due to me locking frame rate to 120Hz as my 4 k Tv is 120Hz so when playing games I lock it and sometimes forget when I run a bench but I know what it is straight away.


Thanks.
Looks like this is what's happened. I don't remember limiting the frame rate, though. I'll have to find where I did.


----------



## Azazil1190

Does anyone know how I can open the Strix LC to repaste with liquid metal?
I can't figure out which screws are the right ones.
It's different from my Toxic Extreme.


----------



## Azazil1190

deadfelllow said:


> This is after 5 mins of Furmark. 470W core power. 550W total. Istanbul is still hot tho. Temp outside is 29C, Room 25-26C.
> 
> View attachment 2572196


Amazing temps!


----------



## deadfelllow

Azazil1190 said:


> Amazing temps!


Thanks bro! I can't push more than this because of my PSU; it's 750W. Any suggestions for 550-600W PPT MPT settings?

Also, when I tick TEMP_DEPENDENT_VMIN it does not work. My voltage swings like hell.









I scored 20 219 in Time Spy


AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com













I scored 20 092 in Time Spy


AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## ericc64

Hello... I'm new here with an ASRock 6900XT Formula on water. The Bykski block is not very good, but it works. Thermal Grizzly Conductonaut on the core + better pads on the VRAM + VRM.

In FH5, 24/7 settings, stock voltage, stock power limit (+15%):

2870MHz core / 2138MHz RAM
28C core / 36C hotspot / 34C VRM / 32C VRAM

Cooling: 2x Mora 420 in the basement. Water temp 16C.

Best score so far in 3DMark Time Spy: 26677... LC BIOS + modified power limit.








I scored 26 677 in Time Spy


Intel Core i9-12900K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





I'm waiting for the new 7950X, and later I'll push the 6900XT some more.


----------



## deadfelllow

ericc64 said:


> Helo .. I new here witch a Asrock 6900XT Formula on water  The Bykski block is not so very good, but it works. Thermal Grizzly Conductonaut on core + better pads on VRAM + VRM.
> 
> In FH5 , 24/7 settings , stock voltage, stock power limit ( + 15% )
> 
> 2870Mhz core / 2138Mhz ram
> 28C core / 36C hotspot / 34C VRM / 32C VRAM
> 
> Cooling 2x Mora 420 in basement. Water temp 16C.
> 
> Best score so far in 3Dmark Time Spy - 26677 .... LC bios + modified power Limit.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 26 677 in Time Spy
> 
> 
> Intel Core i9-12900K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> I wait for the new 7950X and later a push more in the 6900XT
> View attachment 2572744


Could you please share your MPT settings with voltages?


----------



## Trevbev

Awesome 


ericc64 said:


> Helo .. I new here witch a Asrock 6900XT Formula on water  The Bykski block is not so very good, but it works. Thermal Grizzly Conductonaut on core + better pads on VRAM + VRM.
> 
> In FH5 , 24/7 settings , stock voltage, stock power limit ( + 15% )
> 
> 2870Mhz core / 2138Mhz ram
> 28C core / 36C hotspot / 34C VRM / 32C VRAM
> 
> Cooling 2x Mora 420 in basement. Water temp 16C.
> 
> Best score so far in 3Dmark Time Spy - 26677 .... LC bios + modified power Limit.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 26 677 in Time Spy
> 
> 
> Intel Core i9-12900K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> I wait for the new 7950X and later a push more in the 6900XT
> View attachment 2572744


Awesome setup. Have you made any videos?


----------



## ericc64

deadfelllow said:


> Could you please share MPT settings with voltages please?


These are the settings for the 26677 score.


Trevbev said:


> Awesome
> 
> Awesome setup have made any videos ?


Thanks... I have no videos. It's nothing special: an XTXH card + water block + cold water. The secret is just maximal cooling of the card + increased power limit + increased voltage in MPT + the LC BIOS (when it's an XTXH chip). After that, a 27K+ graphics score is possible in Time Spy.


----------



## Sufferage

ericc64 said:


> This is setting witch the 26677 score.
> Thanks... I have no videos  Its nothing special , XTXH card + water blok + cold water  the secret is just maximal cooling the card + increase power limit + increase voltage in MPT + LC bios ( when is a XTXH cip ) . After that is posible 27K+ in graphics score in Time spy.
> 
> 
> View attachment 2572765
> View attachment 2572770


Love the MORAs in your basement; wish I had that possibility 🆒


----------



## alexp247365

NiteNinja said:


> Good evening.
> 
> I just got a Gigabyte Waterforce 6900XT. I'm having some interesting issues with it, and here is what I've been spending most of the night on.
> 
> It sometimes works for some games, but crashes immediately or black screens on others. Furmark can run it full tilt with no issues, temps are all good, and everything is stable, stock and overclocked. Some games like Doom 2016 and Genshin Impact run fine. But others like Fallout 76 or Doom Eternal just crash out. 3DMark won't start Time Spy or Fire Strike either; it just kicks me out and says an error occurred. Quake II RTX freezes too.
> 
> I was in Windows 11 first, I DDU and installed both the WQHL and Optional drivers, no luck. I rolled my system back to Windows 10, and no luck. And now I'm trying older drivers, with no luck either. I've even seen some people disconnect their internet, go into safe mode, DDU, install the new driver and go, but that doesn't work either. I've tried underclocking, undervolting, everything.
> 
> I'm using an EVGA 1,000W Platinum power supply, with a 5900X CPU, 32gb of HP V10 3200mhz CL14 memory (Samsung B-Die), on an ASRock X470 Gaming K4. I've installed and reinstalled all the chipset drivers and whatnot too, BIOS on the MB is updated, VBIOS is re-flashed to factory just in case it was a miner card (It was bought open box new condition, still had the plastic peel on it). Most my system pulls is about 840w from the readout on my UPS, but it crashes on these games no matter how much or little power it pulls.
> 
> I'm chalking it up to bad drivers, but I really hope that something on the card itself isn't bad either. I know different workloads can hit the core differently, and I can't seem to find anything in common. Doom Eternal doesn't work in DirectX or Vulkan.
> 
> Any help or advice will be highly appreciated and thanks in advance!
> 
> Edit: Couple more things I've done. I relocated the GPU from vertical mount to horizontal standard position, both PCI-E slots. Rolled all the way back to even 22.2.4. Disabled every overlay possible. GTA5 seems to work fine. I bought this card for VR but haven't tested any VR titles yet.


I went through 3 power supplies before I found one that worked for my card, all 1200W+. The Super Flower Leadex Gold 1300W eventually gave me no issues (2 Platinums did not work).


----------



## Nechana

colourcode said:


> Can someone explain this to me...
> I have the 6900xt Nitro+ SE. (NITRO+ AMD Radeon RX 6900 XT SE 16G GDDR6 (sapphiretech.com) )
> 
> Supposedly it's running at these clocks:
> 
> GPU: Boost Clock: Up to 2365 MHz
> GPU: Game Clock: Up to 2135 MHz
> 
> However when I run it without any settings changed it boosts around 2500mhz when gaming - and dips constantly due to overheating (my guess).
> In the AMD tuning page the default value for max clock is set at ~2527 or so.
> 
> I'm running it undervolted and set the max freq to 2365 which seems to work, and instead of running itself into the ground it's steady around 80 degrees.
> Why is it running so much higher at stock?
> 
> Another question - did anyone change the paste on this model?
> It looks like the parts with cooling pads are separate from the main cooler, but I haven't really found any definitive information.



Hi, I have the same card and it behaves exactly the same. I lowered the MHz in Wattman too, but I set it lower, to 2265MHz; the dips are not so frequent now. I think it's a driver problem, and I hope for new WHQL drivers. I use 22.5.1 for now because they are the most stable.
I repasted my card and temps are now max 85 with a 95 hotspot. I only applied new paste, because there are very thick thermal pads (0.5 cm maybe) plus many smaller ones.


----------



## Nechana

sry.fat.irl said:


> Here ya go... you can DL it from my Gdrive link: the PCIe drivers. Just read the included .txt file for directions.

> AMDGPIO_PciBus.rar

> I mean, right now I have the 22.7.1 drivers with the Vulkan and OpenGL updates. Haven't had any major issues.



Hi, I tried updating this PciBus driver, but after a restart it always goes back to 21.50.0.1 from 14.12.2021.


----------



## MotomEniac

Nechana said:


> Hi i tried update this PciBus driver but after restart is allways back to 21.50.0.1 from 14.12.2021


That was also happening for me. I solved it this way: find the PCI bus device in Device Manager, select Update Driver -> browse from those available on this computer -> select PCI Express Root Complex. Then disable the network -> reboot -> install your desired PCI bus driver. Worked for me.


----------



## Nechana

MotomEniac said:


> That was also happening for me. I solved it next way find pci bus device in device manager, select update driver->select from available on this computer->select PCI Express root complex. Then disable network->reboot->install your desired pci bus driver. Worked for me


Thank you for the tip, but it doesn't work for me; it always reverts back to the original.


----------



## lestatdk

Nechana said:


> Thank you for tip but it doesnt work for me, it allways revert back to original.


I have the same problem; it reverts back to the previous version.


----------



## cfranko

Is there a way to make the voltage drop at idle while using TempDependentVmin? Currently, when I use TempDependentVmin, the card stays at max voltage even when idle.


----------



## deadfelllow

cfranko said:


> Is there a way to make the voltage drop when idle while using tempdependentvmin? Currently when I use TempDependentVmin, the card stays at max voltage even when idle


Check this out, chief.









[Official] AMD Radeon RX 6900 XT Owner's Club


Can some of you guys please post yours stock 3dmark scores along with what cpu was used? Much appreciated.




www.overclock.net


----------



## cfranko

deadfelllow said:


> Check this out, chief.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] AMD Radeon RX 6900 XT Owner's Club
> 
> 
> Can some of you guys please post yours stock 3dmark scores along with what cpu was used? Much appreciated.
> 
> 
> 
> 
> www.overclock.net


This doesn't work


----------



## MotomEniac

cfranko said:


> This doesn't work


That's strange; check that all of the DS flags are checked (left at default). Also, if you post some screenshots of your MPT and HWMon showing your voltage, it will help resolve your issue. It is working flawlessly for me.


----------



## cfranko

MotomEniac said:


> That's strange, check that all of the DS flags are checked(left at default). Also if you will put some screenshots of your MPT and also HWMon showing your voltage - it will help to resolve your issue. It is working flawless for me.


Since all the DS flags are already checked by default, I don't understand why you posted this method. Technically, since everything needed for the voltage to drop at idle is already checked by default, just using TempDependentVmin without changing anything else should already let the voltage drop at idle, given the other DS flags are on by default. So what exactly is the catch in your method? Am I missing something?


----------



## MotomEniac

cfranko said:


> Since all the DS flags are already checked by default, I didn’t understand why you posted this method. Technically since everything you need for the voltage to drop while idle is already checked as default, only using TempDependentVmin without changing anything additional needs to allow the voltage to drop while idle since the other DS flags are already checked by default, so what is exactly the catch in your method? Am I missing something while doing this.


The catch is that no one mentioned much about TEMP_DEPENDENT_VMIN, and everyone was referring to that Igor's Lab overclocking guide, where he wrote that by using that flag you will have to live with constant high voltages (probably because he used it with DS off). So it was a breakthrough for me that this pretty extreme feature can be used for 24/7 overclocking. That's why I posted that reference.


----------



## cfranko

MotomEniac said:


> The catch is no one mentioned much about temp_depandant_v_min, and everyone was referring to that Igor's lab guide of overclocking, where he wrote that by using that flag you will have to live with that constant high voltages(probably because he used it with DS off). So it was breakthrough for me, that pretty extreme feature can be used for 24\7 overclocking. Thats why I wrote that reference.


Enabling all the DS ticks doesn't work for me. I guess no TempDependentVmin for me.


----------



## MotomEniac

cfranko said:


> Enabling all the DS ticks doesn’t work for me, I guess no temp dependent v min for me


How are you checking your Vcore?


----------



## cfranko

MotomEniac said:


> How are you checking your Vcore?


Radeon Software Wattman.


----------



## energie80

No one here using a 6950 XT who can share some results?


----------



## DevilX

Hi guys,

I've been reading along a bit, but I'm not sure which BIOS is appropriate for my card. I have a 6900XT direct from AMD that has been modified for water cooling.


----------



## deadfelllow

energie80 said:


> no one here using 6950xt to share some results?


Check this channel.



https://www.youtube.com/user/TheMattB81


----------



## MotomEniac

cfranko said:


> radeon software wattman


I often saw incorrect readings in Wattman (I suppose it shows something like a CPU's VID voltage). I advise you to recheck with HWiNFO64, specifically the VDDCR reading.


----------



## 99belle99

DevilX said:


> Hi guys,
> 
> I've been reading along a bit, but I'm not sure which bios is appropriate for my card. I have a 6900xt direct from amd that has been modified to water cooling.


There is no bios available for that card. I have the same one. You can use MPT.


----------



## DevilX

Unfortunately, I only have Linux available. I think I'll have to get Windows until I've created a BIOS.


----------



## 8800GT

DevilX said:


> I have unfortunately only linux available. I think I have to get windows until I have created a bios.


These reference cards are notorious for bricking themselves when trying to flash any other BIOS. For peace of mind, I would have a backup BIOS chip on hand. Maybe it's better on Linux, but RBE isn't even a good option on Windows.


----------



## 99belle99

I wouldn't even flash a reference 6900 XT. The only people who BIOS flash are owners of XTXH cards with dual BIOS, and they only do it for the faster memory clock.

You cannot flash an XTXH BIOS to an XTX.

So forget about flashing a BIOS to the reference card; even if you did successfully flash another BIOS to it (which you cannot anyway), you would gain very little benefit.

The only dies that get faster cores are XTXH dies, and you cannot flash that BIOS to an XTX anyway.


----------



## cfranko

MotomEniac said:


> I was witnessing incorrect indications in Wattman often(I suppose it shows aka VID voltage for CPU), I advice you to recheck with HWInfo64, specifically VDDCR one.
> View attachment 2573158


HWiNFO does actually show the voltage dropping low at idle, just like you said. I don't know why Wattman shows it at max voltage.


----------



## energie80

I'm getting better FPS in 1440p competitive play.


----------



## J7SC

...haven't followed this thread much over the last few weeks, but have these two new apps been posted here?

> Radeon Monster / Yuri

> MCT / Igor's Lab

...not sure if I'll bother, because MPT (on the right below) has given plenty of extra oomph to my custom-PCB 'regular' 6900 XT (though the air cooler had to go, 3x 3800 rpm fans and all that...)


----------



## Blist66

Hello, it's been a long time since I posted!
But I see this place is still going strong!
I finally put my card on a loop (Red Devil Ultimate).
First thing I wanted to ask: should I flash the LC 6900 BIOS or the 6950 Red Devil one? I'm asking because I've seen that, since the Red Devil doesn't have a Type-C port, you might have some issues with the LC one.
On to the next one: now that I'm on water, thermals are much better. On air at 360W I was at 95-100C; now at 415W I didn't go above 80C, and without maxed-out fans. My question: when I crank up the watts, the clocks go higher but the scores get lower. What's the trick behind this? Should I tweak more MPT parameters besides the typical ones? I think I have enough thermal headroom to push, but it doesn't seem to bring results; I guess I'm missing something here! Or could it be just the silicon lottery?


----------



## jonRock1992

Blist66 said:


> Hello , been a long since posting!
> But I see place still goes strong!
> I put my card on a loop at last (red devil ultimate)
> First things wanted to ask is , shoulder i flash lc 6900 bios or 6950 red devil.askign because ive see that because red devil doesn't have type c port you might have some issues with lc one.
> On to the next one, now im on water thermals are much better on air with 360watts i was at 95-100c now on 415w i didnt to above 80 and without maxed out fans. my question is i crank up the watts and clocks go higher but the scores get lower whats the trick behind this should i tweak more mpt parameters except from the typical ones? I think i have enough thermal headroom to push but it doesn't seem it bring the results, i guess im missing something here! Or could it be just the silicon lottery?


I got better performance with the LC vBIOS on my Red Devil Ultimate. This is because FT2 VRAM timings work with the LC vBIOS and not the 6950 XT vBIOS. The LC vBIOS only works with certain motherboards, though; usually motherboards that are not ASUS.


----------



## Veii

jonRock1992 said:


> Usually motherboards that are not ASUS.


Gigabyte boards and the latest chipset drivers (the PCI driver, to be more correct) deny flashing in Windows and Linux.
ASUS board here and ASRock board here; SMU 56.69 (patch B) is flawless.


J7SC said:


> ...haven't followed this thread too much over the last few weeks, but have these two new apps been posted here ?
> 
> > Radeon Monster / Yuri
> 
> > MCT / Igor's Lab


Also, everyone have fun with RMP ~ and if my profile for XTXH & KXTX s*cks, please complain plenty with @ mentions on whatever platforms you find me on.
It's "decent", but it's free. It shines more in the highs, at near 26,000 points.


----------



## jonRock1992

Veii said:


> Gigabyte boards and latest Chipset drivers (PCI driver to be more correct) deny flashing in Windows and Linux ASUS Board here and ASRock board here. SMU 56.69 (patch B) is flawless. Also everyone have fun with RMP ~ and if my profile for XTXH & KXTX s*cks, please complain plenty with @ mentions on platforms you find me.  It's "decent" but it's free. Shines more in highs @ near 26 000p
> View attachment 2573739


Never said it wouldn't flash with ASUS boards. It will flash, but it won't post, because the RDU doesn't have a USB-C port and the LC vBIOS expects there to be one. ASUS boards report this error as USB Overcurrent Protection, and the board will refuse to post because of it.


----------



## Blist66

jonRock1992 said:


> I got better performance with the LC vbios on my Red Devil Ultimate. This is because FT2 vram timings work with the LC vbios and not the 6950 XT vbios. The LC vbios only works with certain motherboards though. Usually motherboards that are not ASUS.


Thanks for answering one of my 2 questions.

So I have a Gigabyte B550; it should be OK to flash the LC BIOS to my RDU, right?

Also, something else: since we have the same card, how far can you push it?

Mine is now on water. I can get around 24k in Time Spy with 1150mV and a 2750 clock, but I'm nowhere near thermal throttling (80C core) or extreme wattage. It seems I can't pull more than 410W even if I set MPT to 450 or so; I don't know why that is. As long as you don't throttle and you're able to pull more watts at max mV (1200), it should be fine.


----------



## Veii

jonRock1992 said:


> Never said it wouldn't flash with ASUS boards. It will fash, but it won't post because the RDU doesn't have a USB-C port and the LC vBIOS expects there to be one.


I know, but you are mistaken.
That is in the ROM, not in the EFI // it is a GOP thing.

And for the EFI part you are mistaken too, because it's gone as of said version number.
I know what I'm talking about, because the KXTX ROM spoofing and the bypass (+finding) of this issue come from me.


----------



## jonRock1992

Blist66 said:


> Thanks for answering one of my 2 questions.
> 
> So i have a gigabyte b550 it.should be ok to flash the lc bios right to my RDU?
> 
> Also something else since we got same card how far can you push it?
> 
> Mine now is on water i can get around 24k timespy with 1150mv and 2750clk but im nowhere near thermal throttle (80c core) or extreme wattage, seems like i cant pull more than 410watts even if i set MPT at 450 or so.dont know why is that though as long as you don't throttle and you are able to pull more watts at max mv(1200) it should be fine.


I got 26.15K at one point in time with the stock vbios:








I scored 22 891 in Time Spy


AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com




I haven't tried to bench Time Spy with the LC vBIOS, though. The settings I used are in the description of the bench at the link above.


----------



## jonRock1992

Veii said:


> I know but you are mistaken
> That is in ROM not in EFI // it is a GOP thing
> 
> And for the EFI part you are mistaken too, because it's gone on said version number
> I know what i speak about, because KXTX rom spoofing and bypass (finding) of this issue comes from me


Idk, but I discovered this issue a long ass time ago. I was using an ASUS Dark Hero at the time. I had to change to an MSI motherboard for my RDU 6900 XTX-H to post with the LC vBIOS.


----------



## Veii

jonRock1992 said:


> I was using an ASUS Dark Hero at the time. I had to change to an MSI motherboard for my RDU 6900 XTX-H to post with the LC vBIOS.


The ASUS board is in front of me,
but the Gigabyte issue, I think, still remains.

It's in two parts:
the chipset driver, the PCIe driver (not the GPU one, but paired with it),
and AGESA.

Then the overcurrent trigger itself is up to the PCB design (on the card)
and the BIOS.
OC Formula BIOSes don't have this ~ you can spoof the KXTX BIOS, but the memory unlock in the SMU needs to happen one way or another.
Optimally on the 071 vBIOS, if LCREF doesn't boot.


----------



## tsamolotoff

Hello peeps, I've read the whole thread (finally!) and put my 6900 XT PG (XTX, unfortunately) under water. All is fine, I guess; it can do 2800 at 1.25V set in vdepmin (it drops to 1.15V or even less under a proper load) in TS/TSE/FS as well as all the other heavy loads (I scored 22 561 in Time Spy). I wonder if there's anything I can do to get more out of the card, apart from going higher with the voltage. That's probably fine for benching (hotspot under 60C in a typical TS run if I open the window; it's 27C+ in my room, heating was turned on early in Moscow, brrr), but I intend to leave the current settings for daily use; I've tested them in Control / AMID Evil (RTX mode) for hours and they seem to be fine.

Maybe there's a way to increase the memclock limit? I've tested memory scaling in TS/TSE and it seems to scale upwards all the way to the 2150 cap in Wattman. Setting FCLK to 2133 caused an instant reboot on air; I haven't checked it on water, but it probably won't work anyway.


----------



## Blist66

Veii said:


> ASUS board is infront of me
> But the gigabyte issue i think still remains
> 
> It's on two parts
> Chipset driver, PCIe driver (not gpu, but paired with)
> and AGESA
> 
> Then Overcurrent trigger itself is up to PCB design (on card)
> and in bios.
> OCFormula bioses dont have this ~ you can spoof KXTX bios , but memory unlock in SMU needs to happen one way or another
> Optimally on 071 vbios if LCREF doesnt boot


So I won't be able to flash my RDU in a Gigabyte board and make it work?


----------



## Blist66

jonRock1992 said:


> I got 26.15K at one point in time with the stock vbios:
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 22 891 in Time Spy
> 
> 
> AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> I haven't tried to bench Timespy with the LC vbios though. The settings that I used are in the description for the bench in the above link.


That's a pretty high score. Can you share some MPT and Wattman info for those settings?


----------



## tsamolotoff

lestatdk said:


> I don't get how it can average almost 2700 with that score ?
> 
> My best is with 2605 average and I got 23672 GPU score
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 20 958 in Time Spy
> 
> 
> AMD Ryzen 7 5800X3D, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


If you have more than one monitor enabled (I have three, for example), the TS score tanks. I got an extra 600 points just by disabling the other two monitors (I guess the frame buffer takes precedence over everything else, and that's what effectively cuts the Infinity Cache by a substantial number of megabytes).


----------



## energie80

Guys, any MPT guide for the reference 6950 XT? Thanks.


----------



## earphonelnwshop

OC for everyday use with the 6900XT OCF:
OC 2880MHz / mem 2400MHz / power limit 600W / custom water loop / modded thermal pads
Time Spy stress test passed









Let's rock


----------



## MotomEniac

earphonelnwshop said:


> OC for use everydays with 6900XT OCF
> OC 2880mhz / mem2400mhz / Power limit 600w / Custom water loop/ Mod themal pad
> Time spy stress test passesd


Whoa, that's powerful. What card is it?


----------



## earphonelnwshop

MotomEniac said:


> Whow that's powerful. What card is it?


ASROCK 6900XT OC Formula


----------



## deadfelllow

earphonelnwshop said:


> ASROCK 6900XT OC Formula


Can you please provide load temps? I'm wondering about total card power and hotspot °C.


----------



## earphonelnwshop

deadfelllow said:


> Can you please provide LOAD temps? I wonder total card power & hotspot C.


You can see it in the video I posted.
The temperature is displayed in the 3DMark benchmark.

Ambient temperature is about 25 degrees.
Full-load temperature is about 45-59 degrees Celsius.


----------



## Azazil1190

Hey guys, how are you?
Quick question:
how can I undervolt my card via MPT?
I want to test it at 2530/2630 and 1120/1125mV.
I only know how to OC, not how to UV.


----------



## deadfelllow

Azazil1190 said:


> Hey guys how are you?
> Quick question
> How i can uv my card via mptool .
> I want to test it at 2530/2630 1120/1125v
> I know only to oc not to uv


Tick TEMP_DEPENDENT_VMIN and set the voltage min/max to 1125?


----------



## Azazil1190

deadfelllow said:


> tick TEMP_DEPENDENT_VMIN and set voltage min/max 1125?


Love you, mate! Thanks a lot.

I'm gonna give it a try.


----------



## sry.fat.irl

I had to reinstall the AMD software, and now GPU-Z, Wattman, and MPT read my GPU as a 6950 XT despite it being the Sapphire Toxic air-cooled 6900 XT... Is it using the vBIOS version as the reference to pull the info, or what? I already shot an inquiry to Sapphire because I don't want it to have a seizure. Kind of wondering how many people have this card too. Look at the pic.
View attachment 2574422


----------



## Azazil1190

So far my card is stable UV/OC at 1125mV, 2645/2745, 2150 fast timings, and a max PL of about 383-387W.
Thanks to deadfelllow for the MPT tip.


----------



## nordskov

earphonelnwshop said:


> OC for use everydays with 6900XT OCF
> OC 2880mhz / mem2400mhz / Power limit 600w / Custom water loop/ Mod themal pad
> Time spy stress test passesd
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Let's rock s
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> View attachment 2574169
> View attachment 2574170
> View attachment 2574171







Weird. I thought it would do better with those fast RAM speeds.
Here's my 6900 XT Toxic Extreme: only 2872 boost, 1.268V, 480W limit +15% PL in the driver, and 2140MHz fast timings.
Everyday gaming stable (using the same settings to game stably at 2900+ MHz).


----------



## supergt99

Does anyone run TEMP_DEPENDENT_VMIN 24/7, or for gaming use?


----------



## jonRock1992

supergt99 said:


> does anyone run temp_depend vmin 24/7? or for gaming use?


Yes, I have been for months. No issues.


----------



## lestatdk

jonRock1992 said:


> yes. i have been for months. no issues.


Do you mind sharing your settings, please ?


----------



## nordskov

supergt99 said:


> does anyone run temp_depend vmin 24/7? or for gaming use?


Yes. I've run it every day for the last year or so, using the picture above as my daily. And it's perfect for gaming at 2900MHz 👌


----------



## tsamolotoff

What's interesting is that if you set clocks below 2500, the chip will downvolt automatically at idle with vdepmin, but it doesn't do that unless you restart the driver (via CRU or some other means).


----------



## darkchy

Azazil1190 said:


> So far my card is stable uv/oc at 1125v 2645/2745 2150 fast and max pl about 383-387w
> Thnx to deadfelllow about the tip at mptool
> View attachment 2574452



Hi, can you please share your whole config for MPT and Radeon Software?
Is your card air-cooled?


I'm looking for a good OC/UV for my XFX Merc 319 Black Limited 6900XT.


----------



## lestatdk

nordskov said:


> Yes. I run it everyday for last year or so. Using above picture as Daily. And its perfect for gaming at 2900mhz 👌


MPT settings, please?


----------



## Blameless

earphonelnwshop said:


> OC for everyday use with the 6900XT OCF
> OC 2880 MHz core / 2400 MHz mem / power limit 600 W / custom water loop / modded thermal pads
> Time Spy stress test passed
> 
> Let's rock
> 
> View attachment 2574169
> View attachment 2574170
> View attachment 2574171


I got a real lemon of a 6900XT OCF. Mine wouldn't even do 2100 on the memory stable, and would crash in Night Raid past 2.55GHz on the core. Was barely faster than my 6800 XT once both were tuned.

Sold it the other day for 600 USD.


----------



## nordskov

lestatdk said:


> MPT settings , please ?


1.250 V, 480 W, +15% PL, GFX and SoC at 380 A.
Driver set to max 2872, minimum 2426.
Fast-timing RAM at 2140 MHz, fan speed always on at max 31%. AMD SAM on.


----------



## PJVol

tsamolotoff said:


> What's interesting is that if you set clocks below 2500, then the chip will downvolt automatically at idle with vdepmin, but it doesn't do that unless you restart the driver (via CRU or some other means)


What do you mean, "doesn't downvolt"? Below is a sensors snapshot at idle; GfxClk max is 2700 MHz and tdvmin is active:



deadfelllow said:


> tick TEMP_DEPENDENT_VMIN and set voltage min/max 1125?


Why are you using temp-dependent Vmin to undervolt?
Tuning the vddgfx offset (along with the PL and Fmax) seems to make more sense than exploiting TDVmin driver bugs, not to mention directly setting Gfx Vmax in the "Voltage Vmin/Vmax (mV)" fields without touching any features.


----------



## alceryes

Something I thought about recently, for all those who like to overclock and push components to the edge. 
Note that this usually won't matter for normal usage or non-overclocked/overvolted systems.

When overclocking and bumping up the power draw for GPUs and CPUs to extreme levels, you need to make sure your power cabling isn't letting you down.

For GPUs, even if you have three 8-pin cables, they should ALL be run separately back to the PSU to give your GPU the best possible power with the least stress on cables and ports. Each 8-pin GPU cable is rated at a max of 150W, each 6-pin cable at a max of 75W, and the PCIe slot at a max of 75W. Higher-wattage/quality PSUs usually come with better cables that CAN carry more than they're spec'd for, but this puts unneeded stress on the cables and the PSU's power ports, which can lead to power delivery that isn't quite as clean as it could be with a separate 8 or 6-pin aux cable for each port on the GPU. In the worst case, you'll experience phantom instability issues that are difficult to trace. In short: no daisy-chaining of GPU aux power cables.

Another point is aesthetics, and the fact that aesthetics can be detrimental to performance. We've all seen those gorgeous builds with all the cables beautifully bent and hidden behind bezels and cable shrouds. This is all well and good for signaling (data) cables or low-wattage power cables for SATA devices, fans, pumps, etc. However, when you've got over 75W of continuous power going through a cable, the connection points and cable angles DO MATTER. Bending a hard 90° angle into a GPU or CPU aux power cable puts extra stress on the cable itself and on the connection points in the receptacle, and may even affect how clean the power is. If you're going for the bleeding edge in performance and want to give your system the best possible power delivery, do yourself a favor and loop your cables gently instead of bending them at hard angles right out of the connectors. The same is true for the ATX power cables to the motherboard (both the 24-pin and the 4 or 8-pin aux). Keeping the cables 'loopy' instead of hard-bent will give you better, cleaner power delivery.
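To put rough numbers on the cabling advice above, here's a small Python sketch (my own illustration, not an official calculator) that totals the in-spec continuous power budget from the PCIe slot and aux connectors:

```python
# In-spec continuous power limits, per the figures quoted above:
# 75 W from the x16 slot, 75 W per 6-pin, 150 W per 8-pin.
PCIE_SLOT_W = 75
SIX_PIN_W = 75
EIGHT_PIN_W = 150

def board_power_budget(six_pin=0, eight_pin=0):
    """Total in-spec continuous power budget in watts for a GPU with the
    given number of aux connectors (each run separately to the PSU)."""
    return PCIE_SLOT_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

# A triple 8-pin card: 75 + 3 * 150 = 525 W in-spec.
print(board_power_budget(eight_pin=3))  # 525
```

Which is also why the 600 W MPT limits people run on triple 8-pin boards are already leaning on cable headroom beyond spec, and why a daisy-chained cable (two plugs sharing one PSU-side connection) is the first thing to eliminate.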


----------



## MotomEniac

supergt99 said:


> does anyone run temp_depend vmin 24/7? or for gaming use?


I am, so far 3 months - all good. 1.275V in MPT


----------



## nordskov

MotomEniac said:


> I am, so far 3 months - all good. 1.275V in MPT


How high a boost clock are you running at that voltage? At 1.250 V I run games at 2900-2910 MHz and get a Time Spy GPU score of 25600.


----------



## MotomEniac

nordskov said:


> How High boost clock are you running with Those voltage ? At 1.250v i run games at 2900-2910mhz. At 1.250 i get timespy gpu score of 25600


My card is pretty leaky (regular XTX), so at that voltage I can run 2775-2850 MHz core, depending on the intensity of the game. In Time Spy I'm hitting the thermal capacity of my cooling around 1.25 V, with a result of 24400 in TS.


----------



## nordskov

MotomEniac said:


> My card is pretty leaky(regular XTX), so with that voltage I can run, depending on intesity of game, 2775-2850Mhz core. In time spy I'm hitting thermal capacity of my cooling around 1.25V with result of 24400 in TS


Oh damn, I hit 24300 with stock voltage, but to hit 25000 I need 1.237 V, and 25500+ takes 1.250 V. My best Fire Strike GPU score is around 73000+ with a 2980 MHz average boost and 1.287 V.
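Out of curiosity, those score/voltage pairs allow some quick back-of-the-envelope math on how many Time Spy points each extra millivolt buys. A sketch using the numbers from the post; the 1.175 V "stock" figure is my own assumption (the post only says "stock voltage"):

```python
# Time Spy GPU score vs. core voltage, from the post above.
# (1.175 V for "stock" is an assumed value, not stated in the post.)
points = [(1.175, 24300), (1.237, 25000), (1.250, 25500)]

# Marginal score gain per extra millivolt between consecutive settings.
for (v0, s0), (v1, s1) in zip(points, points[1:]):
    gain = (s1 - s0) / ((v1 - v0) * 1000)
    print(f"{v0:.3f} V -> {v1:.3f} V: {gain:.1f} points/mV")
```

Interestingly, on these numbers the last 13 mV is worth more per millivolt than the first jump, presumably because the card is power/clock-limited at stock rather than voltage-limited.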


----------



## tsamolotoff

PJVol said:


> What did you mean "doesn't downvolt" ? Below is a sensors' snapshot at idle, GfxClk max is 2700mhz and tdvmin active:


For me it doesn't really downvolt unless the driver is restarted (or maybe it's random).



nordskov said:


> How High boost clock are you running with Those voltage ?


I've been running the card at 1.243 V set; it gets to 1.16-1.19 V under load. I get around 24.4-24.6k GS in TS with clocks around 2780/2150 (set to 2670/2830 and 2150 fast timings, FCLK 2066 (the PC reboots instantly if set to 2133), SoC 1380). I think XTXH cards are intrinsically faster even at the same clocks.


----------



## deadfelllow

Almost 21K on Superposition 4K. [email protected] 2960-3060 Clock - 2420Mem FT1 - KXTX ROM Sapphire LE.


----------



## supergt99

Where's a good place to get into watercooling? It's been years for me (Sandy Bridge era). I may run my GPU another year and would like to watercool it.


----------



## deadfelllow

3050Mhz on FC6 but my shading units are crying tears.


----------



## nordskov

deadfelllow said:


> 3050Mhz on FC6 but my shading units are crying tears.
> 
> View attachment 2575079


Can you get it stable with no errors and game at 3 GHz?  I'm almost able to break into 3 GHz; I ran CoD MW stable with fine temps at 2950 MHz. I'll have to give it a go one of these days to see if I can do it. ATM I'm satisfied with 2900+ MHz stable everyday gaming though.


----------



## deadfelllow

nordskov said:


> can u get it stable no errors and game @ 3ghz?  im almost able to break into the 3ghz, i ran cod mw stable fine temps at 2950mhz, gotta give it a go one of the days to see if i can do it .. atm im satisfied by 2900+ mhz stable everyday gaming though


It depends on the game. For instance, I can run almost 3090 MHz without crashes or visual glitches in BF V.


----------



## tsamolotoff

The highest GPU clock I could get in SP was around 2900 MHz.


----------



## KingCry

So I finally got my 6950 XT and am working through the undervolting and overclocking stuff. It's very confusing compared to what I was used to, and I hear these cards are super sensitive to temps when it comes to performance.


----------



## deadfelllow

Guys we're getting destroyed by 4090's (((

3DMark Time Spy Graphic Score 1x GPU


----------



## damric

KingCry said:


> So I finally got my 6950XT, undervolting and overclocking stuff. It's very confusing from what I was normally used to. Since I hear they are super sensitive to temps in regards to performance.


Been finding the same thing with a reference 6900 XT. The tuning is even more complex than on Vega cards. With a significant undervolt I was able to raise the frequency sliders to around 2400/2550. I pushed the power limit slider to max, which lets the GPU run at 293 W. But seeing the hotspot temp, I won't be doing the More Power mod until I get the card under water, since that hotspot is already 95-105 °C while benching.


----------



## 99belle99

deadfelllow said:


> Guys we're getting destroyed by 4090's (((
> 
> 3DMark Time Spy Graphic Score 1x GPU
> View attachment 2575370


Wait a few weeks to see what AMD are cooking up.


----------



## KingCry

damric said:


> Been finding the same thing with a reference card 6900xt. The tuning is even more complex than Vega cards. With a significant undervolt I was able to raise the frequency sliders to like 2400/2550. I pushed the power limit slider to max which lets the GPU run at 293w. But from seeing the hotspot temp, I won't be doing the More Power mod until I get the card under water since that edge temp is already 95-105C while benching.


No, I mean I don't understand how to really mess with it. I got super comfy with 10-series overclocking; now I'm in uncharted territory with my Devil 6950 XT.


----------



## energie80

99belle99 said:


> Wait a few weeks to see what AMD are cooking up.


best 6950xt score around on time spy?


----------



## 99belle99

energie80 said:


> best 6950xt score around on time spy?


No, the new Navi 3x / RX 7000 series GPUs.


----------



## energie80

i mean what is the best 6950xt time spy score around atm?


----------



## 99belle99

energie80 said:


> i mean what is the best 6950xt time spy score around atm?


This is the best graphics score for the 6950 XT

I scored 25 814 in Time Spy


----------



## St0RM53

Hello guys. I have 2x reference 6900 XTs and I want to find out which one has the better silicon. Which tests do you recommend for that? I've already taken one up to 380 W in 3DMark before.

Also, from what I see, the voltage limit has now been unlocked even for base 6900 XT models? What MPT settings and driver version should I use to unlock past 1.175 V?


----------



## deadfelllow

energie80 said:


> i mean what is the best 6950xt time spy score around atm?


6950 XTs aren't as good at high clock frequencies as 6900 XTXHs. The only things better than a regular XTXH are the memory speed, timings, and IF improvements.

The XTXH ranking will probably be:

1. Sapphire 6900XT EE
2? XFX Speedster Zero
3? Asus 6900XT LC


----------



## jonRock1992

energie80 said:


> i mean what is the best 6950xt time spy score around atm?


28356 Timespy graphics score for a single 6900 XT.


----------



## RadeonOwner93

Hi all, I got an RX 6900 XT. When I go through my AVR (Denon AVR-X1700H), the driver tells me a VRR-compatible device is available.
I can then activate VRR in the driver, but VRR is still not active. If I connect directly to my FreeSync-compatible TV, I can activate FreeSync as usual and it works.
Is there a way to force VRR in the driver via the AVR?


----------



## damric

KingCry said:


> No like I don't understand how to really mess with it, I'm got super comfy with 10xx series over clocking now I'm in uncharted territory with my Devil 6950 XT


OK, well, just know that these 6900 XTs are constrained by power limits and also by temperature. There's a way to bypass the power limit, but that's only useful if you can keep the card cool. Your Red Devil should have an enhanced power limit and cooling, so you should be in good shape.
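The intuition behind undervolting first and then raising clocks is that dynamic power scales roughly with V²·f. A rough sketch of why a small undervolt buys so much power headroom; this is the textbook first-order CMOS model, not anything AMD-specific, and the example numbers are illustrative:

```python
def relative_dynamic_power(v, f, v_ref, f_ref):
    """First-order CMOS dynamic power model: P ~ C * V^2 * f.
    Returns power relative to a reference voltage/clock pair."""
    return (v / v_ref) ** 2 * (f / f_ref)

# Illustrative example: dropping 1.175 V -> 1.100 V at the same 2500 MHz
# gives roughly a 12% cut in dynamic power, headroom that the boost
# algorithm can then spend on higher clocks within the same power limit.
p = relative_dynamic_power(1.100, 2500, 1.175, 2500)
print(f"{1 - p:.1%} less dynamic power")
```

This is only the dynamic component; leakage (which people here call a "leaky" chip) scales differently and rises steeply with temperature, which is part of why cooler cards draw measurably less power at the same settings.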


----------



## KingCry

damric said:


> Ok well just know that these 6900xt are constrained by power limits and also temperature. There's a way to bypass the power limit, but that's only useful if you can keep it cool. Your red devil should have enhanced power limit and cooling so you should be in good shape.


I just did a Time Spy run; peak temp was 58 °C on the core and the junction was at 63 °C. Idk if that's good or not.


----------



## damric

KingCry said:


> I just did a time spy, had peak temp of 58C on the core Junction was at 63C. Idk if that's good or not.


That's really good.


----------



## drnilly007

Anyone else have an issue where the driver won't install and just gets stuck on checking hardware? I DDU'd the drivers and still nothing; tried installing in safe mode and it said error 192.


----------



## tsamolotoff

Not really


KingCry said:


> Idk if that's good or not.


Boost clocks don't depend on temperature as much as on Nvidia (which starts to downclock at 45 °C edge temp). I haven't noticed any difference in stability/clocking behaviour even when the water temperature rises by 10 °C after a few sessions of Time Spy or a few hours of gaming. What can gain you more (on water; not sure about air cooling) is the fact that power draw decreases significantly (like 10-30 W) if you keep the chip cool enough. Not sure whether this effect is significant on air; you won't be able to reach typical watercooling temps (30-40 °C edge) anyway.


----------



## Scimitar4211

Just got the Gigabyte RX 6900 XT Gaming OC about a month ago, and I was wondering whether the thermal pads will tear if I disassemble the card. I want to paint the shroud and backplate to match my setup. Comments are appreciated.


----------



## drnilly007

Scimitar4211 said:


> Just got the Gigabyte RX 6900 XT Gaming OC about a month ago and I was wondering if thermal pads will tear if I disassemble the card? I want to paint the shroud and back plate to match my setup. Comments are appreciated


Gigabyte has been known to use some of the worst thermal pads, so buy some and replace them anyway.


----------



## muhasdas

Hey guys. I've been reading this topic for like 6 months now and have been trying different OC/UV settings for my XFX 6900 XT Merc 319 Black. Can you please tell me if I'm doing something wrong? I'm a huge noob at GPU OC/UV. Before I got this 6900 XT I had an EVGA 1080 Ti and mounted a Kraken G12 on it myself; used it for like 3 years. Then a friend wanted to buy it, but with the stock air cooler, so I unmounted the Kraken and reverted it to stock. We tried and tested it, and it was all cool at the beginning, but I made a mistake (old habits) via MSI Afterburner and the card died hard. I did everything to fix it, but nothing worked. Anyway, this is my first AMD GPU, and it's really hard to dial in correctly. I've tried hard, medium, and low OCs, and nothing seems to work properly. Gaming is all cool, but while browsing the internet it sometimes gives me a grey screen with no warning at all, and then I need to reset the PC manually. After reboot an AMD Wattman crash report comes up and reverts my OC settings. That issue has been happening for the last couple of weeks; I'm guessing it's a driver issue. The other question is: is my GPU's VRAM broken or something? People run most 6900 XTs at 2150 MHz fast timings all the time, but I can never pass 2100 properly. Even at 2110 my screen glitches. Am I doing something wrong here? These are the specs of my PC:

Monitor: Odyssey G7 27"
Case: Phanteks P500a
MOBO: Gigabyte x570s Aorus Master
CPU: 5900x - Pbo2 +200/-28 secondary cores, -13 main cores
CPU Cooler: Arctic Liquid Freezer II 240 (No RGB, I hate RGB on any parts)
GPU: XFX 6900xt Merc 319 Black - 2500/2600 MHz - 2090 fast-timing RAM - +15% power - 1100 mV - stock BIOS **** *Getting grey screens while browsing lately :/ Need to restart manually. After reboot, Wattman resets my OC settings. I have to figure out that issue; it started happening a couple of weeks ago.*
RAM: 4x8 Patriot Viper Steelseries 4400 cl19 / [email protected] 1.5v
PSU: Seasonic 1000w gold
FANS: 3x Silent Wings 3 140mm front, intake. 1x P500a Original 140mm outtake, back of the case
SSDs: Crucial 1 TB P5 Plus CT1000P5PSSD8 M.2 PCI-Express 4.0 - Samsung 970 evo 500gb
HDD: 2tb Seagate

And this is my last stable OC; at least it passed the Time Spy tests. But as you can see, hotspot temps were almost 110 °C. Is it safe to use it like this? I'm only gaming, and sometimes rendering for work. I'm not a stat whore though, I just want to play that goddamn Warzone better. When I see 2600+ MHz on the cores, the game is smoother. Trust me, I can feel it.  Any comments from you guys are much, much appreciated; I'll do as you say. Thank you so much for your help. (Sorry for my English, btw.)


----------



## deadfelllow

muhasdas said:


> Hey guys. Iv been reading this topic for like 6 months now. [snip: full post quoted above]
> View attachment 2575970


Add me on Discord: deadfellow#7624


----------



## muhasdas

deadfelllow said:


> Add me on Discord: deadfellow#7624


Mate, I can't add you :/ I think you've turned off friend requests. Can you add me instead? MuHasdas#6605


----------



## jonRock1992

muhasdas said:


> Hey guys. Iv been reading this topic for like 6 months now. [snip: full post quoted above]
> View attachment 2575970


I remember getting grey screens when my CPU overclock was unstable. If the grey screens don't happen at stock cpu settings, then you'll know it's related to that.


----------



## muhasdas

jonRock1992 said:


> I remember getting grey screens when my CPU overclock was unstable. If the grey screens don't happen at stock cpu settings, then you'll know it's related to that.


Thank you for your reply Jon, I'll give it a try with stock cpu settings for a while. Will reply the results.


----------



## MotomEniac

muhasdas said:


> *Getting grey screen while browsing lately :/ Need to restard manually. After reboot, wattman resets my oc settings.. I have to figure it out that issue. Starts to happen last couple weeks..*


Same problem for me, and it also began a few weeks ago, around the time the 22.8.1 driver was released. I still can't find a cure. I've tried pretty much everything except a Windows reinstall with 22.7.1. If you have the possibility, I'd advise you to try that.


----------



## muhasdas

MotomEniac said:


> Same problem for me and also begin to happen few weeks ago, at about the time when 22.8.1 driver was released. I can't find a cure still. Did try pretty much everything except windows reinstall with installing 22.7.1. If you have this possibility I advice you to try it.


A fresh Windows install did not help at all. The driver conflicts with the VRAM, I guess, because when I reduce the VRAM OC from 2110 to 2080 MHz (fast timings) things get better; I haven't had any grey screens so far. That's interesting, though; I could run it at 2110-2120 MHz easily a couple of weeks ago. AMD just tries different things, I guess. I will try to downgrade the driver and will reply with the results.


----------



## MotomEniac

muhasdas said:


> Fresh windows install did not help at all. The driver conflicts the vrams I guess. Because when I reduce the vram oc like from 2110 to 2080mhz (fast timings) things get better. I dont get any grey screen so for. Thats so intresting tho.. I could use it 2110-2120mhz easily couple weeks ago.. AMD just try some different things I guess. I will try to downgrade the driver, will reply the results


Maybe it's a different problem, but in my case it happens with completely stock settings too...


----------



## PJVol

muhasdas said:


> Fresh windows install did not help


Are you using chrome for browsing by any chance?


----------



## muhasdas

PJVol said:


> Are you using chrome for browsing by any chance?


Yes, since like the first release.


----------



## alceryes

muhasdas said:


> Hey guys. Iv been reading this topic for like 6 months now. I try some dif oc co for my Xfx 6900xt merc 319 black. Can you please help me if I'm doing something wrong, because Im a huge noob about oc co gpu. Once, before I get this 6900xt, I had 1080ti (evga) and I did mount that gpu kraken g12 myself. Used it for like 3 years. After that, one my friend wanted to buy it, but he wanted it with stock air cooler, I did unmount the kraken, revert it back to stock. We tried, tested, it was all cool at the beggining, but I made a mistake (old habbits) via msi afterburner, so the card died hardly  did everything to fix it, but nothing worked. Anyways, its my first AMD gpu. Its really hard to use it correctly.. I do some oc, hard, med. low, nothing seems to work properly.. Gaming, all cool. Browsing the internet, sometimes its giving me grey screen with no warning at all and then I need to reset the pc manually. After reboot, AMD wattman crash report comes, reverts back my oc settings. That issue happens for like last couple weeks. Im guessing its a driver issue. The other question is; is my gpu's ram broken or something? People r using most of 6900 xts with 2150mhz fast timings all the time. I can never pass 2100 properly. Even its 2110, my screen glitches.. Am I doing something wrong here? These are the specs of my PC:


Does that card have thermal pads on the back of the PCB? The reference 6900 XT completely neglects the back of the PCB even though you've got a full aluminum shroud (heatsink) there. I did the easy PCB back thermal pad mod and lowered my HS temps a good amount.

Scroll down on this page - [Official] AMD Radeon RX 6900 XT Owner's Club


----------



## freddy85

When I change the max memory clock in MorePowerTool to 1100 and enable the 2200 MHz clock in AMD Radeon Software, the GPU clock gets locked at 500 MHz. Is there a fix for this?


----------



## muhasdas

alceryes said:


> Does that card have thermal pads on the back of the PCB? The reference 6900 XT completely neglects the back of the PCB even though you've got a full aluminum shroud (heatsink) there. I did the easy PCB back thermal pad mod and lowered my HS temps a good amount.
> 
> Scroll down on this page - [Official] AMD Radeon RX 6900 XT Owner's Club


Yes, I think there are thermal pads on the back of the pcb. I can see some when I look carefully. And this is the photo of the back plate I found on the web;


----------



## PJVol

muhasdas said:


> yes, for like since the first relaese


I'd stay away from Chrome with modern AMD GPUs. I've used Firefox for ~5 years and haven't experienced any black (grey, rosy, etc.) screens going from Vega 56 > 5700 XT > 6800 XT.


----------



## 99belle99

PJVol said:


> I'd stay away from chrome with modern amd gpus. Using firefox ~ 5 years didn't experience any black(grey, rosy, etc.) screens since Vega56>5700xt>6800xt.


I'm also using Chrome and have had AMD cards for years without issues. R9 290>Fury X>Vega 56>5700 XT>6900 XT.


----------



## muhasdas

99belle99 said:


> I'm also using Chrome and have had AMD cards for years without issues. R9 290>Fury X>Vega 56>5700 XT>6900 XT.


I used an R9 290X once and gifted it to my nephew ASAP; I hated the drivers so much. Since then, this is my first time using AMD without any issues, at least while gaming, after some real hard tuning work  The browser fights will never end till this world ends. I use Thor, Chrome, and Firefox all at the same time, but my main is Chrome. I've had no issues for the last couple of weeks; I realise this was not Chrome's ****, it was AMD's and mine after all. I have no idea why, but AMD forces me to decrease my damn VRAM OC from 2120 down to 2084 MHz fast timings. It's like 0.001% reduced performance, who cares? No one. I'm just angry they do **** without giving us a warning or some info. I'm stable so far, and I'll keep using Chrome as my primary btw  <3


----------



## tsamolotoff

Just disable hardware video decoding/encoding in chrome://flags and your life will get infinitely better. As you have a 5900X, you can even watch 8K 60 fps videos with no issues in Chrome after disabling HWA. What's funny is that 8K HWA decoding stutters in Chrome (or the Chromium-based Vivaldi that I use) while it's completely smooth in Firefox. Must be AMD who is to blame


----------



## alceryes

One thing to consider with these Chrome issues is that the card may not be entirely stable. If you can reset your clocks, voltages, and MPT settings (basically everything) to default and have no issues with Chrome, then I would say that Chrome is just finding some kind of weakness in the card's stability.

I second the Firefox suggestion, BTW. So much more secure and private for no perceptible performance loss. Been using it for many years.


----------



## Scimitar4211

drnilly007 said:


> Gigabyte been know to use some of the worst thermal pads so buy some and replace them anyways


Where do I find the size specs for the thermal pads? Unless you know them and can post them here. Again, comments are appreciated.


----------



## Blameless

freddy85 said:


> When i change max memory clock in MorePowerTool to 1100, and i enabale the 2200mhz clock in amd radeon software the gpu clock get locked at 500mhz. Is there a fix for this?


Don't change the memory clock in MPT. Most 6900 XT firmware is locked to 1075MHz maximum on the memory.


----------



## alceryes

Scimitar4211 said:


> Where do I find the size specs for thermal pads? unless you know it and can post here. Again, comments are appreciated


Unfortunately, this info may be difficult to get secondhand.

I used both 1mm and 3mm pads on the back of my reference 6900 XT. If you can easily return purchases, you could buy a few different sizes, carefully take your card apart, and eyeball the pads you have in place. That way you only open the packages you need and can return the rest. Some pads compress more than others; Gelid GP-Extreme pads are very good.


----------



## Elev8rSh0es

Hey guys, long-time member but new 6900 XT owner. I have the XFX Merc 319 6900 XT and love it. I'm a little miffed at the conservative Adrenalin software; they should let me blow up my hardware if I want to, lol. What are your clocks? And, without my reading all 8,000 posts, is there a way to give this thing more wattage? I seem to be stuck at 330 W and would really like to open this thing up; it can't even hit 2400 sometimes due to the wattage limit. Other than that, I have it overclocked to 2750 with no issues. However, besides the default AMD stress test built into the Adrenalin software, all my games (such as Cyberpunk) draw so much wattage that it ends up running significantly lower. Thank you guys for any help; may all your cards be golden.















And before I get everyone on the site to scream at me (lol jk) for the single eight pin the EVGA supernova 1600 watt Platinum is on its way


----------



## tsamolotoff

Just download MorePowerTool from Igor'sLAB and follow these instructions (auto-translate them if necessary):









[Guide] - Navi 21 Max Overclocking Tutorial [6800 XT / 69X0 XT]


Anyone who wants to know what their Navi 21 card can really do, but doesn't know how to go about it, is in the right place here. A typical case is this: card bought, and now it runs much slower than for the big boys on the Luxx forum. What to do? Table of contents 1. Time Spy: the (almost)...




www.hardwareluxx.de


----------



## Elev8rSh0es

tsamolotoff said:


> Just download MorePowerTool from Igorslab and follow this instruction (autotranslate it if necessary):
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Guide] - Navi 21 Max Overclocking Tutorial [6800 XT / 69X0 XT]
> 
> 
> If you want to know what your Navi 21 card can really do, but don't know how to go about it, you're in the right place. A typical case is this: you bought a card and now it runs much slower than the big boys' cards in the Luxx forum. What to do? Table of contents 1. Time Spy: the (almost)...
> 
> 
> 
> 
> www.hardwareluxx.de


Awesome that did it 😜


----------



## alceryes

Elev8rSh0es said:


> And before I get everyone on the site to scream at me (lol jk) for the single eight pin the EVGA supernova 1600 watt Platinum is on its way


Wasn't gonna scream, but was definitely gonna mention it. 
I'd wait till you are sure you've got good power going into the card. Then you can blow it up.


----------



## SkipSauls

New 6900 XT owner here, and loving it. I purchased an XFX SPEEDSTER MERC319 AMD Radeon RX 6900 XT LIMITED BLACK (yes, that's a mouthful) 3 weeks ago, replacing a Gigabyte Gaming OC 3080 Ti in my main build. That may seem odd to some folks, but I really liked a 6700 XT in another build, and the 6900 XT deal was too good to pass up. The 6900 XT is awesome, even if I'm "late to the game" with it.

My main build has evolved over the past few years with various motherboards, CPUs, GPUs, PSUs, memory, AIOs, fans, etc. I'm a geek who loves to build, tinker, and learn, maybe as much or more than actually using the PCs. It's now completely Team Red, and I call it Darth Enthoo. A few pics that explain the naming:

























You all know the specs and so forth for the card better than I do, but the rest of the build is a Gigabyte Aorus X570 Master motherboard, 5900X CPU, 32GB 3600 CL14 OC'ed to 3800 CL16, Seasonic Prime TX 850 PSU, Arctic LiquidFreezer II 420mm, and lots and lots of fans. I prefer to run many fans at low RPMs (30-50%), so there are 6 Arctic P14 ARGB in push-pull on the AIO, 3 P12 ARGB intake on the bottom, 3 P12 ARGB exhaust on top, 1 P12 ARGB exhaust on the rear, and 3 Phanteks T30s as intake on the side. The case is a Phanteks Enthoo Pro 2, which is massive, and all around great to build in and use.

The CPU is running PBO Auto, with a per-core curve optimizer. It will spike to 60C or so at times, but averages in the high 40C to mid 50C range for games. It will spike to 138 watts at times, but generally draws far less. I've OC'ed it to 230+ watts for benchmarks, but that doesn't really do much for games or apps. I love this CPU, even if it's frequently underutilized.

The GPU was originally vertically mounted, which looked cool, but ran fairly hot. The case has lots of airflow, but it was a bit cooler running in the standard configuration. My guess is the GPU benefits from the bottom fans blowing directly on it. That being said, this led me down the path of exploring undervolting, etc., as I'd read on this board and other places that these cards still perform well when not running all-out.

My 3080 Ti wasn't a super hot card, but things like OC, undervolting, etc. didn't really have much impact. Doing much of anything made it unstable, and the card consistently drew 340 or so watts during games.

The 6900 XT, on the other hand, responds very well to undervolting, clock limits, and power limits. I've settled on 2400 MHz, 1145 mV, and -5% power, with great results. I also set a very conservative and consistent fan curve, as I hate the ramping up-and-down of the stock curve. The fans are fairly noisy, far noisier than anything else in my build, and I'll give up a few FPS for a more peaceful experience. With these settings the 6900 XT is maxing out in the 222 watt range (TGP), which is impressively low. Temps max at 57C for the GPU, 57C for the memory junction, and 67C for the hotspot. This is cooler and uses far less power than the 3080 Ti, but the performance is similar and the results are smoother.
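For anyone curious why a small voltage drop cuts power so much: dynamic power scales roughly with frequency times voltage squared. Here's a rough sketch of that relationship; the stock 2500 MHz / 1.175 V / 300 W figures are assumptions for illustration, not AMD's actual power model, and it ignores static/leakage power and the reduced boost excursions a clock cap also brings.

```python
# First-order dynamic power model: P is proportional to f * V^2.
# Stock figures here are assumed for illustration only.
def scaled_power(p_stock_w, f_stock_mhz, v_stock, f_new_mhz, v_new):
    """Estimate new power draw after a frequency/voltage change."""
    return p_stock_w * (f_new_mhz / f_stock_mhz) * (v_new / v_stock) ** 2

# Assume ~300 W at 2500 MHz / 1.175 V stock, limited to 2400 MHz / 1.145 V:
print(round(scaled_power(300, 2500, 1.175, 2400, 1.145)))  # ~273 W
```

The model alone only gets you from an assumed 300 W down to roughly 273 W; the rest of the drop to ~222 W would come from the -5% power limit and the card spending less time at peak boost.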

Dislikes? Mainly around the in-game replays, which are far fussier than the NVIDIA equivalents. There have been times when it won't save a replay, times when the audio encoding couldn't be used with standard media players, and so on. I've figured things out, but would rather this be straightforward and reliable.

I'm as excited about the 7000 series as anyone, and some may wonder why I didn't wait. My plan is to build a new AM5, RDNA3, X670/B650, and DDR5 system, but I tend to wait for the dust to settle a bit. I'm not against Intel or NVIDIA by any stretch, but I do like AMD, and would prefer to stick with them.

P.S. Thanks to all of you for providing posts, responses, and resources on this board. I read many posts when deciding to go for the 6900 XT, and learned a ton. I look forward to more!


----------



## kairi_zeroblade

SkipSauls said:


> TLDR


Good luck with the coil whine on those. I had a 6800 XT Merc and its buzzing killed my mood in gaming, lol. Sold it in a heartbeat!!


----------



## SkipSauls

kairi_zeroblade said:


> Good luck with the coil whine on those..I had a 6800XT Merc and its buzzing kills my mood in gaming..lol..sold it in a heart beat!!


Thanks. I haven't experienced coil whine during gaming, benchmarking, etc. The fans were ramping up aggressively, which I fixed with a flatter fan curve. Maybe I should knock on wood.


----------



## darkchy

Hi, can you show your MPT and Radeon settings?



Elev8rSh0es said:


> Hey guys, long-time member but new 6900 XT owner. I have the XFX Merc 319 6900 XT and love it. I am a little miffed at the conservative Adrenalin software; they should let me blow up my hardware if I want to, lol. What are your clocks, and without reading all 8,000 posts, is there a way to give this thing more wattage? I seem to be stuck at 330 watts and would really like to open this thing up; it can't even hit 2400 sometimes due to the wattage limit. Other than that, I have it overclocked to 2750 with no issues. However, outside the default AMD stress test built into the Adrenalin software, all my games such as Cyberpunk draw so much wattage that it ends up running significantly lower. Thank you guys for any help, and may all your cards be golden.
> View attachment 2576647
> View attachment 2576648
> 
> And before I get everyone on the site to scream at me (lol jk) for the single eight pin the EVGA supernova 1600 watt Platinum is on its way


----------



## lawson67

Think I have messed up my Sapphire RX 6900 XT. I remounted it on the water block; now it works and will boot into Windows fine, but ONLY when the drivers are not installed. As soon as I install the driver for the card, the PC dumps and will only start in safe mode, and then I have to DDU to get back into Windows. Yet as soon as I install the driver again, the PC shuts down again. Any thoughts, anyone? I feel I have tried everything. I must have killed some kind of chip on the card which only lets it run in basic mode??


----------



## harrysun

@lawson67 What does your windows eventlog tell you?


----------



## lawson67

harrysun said:


> @lawson67 What does your windows eventlog tell you?


Looking


----------



## lawson67

harrysun said:


> @lawson67 What does your windows eventlog tell you?


There's no recent entry for this date in the log, not even a WHEA error


----------



## lawson67

What's the best RX 6950 XT to buy at the moment, or should I wait for an RTX 4090?


----------



## deadfelllow

lawson67 said:


> Whats the best RX 6950 XT to buy at the moment or should i wait for a RTX 4090?


Wait for RDNA 3 tbh.


----------



## alceryes

FYI, new Raijintek Morpheus 8069 cooler in the works. Could be a great option for those who want better cooling but not water cooling.









Raijintek Morpheus 8069 VGA cooler for AMD RX 6000 & NVIDIA RTX 30/40 GPUs pictured - VideoCardz.com


Raijintek Morpheus 8069 VGA leaks out It has been a while since the last time we covered an aftermarket air cooler for graphics cards. Morpheus 8069 is the upcoming VGA cooler from Raijintek compatible with AMD and NVIDIA GPUs. Multiple product renders have been leaked ahead of official...




videocardz.com


----------



## Supertone07

lawson67 said:


> Whats the best RX 6950 XT to buy at the moment or should i wait for a RTX 4090?















www.newegg.com


----------






## Supertone07

Both the ASRock 6900 XT and 6950 XT cards at this price point are a great deal. If you can't wait until RDNA 3 comes out, or can't find one (like the 4090s right now), then this isn't a bad way to go. I opted for the 6900 XT as it's the best XTXH overclocking card there is.


----------



## Dude970

Powercolor Red Devil is a great option too. Great sales out there now!


----------



## lawson67

Dude970 said:


> Powercolor Red Devil is a great option too. Great sales out there now!


My PowerColor Red Devil RX 6950 XT arrived today; however, at any power limit over +13 it artifacts, so I am going to send it back. The power limit on the new RX 6950 XT goes up to +20 now, so I believe I should be able to use all of that without expecting artifacts. So annoying; clearly bad silicon, I would have thought


----------



## alceryes

lawson67 said:


> Whats the best RX 6950 XT to buy at the moment or should i wait for a RTX 4090?


I would definitely wait as well. RDNA3 is very close.
Even if you don't go with the next gen, RDNA3's release should just drop the 6000-series cards more in price. Plus BF/CM is right around the corner.


----------



## Dude970

You did the right thing sending it back. I get no artifacts with the power slider at +20. I'm really happy with the card. I was going to wait but got a really good price.


----------



## lawson67

Dude970 said:


> You did the right thing sending it back. I get no artifacts when power slid to +20. I'm really happy with the card. I was going to wait but got a really good price.


I went and bought a 3090 Ti for the same price as my RX 6950 XT, so that's coming tomorrow and the RX 6950 XT is going back. It's been a few years since I've had an NVIDIA


----------



## Dude970

lawson67 said:


> I went and bought a 3090ti for the same price as my RX6950XT so thats coming tomorrow and the RX6950XT is going back, been a few years since ive had a NVidia


Congrats on the new GPU. To me, NVIDIA is getting greedy; I went back to AMD


----------



## RichieRich25

Haven't been on here in a while but I decided to make a quick repaste video of the 6900xt red devil and basically what to pay attention to when remounting the card back onto the heatsink.




hope it helps someone


----------



## Dude970

Thank you


----------



## Supertone07

earphonelnwshop said:


> ASROCK 6900XT OC Formula


Please, how did you get 2400 MHz VRAM clock speed? I've got the same card, the OCF. Thanks so much


99belle99 said:


> Wait a few weeks to see what AMD are cooking up.


Bring out the big guns!!


----------



## 99belle99

Supertone07 said:


> Please how did you get 2400 VRAM clock speed, I’ve got the same card, OCF. Thanks so much


He would have flashed the liquid-cooled reference model BIOS, as that card has faster memory chips.


----------



## DivineLight

Do the 18 Gbps cards actually have different memory chips, or are they just overclocked via the BIOS?

And how is the RMA experience with AMD in Europe? I have a reference 6900 XT since early 2021 and it shuts off my PC. The weird thing is that it happens in non-demanding games like Mass Effect LE or War Thunder. The PSU is a Seasonic Prime Ultra TX-650. Do you think this one is too weak? I switched GPU+PSU with my secondary PC and now run a Fury with just a PX-550 without any issues. I already tried changing PSUs, ECC RAM, BIOS update / base settings, eco mode (5900X).

One strange thing is that BFV will not run with ray tracing. I get no reflections, extremely low FPS, and graphical glitches with DXR on. It has always worked without RT, though, and I didn't care until now. Is my card defective, at least partially? It has been running in the other PC without any new issues for a few days now, aside from some graphical glitches in War Thunder, but at least no shutoffs. This whole issue reminds me of my dying 1080 Ti in 2020, which in the end turned out to be a VRAM defect that only surfaced once a certain amount of usage was reached.

And how are my chances of getting a 6950 XT or XTXH if I RMA it?


----------



## tsamolotoff

That Seasonic PSU might have overaggressive OCP; that's what shuts the PC down suddenly when transient spikes occur. It tends to happen more often with high-FPS workloads.


----------



## ericc64








www.3dmark.com





First with my Asrock 6900XT


----------



## DivineLight

tsamolotoff said:


> That seasonic PSU might have overagressive OCP that's what shuts the PC down suddenly if sudden spikes are happening. It tends to happen more often with high-fps workloads.


Do you think the 650 W Titanium Ultra is too weak for a UV 5900X + 6900? I've been running the Fury stock on the 550 W Platinum for days and haven't had any issue.

What makes me wonder is the graphical glitches in War Thunder and the fact that DXR doesn't work in BFV (or causes extreme glitches, and reflections are missing).

AMD has now offered me a refund for the GPU; do they refund the whole amount? That sounds too good to be true. Their service states they don't have 6900s in stock, and apparently not even 6950s.

I would like to get RDNA3, but I fear that AMD will mimic NVIDIA's pricing. Though I could live with the successor of the 6800 XT if it were 'reasonably' priced, meaning ~1K €. It's pretty likely there will be stock shortages and scalping. Where I live I see people selling 4090s for 2700 €... I doubt that AMD would give decent performance away for free.


----------



## tsamolotoff

DivineLight said:


> Do you think the 650 W Titanium Ultra is too weak for UV 5900X + 6900? I'm running the Fury stock on the 550 W Platinum since days and didn't get any issue.


The Fury X had less spiky power consumption. It's not about overall power requirements, but instantaneous spikes that can be 2x the TDP or even more. Some Seasonic PSUs are tuned to treat such power surges as a sign of a short circuit; that's why they shut down. It's a common occurrence on any modern GPU from Vega and the 1080 Ti onwards; you can find lots of complaints about it on Reddit or OCN. I have a Seasonic-built 1000 W PSU (Cooler Master V1000) that also shuts down with two stock 6900 XTs in some high-FPS games and benchmarks, so I know that first-hand
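To put rough numbers on that (a hypothetical back-of-envelope sketch; the 2x spike factor and the wattages below are assumptions, and real OCP trips on per-rail current, not total system watts):

```python
# Back-of-envelope transient headroom check. All figures are illustrative;
# real spike magnitude varies by card, and OCP is per-rail and vendor-tuned.
def transient_peak_w(gpu_board_w, cpu_w, other_w=75, spike_factor=2.0):
    """Worst-case instantaneous system draw if the GPU spikes to spike_factor x board power."""
    return gpu_board_w * spike_factor + cpu_w + other_w

# A ~300 W 6900 XT spiking 2x next to a ~140 W CPU:
peak = transient_peak_w(300, 140)
print(peak, peak <= 650)  # momentarily exceeds a 650 W unit's rating
```

Average draw in that scenario sits well under 650 W, but a millisecond-scale spike can still cross whatever threshold the PSU's protection is tuned to, which would match the shutdown reports above.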


----------



## jonRock1992

I'm not gonna lie, I've just been using an ARESGAME 1000 W PSU with my XTXH 6900 XT at a 550 W power limit and an overclock of 2870 MHz, and I've had no issues with it lol. This affordable PSU kinda blows my mind. It's the AGT1000, and it's also like $50 cheaper than when I bought it back in 2021.


----------



## Trevbev

RichieRich25 said:


> Haven't been on here in a while but I decided to make a quick repaste video of the 6900xt red devil and basically what to pay attention to when remounting the card back onto the heatsink.
> 
> 
> 
> 
> hope it helps someone


Nice video!
Isn't that an unusually big temperature drop from a repaste?


----------



## DivineLight

tsamolotoff said:


> FuryX had less spiky power consumption. It's not about overall power requirements, but instantaneous spikes that can be 2x of TDP or even more. Some of Seasonic PSUs are tuned to treat such power surges as a sign of short circuit that's why they shut down. A common occurence on any modern GPU starting with Vega and 1080ti and onwards, you can find lots of complaints about that on reddit or OCN. I have 1000W Seasonic PSU (Cooler Master V1000) that also shuts down with two stock 6900xt in some high FPS games and benchmarks so I know that first hand


I had the 1080 Ti and the 2080 Ti FTW3 for years on the same PSU. The only major system change was getting the 5900X this year, but in theory it should be more efficient than my old R7 1700. FPS is capped to 118, and all parts used to be undervolted (I ran stock after the shutoffs started occurring). I bought this PSU with the intention of not cheaping out; it still costs almost 200 €. Not sure some 1000 W Bronze PSU would be better than a premium 650 W.


----------



## tsamolotoff

Well, if you don't believe me, just google something like "seasonic ocp trigger 3080 / 6900xt" and you'll see a lot of topics like this:



https://forums.evga.com/3080-Ti-FTW3-power-cutcrash-OCP-or-SCP-or-OPP-or-on-1300W-Seasonic-Prime-PSU-m3428960.aspx


----------



## ryouiki

DivineLight said:


> I have a reference 6900 XT since early 2021 and it shuts off my PC. The weird thing is that it happens in non-demanding games like Mass Effect LE or War Thunder. The PSU is a Seasonic Prime Ultra TX-650. Do you think this one is too weak?


Might be underpowered, but... I had a Prime Ultra TR-1000 that would also shut down (only with my 6900 XT, not my 3080 Ti). This was some issue with OCP, and I believe there is a post floating around somewhere from JohnnyGuru talking about an issue with feedback on the sense wires, and actually cutting the modular cable or putting a ferrite choke on it to resolve the issue. That PSU was sent back, and both Prime TX-1000s I have now do not have the issue.


----------



## RichieRich25

Here are my stats with an OC on MW2. If I go past 2675 I crash 75% of the time. Probably the game more than the card, though. Anybody able to play above 2675 on air? It's the first game I can't go above 2700 in.
Whats your FPS? MW2 In-game BenchMark 1080p/1440p/4k/5k Extreme. 6900xt/5950x/[email protected]/Overclocked


----------



## PJVol

RichieRich25 said:


> It's the first game I can't go above 2700


Not the 6900 XT here, but still... the game engine doesn't seem to put much stress on the GPU, and so far I'm running the game itself and the benchmark at a 2750 MHz limit with no issues.
For example, AC Valhalla would certainly crash the driver at these settings.


----------



## nordskov

PJVol said:


> Not the 6900XT, but still... the game engine doesn't seem to put much stress onto the GPU, and so far running game itself and benchmark @2750 limit with no issues.
> For example AC Valhalla would certainly crash the driver at these settings.


I play AC Valhalla @ 2900 MHz on my 6900 XT.

Take a look: I benchmarked AC Valhalla at 1440p maxed, and one run with FSR on


----------



## RichieRich25

PJVol said:


> Not the 6900XT here, but still... the game engine doesn't seem to put much stress onto the GPU, and so far running game itself and benchmark @ 2750mhz limit with no issues.
> For example AC Valhalla would certainly crash the driver at these settings.


Could be the new driver then, because I played the campaign on 10.2 and was able to go to 2725, and I played the whole campaign in 4K without any crashes.


----------



## kairi_zeroblade

Not on a 6900 XT either, but no issues in MW2. I'm running a 2700 MHz max slider in Wattman (with an effective clock ranging from 2675-2680 MHz) and have no issues on the latest 22.10.3 beta driver


----------



## DivineLight

ryouiki said:


> Might be underpowered but... I had a Prime Ultra TR-1000 that would also shutdown (only with my 6900XT, not my 3080Ti)... this was some issue with OCP and I believe there is a post floating around somewhere from JohnnyGuru talking about an issue with feedback on the sense wires/actually cutting the modular cable or putting a ferrite choke on it to resolve the issue. That PS was sent back and both Prime TX-1000 I have now do not have the issue.


I sent mine back in 2018 and got the "ULTRA" in return, but then it turned out that it was a faulty 2700X. I wouldn't mod a PSU with a 12-year warranty and void the warranty. So far I'm using the Fury and my 5900X on the PX-550 without issues, but I'm limited to 1440p72. Do you think it's worth it to RMA them to get a free upgrade?

And does anyone know if AMD does a full refund when returning the 6900 XT? It sounds too good to be true. I found one old thread with my issue:


















Reflections are missing when DXR is turned on, even with a completely new system and fresh OS install. Besides this I never encountered any issues, and I didn't care for RT since I have a 120 Hz monitor now. With DXR off I get a smooth 118 FPS.


----------



## RichieRich25

Figured it out now. I had to delete all the COD profiles I had in AMD and recreate them. Now the stuttering is gone and I'm able to go 2600-2800.


----------



## nordskov

RichieRich25 said:


> Figured out now. Had to delete all the COD profiles I had in AMD and recreated the profiles. Now the stuttering is gone and im able to go 2600-2800.
> View attachment 2579394


I ended up buying COD MW2 to see how my system compares lol xD. The game itself is fun. I ran the benchmark to see how I'd do: same settings, 1440p all ultra, 164 FPS; with FSR on at quality, 240 FPS


----------



## nordskov

DivineLight said:


> I sent mine back in 2018 and got the "ULTRA" in return, but then it turned out that it was a faulty 2700X. I wouldn't mod a 12 year warranty PSU and void the warranty. So far I'm using the Fury and my 5900X on the PX-550 without issues. but I'm limited to 1440p72. Do you think its worth to RMA them to get a free upgrade?
> 
> And does anyone know if AMD does a full refund when returning the 6900 XT? It sounds to good to be true, I found one old thread with my issue:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Reflections are missing when DXR is turned on. Even with a completely new system and fresh OS install. Beside of this I never encountered any issues and didn't care for RT since I have a 120 Hz monitor now. DXR off and I get a smooth 118 FPS.


Besides the missing reflections, what do you play at, DX11 or DX12? No matter what settings I choose I get stutters in BFV, and it's the only game. It's SOOOOO annoying; I can't properly play this game without stuttering around the map...


----------



## RichieRich25

nordskov said:


> ended up buying cod mw2 to see how my system is compared lol xD.. the game itself is fun, took the benchmark to see how id do, same settings 1440p all ultra 164 fps, with fsr on at quality 240fps


I honestly love the game and I have zero complaints except for the UI and the fact that a few features are locked. Nice numbers, by the way! Are you on air or water?


----------



## nordskov

RichieRich25 said:


> I honestly love the game and I have zero complaints except for UI and the fact that there are a few features that are locked. Nice numbers by the way! Are you on air or water?


Stock Toxic Extreme water ✌.

Got some gameplay and benchmark videos under my vids. Running 2900 MHz gameplay, everyday stable 😆




Here's actual gameplay, 1440p with FSR on. Don't mind that I suck at the game lol 😂

Here's the benchmark video


----------



## RichieRich25

nordskov said:


> Stock toxic Extreme water ✌.
> 
> Got some gameplay and benchmark undermy vids. Running 2900mhz gameplay everyday stable 😆
> 
> 
> 
> 
> 
> Heres actual gameplay 1440p fsr on dont mind i suck at the game lol 😂
> 
> Heres benchmark video


Nice numbers! The game took a bit of a learning curve for me because I got so used to Warzone, but now I'm killing it. If the 7900 weren't around the corner I would water cool this one, but for now it's holding up.


----------



## tsamolotoff

nordskov said:


> dx 11 or 12


BF5 is broken in DX12, don't use it for online gameplay


----------



## nordskov

RichieRich25 said:


> Nice numbers and the game took a bit of learning curve for me because I got so use to warzone but now I'm killing it. If the 7900 wasn't around the corner I would water cool this one but for now it's holding up.


Don't know if I'm going to upgrade to the 7900 XT this time, as I only play 1440p and all games get killed by this card already.
Here's AC Valhalla. Thinking the next big upgrade might be 240 Hz 4K and an 8900 XT when that time comes; guess that should work great in 4K with RT on as well. But for now, even with ray tracing I get above 144 FPS in all games, and my screen only supports 144 Hz anyway, so it should be fine for 2 more years


----------



## nordskov

tsamolotoff said:


> BF5 is broken in DX12, don't use it for online gameplay


Hmm, tried DX11 but I still get stutters like ****


----------



## RichieRich25

nordskov said:


> Dont know if im going to upgrade to 7900xt this time as i only Play 1440p and all games gets killed by this card already.
> heres ac Valhalla. Thinking next Big upgrade might be 240hz 4K and 8900xt when that time comes. Guess that should work great in 4K with rtx on aswell. But for now even with raytracing i get above 144fps in all games and my screen only supports 144hz anyway so should be fine 2 more years


I play on a 1080p 390 Hz monitor and my 2nd monitor is 4K 120 Hz. But I'm going to give my son my 6900 XT and get the new one, because I gave him my 1440p 165 Hz monitor and the 6600 XT can't quite keep up in certain titles


----------



## nordskov

RichieRich25 said:


> I play on a 1080p 390hz monitor and my 2nd monitor is a 4k120hz. But I'm going to give my son my 6900xt and I'm going to get the new one because I have him my 1440p 165hz monitor and the 6600xt can't keep up that much in certain titles


390 Hz, damn that's sick. I'm looking at the moment for a 240 Hz 1440p IPS display that doesn't cost a fortune, or a 1080p 280+ Hz just for CS:GO. Every other game I play on my 31.5-inch 1440p, so I just need a smaller monitor for CS:GO and COD MW2 etc., you know, FPS games. And when the time comes that 4K 240 Hz looks like the new standard and doesn't cost like 1500 euro for a screen, I'm in. I would love to be able to get an 8900 XT and a 4K 240 Hz screen both for max 2000 euro or something like that. I don't mind spending a lot on a GPU, hence why I bought the 6900 XT Toxic Extreme, but since it's still able to max out all games, I wouldn't get anything out of a new card, so it seems a waste. It would be cool to have double the FPS, but since I can easily drive a 240 Hz monitor as it is now, I don't see a reason to get a new one yet


----------



## RichieRich25

I'm using the Acer 25xv2q 1080p 390 Hz IPS monitor and I have to say it's such a good monitor; I was able to get one at $279 (love Microcenter). When using virtual resolutions I swear I can't tell the difference in most games from a real 1440p/4K monitor.


----------



## DivineLight

nordskov said:


> besides the missing reflexions, what do you play at? dx 11 or 12? no matter what settings i choose i get stutters in bf5 as the only game.. its SOOOOO annoying i cant try this game for real without stuttering around the map...


DX12. Note that you'll have stutters in every map until the shader cache builds up.



RichieRich25 said:


> I'm using the acer 25xv2q 1080p 390hz ips monitor and I have to say it's such a good monitor and I was able to get one at 279(love microcenter). When using virtual resolutions I swear I can't tell the difference in most games from a real 1440p/4k monitor.


Maybe you never used one before, but the difference is very noticeable. I normally play on a 4K 120 Hz TV, but currently I'm limited to 1440p72 since the cable doesn't do more. I tried 1080p120 and it looks pretty ugly; not only the edges in games, the text isn't as sharp as I'm used to either.

You don't need 1000 Hz; 120 is already perfect, and that's what's just becoming possible in 4K. The LG OLEDs are now commonly available under 1000 €, and no monitor can beat that price/performance. But you will want a 7900 to drive it. The 6900 is nice, but it only does 60-90 FPS in AAA titles depending on settings. A 4090 can do RDR2 maxed in 4K at 120 FPS, which gives me some hope AMD may deliver such an experience in 'affordable' ways.

If you run 1080p at over 120 FPS you will most likely be CPU limited.
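The "cable doesn't do more" limit is mostly bandwidth arithmetic. A rough sketch of uncompressed video bandwidth (8-bit RGB assumed; it ignores blanking overhead, which adds roughly 15-20% with reduced-blanking timings):

```python
# Approximate uncompressed video signal bandwidth:
# pixels per frame * refresh rate * bits per pixel.
def video_gbps(width, height, hz, bits_per_pixel=24):
    return width * height * hz * bits_per_pixel / 1e9

print(round(video_gbps(3840, 2160, 120), 1))  # 4K120   -> 23.9 Gbps before blanking
print(round(video_gbps(2560, 1440, 72), 1))   # 1440p72 -> 6.4 Gbps
```

So 4K120 needs well beyond HDMI 2.0's 18 Gbps link (about 14.4 Gbps of effective data rate), while 1440p72 fits easily, which is consistent with an older cable or port capping out there; HDMI 2.1 (or DP 1.4 with DSC) is what makes 4K120 practical.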


----------



## nordskov

DivineLight said:


> DX12. Note that you have shutters in every map until the shadercache builds up.
> 
> 
> 
> Maybe you never used one before, but the difference is very noticeable. I normally play on a 4K120 TV but currently I'm limited to 1440p72 since the cable doesn't do more. I tried 1080p120 and it looks pretty ugly. Not only the edges in the games, the text isn't as sharp as I'm used to.
> 
> You don't need 1000 Hz, 120 is already perfect and thats whats just becoming possible in 4K. The LG OLEDs are now commonly avaiable under 1000 € - No monitor can beat that price / performance. But you will want a 7900 to drive it. The 6900 is nice, but it does only 60-90 FPS in AAA titles depending on settings. A 4090 can do RDR2 maxed in 4K with 120 FPS, which gives me some hope AMD may deliver such experience in 'affordable' ways.
> 
> If you run 1080p at over 120 you will most likely be CPU limited.


Well, the problem is the stutter is there for the entire map, for a full map's length of gameplay. I never had this with my old RTX 2080. It's the only game I play where I get stutters that I can feel. It drove me so nuts I ended up uninstalling the game. Tried DX11 and 12, tried low/medium/high presets, tried with and without ray tracing, new drivers, etc. Nothing seemed to work. Even in singleplayer it was awful


----------



## Acegr

6900 XT EKWB Zero Edition at stock wattage (332 W, going with +15% to 385 W). Nothing touched in MPT yet. Gelid Ultimate pads, Gelid GC-Extreme paste. Temps stable for 2 months now. Seems good? (Enabling fast timings drops the score.)
Any suggestions on what I should try next?

PS: the driver I'm using is AmerNime Zone 22.10.1 and it sees my GPU as a 6950 XT; it's not one, though ;p


----------



## nordskov

Acegr said:


> 6900xt ekwb zero edition with stock watt (332 going with 15% at 385w). Nothing touched at MPT yet. Gelid ultimate pads, gelid gc extreme paste. Temps stable for 2 months now. Seems good? (Enabling fast timings drop the score lower).
> Any suggestions what I should try next?
> 
> PS the driver Im using is a Amernime Zone 22.10.1 and it sees my gpu as a 6950 xt, it's not though ;p
> 
> View attachment 2579599
> View attachment 2579599


You're sure fast timings drop the score?? Maybe try fast timings and 2120 MHz or so?? Should make it go up 🤔 BTW, temps seem OK; nothing bad, but not good either. Mine is around 64C, hotspot 82. But that's with 1.267 V and a 480 W limit, +15% in the driver. In Time Spy it peaks at 532 W.

It's running like this for daily gaming at 2900+ MHz boost, and does around 25400-25600 GPU score with the newest 22.10.3 driver.

Time Spy max boost is only 2872 though, as higher is unstable; even at more voltage it won't go through. Even though games and Fire Strike are able to go all the way to like 3000 MHz boost.

Fire Strike is around a 73000 GPU score if I remember correctly. Running a 5800X with PBO


----------



## RichieRich25

DivineLight said:


> DX12. Note that you have shutters in every map until the shadercache builds up.
> 
> 
> 
> Maybe you never used one before, but the difference is very noticeable. I normally play on a 4K120 TV but currently I'm limited to 1440p72 since the cable doesn't do more. I tried 1080p120 and it looks pretty ugly. Not only the edges in the games, the text isn't as sharp as I'm used to.
> 
> You don't need 1000 Hz, 120 is already perfect and thats whats just becoming possible in 4K. The LG OLEDs are now commonly avaiable under 1000 € - No monitor can beat that price / performance. But you will want a 7900 to drive it. The 6900 is nice, but it does only 60-90 FPS in AAA titles depending on settings. A 4090 can do RDR2 maxed in 4K with 120 FPS, which gives me some hope AMD may deliver such experience in 'affordable' ways.
> 
> If you run 1080p at over 120 you will most likely be CPU limited.


1080p is very noticeable vs 1440p and 4K, but when using virtual resolution and setting your 1080p monitor to a 1440p or 4K resolution, the difference isn't that noticeable, especially in fast-paced games. I have a 1440p ASUS TUF IPS 165 Hz, an LG 55 OLED, and my 1080p Acer, and when I use virtual resolution I swear the difference is minimal


----------



## Acegr

nordskov said:


> You're sure fast timings drop your score? Tried maybe fast timings and 2120 MHz or so? Should make it go up 🤔 BTW temps seem OK. Nothing bad, but not good either. Mine is around 64C, hotspot 82C. But that's with 1.267v and a 480w limit +15% in the driver. In Time Spy it peaks at 532w.
> 
> It's running like this for daily gaming at 2900+ MHz boost, and does around 25400-25600 GPU score with the newest 22.10.3 driver.
> 
> Time Spy max boost is only 2872 though, as higher is unstable. Even at more voltage it won't go through, even though games and Firestrike are able to go all the way to like 3000 MHz boost.
> 
> Firestrike is around 73000 GPU score if I remember correctly. Running a 5800X with PBO.


LM?


----------



## Supertone07

nordskov said:


> Hmm, tried DX11 but I still get stutters like ****


Make sure you have "Enhanced Sync" turned off in the AMD Adrenalin driver, and in MSI Afterburner properties tick "Disable ULPS". I also tick "Extend official overclocking limits".


----------



## DivineLight

Supertone07 said:


> Make sure you have “Enhanced Sync” turned off within the AMD Adrenalin Driver and from MSI afterburner properties “Disable ULPS” I also check off “Extend official Overclocking limits”


Weird, I have that on (enhanced sync) - In the Freesync Premium guide it said to keep it on. But turn the future frame rendering off. I had that issue too once. And memory restriction in some games.


----------



## Supertone07

DivineLight said:


> Weird, I have that on (enhanced sync) - In the Freesync Premium guide it said to keep it on. But turn the future frame rendering off. I had that issue too once. And memory restriction in some games.


Try it, my Fortnite was a stuttering mess until I did a few things, including the above and it helped the most.


----------



## Supertone07

Asrock 6900 XT OC Formula, almost got 23,000 graphics score. I'll keep trying. Anyone know when the Monster Radeon App is supposed to be available?


----------



## 99belle99

Are you using MPT? I could hit above 23000 every time with a reference card. I picked it up before AMD had released the XTXH models and was always pissed when I saw the 23-25000+ scores those cards were hitting.


----------



## nordskov

Instead of the monster thing you talk about, just use MPT. I get a 25600 score daily on my Toxic Extreme 6900xt with 1.267v and 480w +15% PL, running Time Spy at 2872 and 2132 fast timings.

And for games I use the same settings, just upping boost to 2937 MHz; then I maintain 2900-2910 MHz during games ✌


----------



## Veii

Supertone07 said:


> Asrock 6900 XT OC Formula, almost got 23,000 graphics score. I'll keep trying. Anyone know when the Monster Radeon App is supposed to be available?


Very soon 
This month definitely. RDNA3 announcement is tomorrow too
It's common to pair big events with similar big events, riding the same hype train


----------



## aleek2die

I can hit over 23k, but my junction temps are over 100. MSI Gaming Z Trio 6900 xt, not sure if I should return it? I guess I'll wait till announcement tomorrow.


----------



## 99belle99

aleek2die said:


> I can hit over 23k, but my junction temps are over 100. MSI Gaming Z Trio 6900 xt, not sure if I should return it? I guess I'll wait till announcement tomorrow.


Why would you return it? That's how these cards run. The problem is you need better cooling.


----------



## DivineLight

Did anyone ever return a card to AMD and know if they refund the whole amount you paid for it?


----------



## timd78

I'm new to this card. Does overclocked core clock scale fairly linearly with framerate?
What sort of power limits are people dailying on water?
What do 6900s tend to do if set to pull around 400 watts?


----------



## Acegr

nordskov said:


> Instead of the monster thing you talk about, just use MPT. I get a 25600 score daily on my Toxic Extreme 6900xt with 1.267v and 480w +15% PL, running Time Spy at 2872 and 2132 fast timings.
> 
> And for games I use the same settings, just upping boost to 2937 MHz; then I maintain 2900-2910 MHz during games ✌


could you please show your mpt settings?

Idk what's going on with mine. I have an XFX 6900 XT EKWB Zero, and at stock I can do 1144mv and 2700 core and 2150 memory fine. If I raise the wattage to, let's say, 400 or something so I can clock better, the bench fails. Could the issue be that I'm only raising my power limit? No idea.


----------



## tsamolotoff

timd78 said:


> I'm new to this card. Does overclocked core clock scale fairly linearly with framerate?
> What sort of power limits are people dailying on water?
> What do 6900s tend to do if set to pull around 400 watts?


1) mostly yes, depends on the resolution, of course, in 4K memory clocks are more important than in 1080p or 1440p
2) depends on the game / task. My 6900xt runs at 2800/2125 daily (1.25v vdepmin, droops to 1.17v under typical load), consumes about 330-400W depending on the game (much more in some benchmarks like TimeSpy, where peak power consumption can be above 475W)
3) Card will throttle down or clock-stretch if your power limits are below what it requires, so this question is kind of pointless




Acegr said:


> If I raise the wattage to, let's say, 400 or something so I can clock better, the bench fails.


Increase the PL to 550W and start tuning the clock with no power limits, otherwise your GPU will downclock


----------



## Acegr

tsamolotoff said:


> 1) mostly yes, depends on the resolution, of course, in 4K memory clocks are more important than in 1080p or 1440p
> 2) depends on the game / task. My 6900xt runs at 2800/2125 daily (1.25v vdepmin, droops to 1.17v under typical load), consumes about 330-400W depending on the game (much more in some benchmarks like TimeSpy, where peak power consumption can be above 475W)
> 3) Card will throttle down or clock-stretch if your power limits are below what it requires, so this question is kind of pointless
> 
> 
> 
> Increase the PL to 550W and start tuning the clock with no power limits, otherwise your GPU will downclock


Give me settings please. At 1200mv, no matter how many watts I set the limit to, it fails the bench above 2700. Heck, it even fails at 2700, even though it benches fine at the stock wattage.


----------



## tsamolotoff

Acegr said:


> Give me settings please. At 1200mv, no matter how many watts I set the limit to, it fails the bench above 2700. Heck, it even fails at 2700, even though it benches fine at the stock wattage.


As I said, it's because it's not actually running above 2700 if you leave the stock limits; it downclocks itself to some lesser value (also, there's a predefined V/F curve that kicks in at some point above 2500 MHz, so if you set any voltage in Wattman it actually does nothing). I just set power limits to 520W/480A, disabled all DS_ checkboxes and set Vdepmin on (with 1.25v/1.25v for vcore and 1.10v/1.10v for SoC).


----------



## energie80

tsamolotoff said:


> As I said, it's because it's not actually running above 2700 if you leave the stock limits; it downclocks itself to some lesser value (also, there's a predefined V/F curve that kicks in at some point above 2500 MHz, so if you set any voltage in Wattman it actually does nothing). I just set power limits to 520W/480A, disabled all DS_ checkboxes and set Vdepmin on (with 1.25v/1.25v for vcore and 1.10v/1.10v for SoC).


min and max voltages?


----------



## alceryes

timd78 said:


> I'm new to this card. Does overclocked core clock scale fairly linearly with framerate?
> What sort of power limits are people dailying on water?
> What do 6900s tend to do if set to pull around 400 watts?


300-350W is the sweet spot for performance per watt. Above that, the returns per watt diminish greatly.
400W should get you around 24k in Time Spy graphics score, assuming the card is tuned properly with MPT.
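To put rough numbers on that diminishing-returns point, here is a small sketch using scores quoted in this thread (they come from different cards and bins, so treat the ratios as illustrative only, not a measured curve):

```shell
# Time Spy graphics score per watt, using rough numbers quoted in this
# thread (different cards/bins -- illustrative only, not one card's curve).
eff() { awk -v s="$1" -v w="$2" 'BEGIN { printf "%.1f", s / w }'; }

E330=$(eff 23000 332)   # ~stock power limit, reference-class result
E400=$(eff 24000 400)   # tuned with MPT
E480=$(eff 25500 480)   # big-power daily tune
echo "pts/W: $E330 $E400 $E480"   # efficiency falls as power rises
```

Even with mixed cards, the score-per-watt ratio falls steadily as the power limit climbs, which is the sense in which 300-350W is the sweet spot.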


----------



## tsamolotoff

energie80 said:


> min and max voltages?











[Guide] - Navi 21 Max Overclocking Tutorial [6800 XT / 69X0 XT]


If you want to know what your Navi 21 card can really do, but don't know how to go about it, you're in the right place. A typical case: card bought, and now it runs much slower than for the big boys in the Luxx forum. What to do? Table of contents 1. Time Spy: the (almost)...




www.hardwareluxx.de


----------



## Acegr

tsamolotoff said:


> [Guide] - Navi 21 Max Overclocking Tutorial [6800 XT / 69X0 XT]
> 
> 
> If you want to know what your Navi 21 card can really do, but don't know how to go about it, you're in the right place. A typical case: card bought, and now it runs much slower than for the big boys in the Luxx forum. What to do? Table of contents 1. Time Spy: the (almost)...
> 
> 
> 
> 
> www.hardwareluxx.de


Time Spy keeps crashing; CPU core reaches 55C, hotspot 85C. What's wrong with my settings?


----------



## 99belle99

What die do you have, XTX or XTXH? I had an XTX and couldn't run Time Spy above 2640. And I was lucky to get it to run at those clocks, but I did have a reference card on stock cooling.

*Edit: *Never mind, I just saw you have a 6950XT and I have no experience with them.


----------



## Acegr

99belle99 said:


> What die do you have, XTX or XTXH? I had an XTX and couldn't run Time Spy above 2640. And I was lucky to get it to run at those clocks, but I did have a reference card on stock cooling.


It's an XFX 6900 XT EKWB Zero; HWiNFO shows XTXH. Considering the memory slider can be pulled past 2150, I guess that's correct.


----------



## 99belle99

Acegr said:


> It's an XFX 6900 XT EKWB Zero; HWiNFO shows XTXH. Considering the memory slider can be pulled past 2150, I guess that's correct.


I don't know why Radeon settings is showing 6950XT. But anyway, what score can you get in Time Spy without messing with voltages, by just increasing the power limit to enable higher frequencies?

I've a feeling your messing with the voltages is what's causing Time Spy to crash.


----------



## Acegr

99belle99 said:


> I don't know why Radeon settings is showing 6950XT. But anyway, what score can you get in Time Spy without messing with voltages, by just increasing the power limit to enable higher frequencies?
> 
> I've a feeling your messing with the voltages is what's causing Time Spy to crash.


Without messing with MPT: stock 332w +15% - 1144mv - 2700-2600-2150


----------



## ajleeuk

Hi, I'm trying to undervolt my XFX 6900 XT Merc 319 using MPT. I don't want the card to pull more than 250-260 watts. Do I need to change anything else?


----------



## damric

You guys all pushing 350-400w and I have mine tuned for like 125w 😎


----------



## nordskov

damric said:


> You guys all pushing 350-400w and I have mine tuned for like 125w 😎


Forget 300-400w, I'm running 540w max daily 😂😂


----------



## ajleeuk

damric said:


> You guys all pushing 350-400w and I have mine tuned for like 125w 😎


Hi, could you share your MPT settings? I want mine to max out around 250 watts.


----------



## damric

ajleeuk said:


> Hi, could you share your MPT settings? I want mine to max out around 250 watts.


I'm just using the driver settings to keep the card at 800mv and 2000MHz. It's fast enough this way for the games I play.


----------



## senzu

Hi! I have an RX 6900 XT coming soon, but my power supply only has 2 independent 6+2-pin branches (be quiet! Pure Power 11 700W Gold). I have ordered a Corsair RM1000x, but it will take over a week to arrive. Until it arrives, can I use the card with reduced performance (undervolting and clock reduction) with only 2 of the 3 connectors populated, without damage?


----------



## Veii

senzu said:


> Until it arrives, can I use the card with reduced performance (undervolting and clock reduction) with only 2 of the 3 connectors populated, without damage?


Please write out the model.
Most of the XTX(H) cards refuse to start with only 2 of 3 ports populated.


----------



## senzu

Veii said:


> Please write out the model.
> Most of the XTX(H) cards refuse to start with only 2 of 3 ports populated.


It's a PowerColor Red Devil


----------



## Veii

senzu said:


> It's a PowerColor Red Devil


Mm, thank you.
Sadly it needs its 3x 8-pins.
You "should not" run it with pigtails - but honestly, the power limit is soo low at stock // it won't matter here.
If you can. But my OCF doesn't boot without 3x 8-pins either, and the older 6800XT Devil didn't without all of them populated.


----------



## Acegr

My XFX 6900 XT EKWB is an XTXH. Is there a reason to change the BIOS to LC? Will I be able to push memory above 2150 without crashing? (It's unlocked up to 3000, but above 2150 it fails miserably atm.)


----------



## pajdek

Hey guys, could someone with a Sapphire 6900 XT Nitro+ SE upload his BIOS? I just want to check if mine is altered or not.


----------



## 99belle99

pajdek said:


> Hey guys, could someone with a Sapphire 6900 XT Nitro+ SE upload his BIOS? I just want to check if mine is altered or not.


I'm pretty sure you can get that on techpowerup.


----------



## pajdek

99belle99 said:


> I'm pretty sure you can get that on techpowerup.


They have an older version; mine is 020.001.000.071.000000, 2021-12-15 23:13.


----------



## 99belle99

pajdek said:


> They have an older version; mine is 020.001.000.071.000000, 2021-12-15 23:13.


Unless things have changed, you cannot alter a BIOS for these cards, so I don't think yours has been messed with.


----------



## Acegr

I'm trying to flash the LC BIOS. Running on Linux, it says it cannot find the adapter. Running on Windows, it can find the adapter, but it doesn't accept the -f command to force flash, resulting in an SSID mismatch. Any ideas? I have an XFX 6900 XT EKWB Zero.


----------



## damric

Acegr said:


> I'm trying to flash the LC BIOS. Running on Linux, it says it cannot find the adapter. Running on Windows, it can find the adapter, but it doesn't accept the -f command to force flash, resulting in an SSID mismatch. Any ideas? I have an XFX 6900 XT EKWB Zero.


Bad idea


----------



## Acegr

damric said:


> Bad idea


how come? Even if something went bad, it has dual bios.


----------



## alceryes

Supertone07 said:


> Bring out the big guns!!
> View attachment 2578401


lol! I can _hear_ this picture!


----------



## Veii

Acegr said:


> I'm trying to flash the LC BIOS. Running on Linux, it says it cannot find the adapter. Running on Windows, it can find the adapter, but it doesn't accept the -f command to force flash, resulting in an SSID mismatch. Any ideas? I have an XFX 6900 XT EKWB Zero.


PM me ~ if your card has a 2ndary ROM chip
Or you have a hardware flasher

Linux flash needs vbflash 4.69 // be sure to -unlockrom 0
For windows, i might be the only person that knows how
But i/we wait for RDNA3 launch ~ if i do this for you, only private else you'll ruin it for everyone when it gets patched on next vbflash
Too early to spoiler
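For readers following along, a typical amdvbflash/vbflash session looks roughly like this. This is a sketch only, not Veii's private method: the adapter index 0 and the file names are placeholders, and force-programming past an SSID mismatch can brick a card, so always save a backup first and have a recovery path.

```shell
# Dry-run sketch of an amdvbflash session. DRY_RUN=1 (default) only prints
# the commands; set DRY_RUN=0 to actually run them as root/admin.
# Adapter index 0 and the .rom file names are placeholders.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run amdvbflash -i                     # list adapters, confirm the index
run amdvbflash -s 0 backup.rom        # save the current vBIOS first
run amdvbflash -unlockrom 0           # unlock the ROM before writing
run amdvbflash -f -p 0 new_bios.rom   # force-program despite SSID mismatch
```

If the tool still refuses, that usually means the ROM is write-protected or the vBIOS is rejected for the card's PCI ID, which is exactly the situation discussed later in this thread.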


----------



## Acegr

Veii said:


> PM me ~ if your card has a 2ndary ROM chip
> Or you have a hardware flasher
> 
> Linux flash needs vbflash 4.69 // be sure to -unlockrom 0
> For windows, i might be the only person that knows how
> But i/we wait for RDNA3 launch ~ if i do this for you, only private else you'll ruin it for everyone when it gets patched on next vbflash
> Too early to spoiler


this guy is not lying, such a boss. Experimenting on my card now on LC bios haha


----------



## timd78

What do you guys target first: setting max clocks or tuning down voltage?


----------



## 99belle99

Acegr said:


> Without messing with MPT , stock 332w +15% - 1144mv - 2700-2600-2150


No, you use MPT to increase the power limit so you get higher speeds, but do not touch the voltage. But if you are reaching 2600-2700 MHz at stock, that is pretty good going.


----------



## Acegr

99belle99 said:


> No, you use MPT to increase the power limit so you get higher speeds, but do not touch the voltage. But if you are reaching 2600-2700 MHz at stock, that is pretty good going.


This result is with these MPT settings. If I raise the power limit and TDC, Time Spy fails with the same AMD settings. It also fails if I clock higher, etc. No idea what I'm doing wrong. 
This is with the LC BIOS now.

Edit: temps are like this


----------



## 99belle99

Acegr said:


> This result is with these MPT settings. If I raise the power limit and TDC, Time Spy fails with the same AMD settings. It also fails if I clock higher, etc. No idea what I'm doing wrong.
> This is with the LC BIOS now.
> 
> Edit: temps are like this


I do not know why it is crashing on you. Maybe a bad bin. But it's a better bin than my reference 6900 XT on stock cooling. I could only get a 23,300 graphics score on my card, and that was in cold weather last winter. You would get nowhere near that with a reference card while in a case.

What is your setup? Is it in a case, as you'd have it for playing games? If so, that score is pretty good. People do crazy things to get high scores and think outside the box in cold weather.


----------



## Acegr

99belle99 said:


> I do not know why it is crashing on you. Maybe a bad bin. But it's a better bin than my reference 6900 XT on stock cooling. I could only get a 23,300 graphics score on my card, and that was in cold weather last winter. You would get nowhere near that with a reference card while in a case.
> 
> What is your setup? Is it in a case, as you'd have it for playing games? If so, that score is pretty good. People do crazy things to get high scores and think outside the box in cold weather.


It's in a closed case, yes. I have an XFX EKWB Zero 6900 XT flashed with the LC BIOS. I use a custom loop with 3 radiators and LM on the GPU.


----------



## Henrik9979

Hello, since I haven't found much about flashing a different BIOS on the MSI RX 6900 XT Trio X Gaming, I decided to test it myself.

Here are my results:
- MSI Trio Z and MSI RX 6950 XT BIOS:
Black screen on boot-up, making it impossible to enter the system BIOS, but it would give a picture as soon as it reached Windows.
Didn't test overclocking potential, since not being able to boot into the BIOS is a deal breaker for me.

- vBIOS from a different brand with XTX bin and XTXH bin:
WARNING: DON'T DO IT!!!
It will brick the card completely. You will not be able to boot your computer again with the card mounted.

So any backup solutions like using a secondary GPU, integrated GPU, or a DOS or Linux thumb drive will not be an option.

There are only two ways to bring it back to life.
Option 1: Buy a preprogrammed BIOS chip from eBay.
Option 2: Use a CH341A programmer and program the chip yourself.

I had a CH341A programmer anyway, so I wasn't too concerned about bricking the card, but I was surprised to see it brick the whole system.

So unless someone knows how to edit the original BIOS to bake in your personal settings from MorePowerTool, you're stuck with MPT.

The reason I would like to edit the BIOS is that "1usmus" is working on an auto-overclocking tool called Hydra for RDNA. My card is water cooled and has huge temperature headroom, but Hydra restarts the driver, disabling MPT, and you are left with the stock power limit and stock voltage, making the program obsolete for experienced overclockers.

I might try the Trio Z BIOS again and see if it actually changes anything or just locks the card in power save mode.

_UPDATE_ I tried flashing the Trio Z BIOS again, but it bricked the card and I had to use the programmer to bring it back.
So if you have a Trio X, don't even bother thinking about using a different vBIOS.

I'm thinking about asking MSI support if they could make a custom BIOS with a higher power limit. I know they can, but I'm pretty sure it would be a waste of time asking because I don't think they'd want to.
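For anyone facing the same recovery (Option 2 above), the CH341A route is usually driven with flashrom rather than a vendor GUI. A hedged sketch follows: `ch341a_spi` is flashrom's real programmer name, but the file names are placeholders, clip contact is often flaky, and you may need `-c` to pick the exact flash chip model.

```shell
# Dry-run sketch of CH341A recovery with flashrom. DRY_RUN=1 (default)
# only prints the commands. File names are placeholders; run as root.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run flashrom -p ch341a_spi -r readback1.bin   # read the chip twice and
run flashrom -p ch341a_spi -r readback2.bin   # diff the dumps to check clip contact
run flashrom -p ch341a_spi -w known_good.bin  # write the good image (auto-verifies)
```

Reading twice and comparing the dumps before writing is the usual sanity check that the clip is seated; a write on a badly seated clip is how a bricked card turns into a dead chip.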


----------



## Veii

Henrik9979 said:


> The reason I would like to edit the BIOS is that "1usmus" is working on an auto-overclocking tool called Hydra for RDNA. My card is water cooled and has huge temperature headroom, but Hydra restarts the driver, disabling MPT, and you are left with the stock power limit and stock voltage, making the program obsolete for experienced overclockers.


First Hydra init resets the PPTable
Hydra is being worked on to coexist and support MPT - it already does.
Hydra's "thunderbolt" resets are to apply PPTable profiles, not to wipe profiles


Henrik9979 said:


> - MSI Trio Z and MSI Rx 6950 xt bios:


Any KXTX on XTXH will fail on GOP initialization on startup
You can spoof that


Henrik9979 said:


> I'm thinking about asking MSI support if they could make a custom BIOS with a higher power limit.


AMD enforces security; they will not.
Powerplay table changes need signing if injected - but unless you are a macOS user, it's kinda useless, because Linux and Windows can load one anyway.


Henrik9979 said:


> Hello, since I haven't found much about flashing a different BIOS on the MSI RX 6900 XT Trio X Gaming, I decided to test it myself.


Unfortunate that you didn't find what you're looking for
But it is possible to run 6950XT bios on any card~


----------



## Veii

Here is something custom (for XTXH), if you want to play.
Just don't forget your source~
Also, my BIOSes are blacklisted in 3DMark.

If anybody wants to bother with UL (for me), go and try to complain so they verify it 
I don't feel like fighting with them.

For XTXH users, you need to use MPT from this.
Change the settings shown, and increase memory a bit, as that was for 18gbps chips.
Then you won't get driver locked.


----------



## Henrik9979

"Hydra is being worked on to coexist and support MPT - it already does."

What do you mean by "it already does"? 
When I use MPT to set max core voltage to 1100mv and run Hydra, the driver resets and I can see that the voltage is back up to 1175mv. The same with SoC voltage and the power limit.

It's nice to know the plan is for them to work together, but at the moment it doesn't seem like it, or I'm using Hydra wrong.


----------



## Veii

Henrik9979 said:


> What do you mean by it already does?
> When I use MPT to set max core voltage to 1100mv and run Hydra, the driver resets and I can see that the voltage is back up to 1175mv. The same with soc voltage, and power limit.
> 
> It's nice to know the plan is they should work together but at the moment it doesn't seems like it or I'm using Hydra wrong.


Sorry, you're using it wrong 
I've worked with Yuri on it for ~2 months, and made sure both can co-exist.
MPT also got the extended curve options since 1.3.16 (.18 is latest).

I don't know yet if he'll drop me for RMP, so 6900/6950 users won't get theirs ~ but we'll see, it's up to him.
I'm just waiting for the RDNA 3 launch atm.


Veii said:


> Sorry, you're using it wrong


Erase everything. Build your initial curve with Hydra.
Then edit the same curve with MPT.


----------



## Henrik9979

Veii said:


> Sorry, you're using it wrong
> I've worked with Yuri on it for 1-2 months, and made sure both can co-exist.
> MPT also got the extended curve options since 1.3.16.
> 
> I don't know yet if he'll drop me for RMT, so 6900/6950 users won't get theirs ~ but we'll see, it's up to him.
> I'm just waiting for the RDNA 3 launch atm.


Very cool! Maybe you can guide me on how to use it properly.

About the BIOS you sent me: did you suggest I should try and flash it to my Trio X? Or did you mean load its settings in MPT?


----------



## Veii

Henrik9979 said:


> Very cool! Maybe you can guide me on how to use it properly.
> 
> About the BIOS you sent me: did you suggest I should try and flash it to my Trio X? Or did you mean load its settings in MPT?


You can SPI flash it while you're at it - if you want.
But it needs the shown values replicated, so you won't get driver locked.
KXTX defaults are higher, which cannot be replicated. Voltage ranges are fused on the chip; sadly nothing can be done there.

EDIT:
This is for the 73AF PCI ID; it won't work on XTX (73BF).


----------



## Henrik9979

Veii said:


> You can SPI Flash it, while you're at it - if you want
> But it needs the showed values replicated, soo you wont get driver locked.
> KXTX defaults are higher, which can not be replicated. Voltage ranges are fused on the chip. Sadly nothing can be done there
> 
> EDIT:
> This is for 73AF PCI-ID, it wont work on XTX (73BF)


Yeah, mine is 73BF; that's why I kept bricking my card. The only BIOSes that work are the MSI ones with 73BF. A TOXIC BIOS with 73BF bricks it too.


----------



## Veii

Henrik9979 said:


> A TOXIC bios with 73BF bricks it too.


Can you upload that one, so I can take a look?
There are two 73BF variants, a 6800XT and a 6900XT (non-X).
They differ in CUs.


----------



## Henrik9979

This is the one I use at the moment:

MSI RX 6900 XT VBIOS
16 GB GDDR6, 500 MHz GPU, 2000 MHz Memory
www.techpowerup.com

This is the Toxic I have tried:

AMD RX 6900 XT VBIOS
16 GB GDDR6, 500 MHz GPU, 2000 MHz Memory
www.techpowerup.com

Here you go.

----------



## Veii

What card do you own?
This Toxic will fail with USB overcurrent issues - up to the IO.
Got any GPU-Z screenshot from before?


----------



## Henrik9979

Veii said:


> What card do you own ?
> This Toxic will fail by USB overcurrent issues - up to IO
> Got any gpu-z screenshot from before ?


I own an MSI RX 6900 XT Trio X Gaming.
I don't have a GPU-Z screenshot at the moment, but it was one of the first RX 6900 XTs, before AMD revealed the XTXH etc.


----------



## Veii

Henrik9979 said:


> I own a MSI Rx 6900 xt Trio X Gaming.


Will this one hit the USB overcurrent error?


----------



## Henrik9979

If it's the same as this BIOS, then it won't work either. I already tried it.


----------



## Veii

Henrik9979 said:


> If it's the same as this BIOS


It's not anything public.
If you don't feel like trying, I'm gonna focus on something else now~


----------



## Henrik9979

Veii said:


> Its not anything public


I could try. Do you think it will flash with AMDVBFlash, or do I need to flash it from the command prompt?


----------



## Veii

Henrik9979 said:


> I could try, do you think it will flash with AMDVBFlash or do I need to flash it with Command prompt?


You can flash it via vbflash 4.69 on Linux.
But you've got an SPI flasher.

I know it works on 6800XTs, and could on the 6900XTX.
But I need a test.
Cards have a TPM module on them, so a CMOS reset (after) and a driver wipe (before) are recommended.

If it flashes, it can need a 3rd init (give it time), as it has to load and move internal patches.
If it reboots by itself after 25sec a couple of times (same delay) ~ then it's the USB OC issue.
This edit should pass, but please check.


----------



## Henrik9979

Okay, I will try. It's just a bit of a hassle because my Linux drive got corrupted and doesn't get recognised, and using the SPI flasher requires the card to be taken apart; I had just reassembled it and mounted it in my case with water cooling when you started writing to me. 😅😆 Luckily I use quick-connect tubes, so it's less of a pain. 😁 But I will try. Just give me a moment to take it out.

So you suggest also uninstalling the drivers before doing it?


----------



## Henrik9979




----------



## Veii

Henrik9979 said:


> So you suggest also to uninstall the drivers before doing it?


Fully. I know AMD supplies patches - also in AGESA -
but the drivers are the first that need to be gone.
Chipset drivers too.
GitHub - lostindark/DriverStoreExplorer: Driver Store Explorer [RAPR] - use this to wipe anything AMD related.
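The same driver-store wipe can also be approximated from an elevated prompt with Windows' built-in pnputil. A sketch only: the `oemNN.inf` package names differ on every machine, so enumerate first and pick out the AMD entries yourself (oem42.inf below is a placeholder).

```shell
# Dry-run sketch of wiping AMD driver packages with pnputil (Windows).
# DRY_RUN=1 (default) only prints the commands; oem42.inf is a placeholder
# name -- get the real ones from the /enum-drivers listing first.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run pnputil /enum-drivers                               # list staged driver packages
run pnputil /delete-driver oem42.inf /uninstall /force  # remove one AMD entry
```

This drops the package from the driver store the same way DriverStoreExplorer does, just without the GUI filtering that makes the AMD entries easy to spot.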


----------



## Henrik9979

So DDU is not enough?


----------



## Veii

Henrik9979 said:


> So DDU is not enough?


Mhm 
DDU plus this on top.
Optimally even staying offline.

AMD BIOSes, especially on Gigabyte's side ~ fully prevent flashes.
And normal drivers inject a "healthcheck module" into UEFI 
It's a bit annoying ~ but generally get all AMD drivers gone (the PCI bus filter, for example),
and do a CMOS reset to make sure the TPM gets cleared.

----------



## Henrik9979

So all this? I have already used DDU.

Should I enable force delete?


----------



## Veii

Henrik9979 said:


> So all this? I have already used DDU.


Pretty much, "force deletion and then delete"


----------



## Henrik9979

Veii said:


> Pretty much, "force deletion and then delete"


Okay done. I will disable the card and try one more time.


----------



## Veii

Henrik9979 said:


> Okay done. I will disable the card and try one more time.


With 071 mod


----------



## Henrik9979

It does not seem to work. Power is on, but there is no picture, and the VGA debug light on the motherboard is constant red.


----------



## Veii

Henrik9979 said:


> It does not seem to work. Power is on but no picture and VGA debug light on the motherboard is constant red.


CMOS reset ?


----------



## Henrik9979

Veii said:


> CMOS reset ?


Yes, nothing happens. I have tried turning it off and on a couple of times and let it sit for about a minute.


----------



## Veii

I probably should try to make something for the 320-TMU 73BF then.
Your cards appear locked.
hmm
But only your cards.
6800XT 73BFs are fine.
6700XTs are fine; 6900XTXHs are fine.

Kinda annoying.
Technically you could rebrand to a 288-TMU card (6800XT), and OC that one higher.
But I don't know ~ you'd also disable RT cores.

The only "annoyance" is the memory lock at 2150 on those.
6750s I haven't got to run yet.

EDIT:
Want one last try?


----------



## Henrik9979

Veii said:


> I probably should try to make something for 320TMU's 73BF then
> your cards appear locked
> hmm
> but only your cards
> 6800XT's 73BF are fine
> 6700XT's are fine, 6900XTXH's are fine
> 
> Kinda annoying
> Technically you could rebrand to a 288 TMU card (6800XT), and OC that one higher
> But i don't know ~ you also disable RT cores
> 
> The only "annoyance" is the memory lock at 2150 on those
> 6750's i haven't got to run yet
> 
> EDIT:
> Want one last try ?


Hmm, if it's impossible to do any BIOS modding, then I must just accept it. At the end of the day, I can still use MPT to some extent.


----------



## Veii

Henrik9979 said:


> Hmm, if it's impossible to do any BIOS modding, then I must just accept it. At the end of the day, I can still use MPT to some extent.


Can you also tell me how exactly you updated, and with what?
When flashing, one should take care of the EEPROM lock.


----------



## Henrik9979

Veii said:


> Can you also tell me how exactly you updated, with what ?
> On flashing one should take care of EEPROM lock


Yes, give me a sec and I'll take some pictures.


----------



## Henrik9979

Okay, I tried with a different program this time. Now it's connected; it detects the chip and it can also read the data.


----------



## Henrik9979




----------



## Henrik9979

I don't know if this changes anything.


----------



## Veii

Henrik9979 said:


> 
> I don't know if this changes anything.


Did you ever clear the EEPROM lock on your write attempts?
Because part of it is secured and locked,
and once you flash it with 4.72 on Linux, or 3.31 on Windows - it gets locked and expects a locked state.


----------



## Henrik9979

Veii said:


> Did you ever clear the EEPROM lock on your write attempts?
> Because part of it is secured and locked,
> and once you flash it with 4.72 on Linux, or 3.31 on Windows - it gets locked and expects a locked state.


No, because I just realized the first program I was using didn't have an unprotect option; it didn't say protected either. But NeoProgrammer has an unprotect button.
I will try that this time.


----------



## Veii

Henrik9979 said:


> No, because I just realized the first program I was using didn't have an unprotect option; it didn't say protected either. But NeoProgrammer has an unprotect button.
> I will try that this time.


I think the issue is different - an actual card lock.
Those KXTX spoofs should pass 
but it's something more.

Generally Navi behaves on a whitelist method.
You can downgrade "and make it worse", but you never upgrade.
A 6950XT can easily become a 6900XT, 6800XT, even a 6700XT.
All those rebrands push through 
but a 6900XT cannot become a 6950 so far.


----------



## Henrik9979

And it's done, and there are no errors.


----------



## Henrik9979

Still didn't work. My last option would be to get my Linux drive to work and flash it with that. But I'm starting to doubt that will work.


----------



## Henrik9979

Okay, I flashed it back to a working state and then tried with Linux on my mining rig, because it uses integrated graphics.
I flashed it using the force function and it flashed successfully. I rebooted the mining rig and it didn't brick the system; there was indeed a picture from the integrated graphics. I shut it off, moved the GPU to my gaming rig, and got no picture. I suspect the system is actually booting up, but the GPU won't send an image.
I have run out of ideas and options.
It seems like the Trio X is just locked.


----------



## Veii

Henrik9979 said:


> I shut it off, moved the GPU to my gaming rig, and got no picture. I suspect the system is actually booting up, but the GPU won't send an image.
> I have run out of ideas and options.


Have you tried swapping IO?
There is an IO issue

But on the (fake) "USB overcurrent" warning
it will stay powered on, and shut down automatically after 25 sec
Otherwise it's just the IO not being flashed - because an update via SPI is not a full update


----------



## Henrik9979

There has not been a single time the system has shut itself off, so I guess the USB overcurrent issue is not present.
I haven't tried switching IO though.


----------



## Henrik9979

Okay, I'll try flashing your BIOS one last time using Linux and test the IO.


----------



## Henrik9979

Okay, I give up. I have tried all 3 DisplayPorts. The only port I didn't try was the HDMI port, because I have 2 screens and a VR headset, so if the HDMI port were the only functional port I wouldn't use that BIOS anyway.


----------



## Veii

Henrik9979 said:


> Okay, I give up. I have tried all 3 DisplayPorts. The only port I didn't try was the HDMI port, because I have 2 screens and a VR headset, so if the HDMI port were the only functional port I wouldn't use that BIOS anyway.


Mmm i dont think its that


Henrik9979 said:


> There has not been a single time the system has shut itself off, so I guess the USB overcurrent issue is not present.
> I haven't tried switching IO though.


It has to be IO + something else // mainboard lock?
Overcurrent issue will shut down, guaranteed, after time

Well, you "could" try an MSI 6950 (a mod rather)
via Linux again
-unlockrom 0
then -newbios -f

If that boots, we know what's up
If that doesn't boot - it's not the bios and not the PCB of the card or IO
Then it's an external card-specific (model) lock
because we can't get the LC bios to run on the Ref 6900XT BF73 edition either
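For anyone following along, the overall flow Veii is describing (back up, clear the ROM lock, then force-flash) can be sketched with AMD's amdvbflash CLI. This is only an illustrative sketch, not Veii's exact command line: the adapter index and the filenames are assumptions, and a bad flash can brick a card, so the script below defaults to a dry run that just prints each step.

```shell
#!/bin/sh
# Sketch of the back-up / unlock / force-flash sequence with amdvbflash.
# DRY_RUN=1 (the default here) only prints each command; set DRY_RUN=0
# on a real system, as root, after double-checking the adapter index.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"          # show what would be executed
    else
        "$@"                 # actually run the flash tool
    fi
}

run amdvbflash -i                    # list adapters, confirm the index (0 here)
run amdvbflash -s 0 original.rom     # ALWAYS back up the current VBIOS first
run amdvbflash -unlockrom 0          # clear the ROM write protection
run amdvbflash -p 0 newbios.rom -f   # force-program the new image
```

Keeping the backup from `-s` is what lets Henrik recover to a working state between attempts, as he does a few posts up.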


----------



## Henrik9979

Okay it worked!


----------



## Henrik9979

And now I'm stuck at ultra low power state.


----------



## Henrik9979

Okay, the Adrenalin software can be fixed with MPT by setting the values back to their originals and then rebooting. But then I'm just back to where I started.

So even though the BIOS did work, it gained nothing.


----------



## Veii

Henrik9979 said:


> View attachment 2582380


x8 only?

I would not recommend that other people flash these, because they can easily corrupt their cards
The EEPROM on those cards is not fully used
If you internally cause a defect and load a patch that doesn't belong there,
no EEPROM reflash will help you~
So I'm taking them down


Henrik9979 said:


> So even though the bios did work it gained nothing.


A bios is a layer of several patches and changes

IO / EDID compatibility
Memory timing straps
Voltage curvature
Sensorics

Pretty much every SKU is unique and VBIOS "age" differs with their patches
No it's not the same, it's not even close to the same ~ but it's blacklisted on 3dmark
Works on everything else, but again MPT fix needs to be done as voltage lock is fused into the chip
No Bios will change that. Sadly


----------



## Henrik9979

Veii said:


> X8 only ?
> 
> I would not recommend for other people to flash them , because they can easily corrupt their cards
> EEPROM on those cards is not fully used
> If internally you cause a defect and load a patch that doesnt belong there
> no EEPROM reflash will help you~
> Soo i take them down
> 
> A bios is a layer of several patches and changes
> 
> IO / EDID compatibility
> Memory timing straps
> Voltage curvature
> Sensorics
> 
> Pretty much every SKU is unique and VBIOS "age" differs with their patches
> No it's not the same, it's not even close to the same ~ but it's blacklisted on 3dmark
> Works on everything else, but again MPT fix needs to be done as voltage lock is fused into the chip
> No Bios will change that. Sadly


Oh, I didn't notice the x8; it used to be x16.

But to wrap things up: what has changed in this BIOS, except that all the clock values are the same as a 6950 XT's and I have to revert them with MPT?
I noticed, though, that lowering the memory clock below 2150 MHz will cause the system to crash.

I also tried installing the AmerNime driver, because it registers the card as a 6950 XT, but it didn't change the lock in the Adrenalin software.

Oh, and Resizable BAR doesn't work either.


----------



## Veii

Henrik9979 said:


> I noticed, though, that lowering the memory clock below 2150 MHz will cause the system to crash.
> 
> I also tried installing the AmerNime driver, because it registers the card as a 6950 XT, but it didn't change the lock in the Adrenalin software.


True, it will
Just increase memory voltage like suggested  They use 18gbps binned chips
Similar, but ours need a bit more voltage, else not really different in their max speeds either

That is a name spoof and featureset spoof, but there is no way to fix that
You can edit








To have "Fast Timings 2" run , with it
Curve is harsher on KXTX too
But i think that's all so far 

If you still have mainboard-bios access, spoof worked
And that means, memory limit lifted
But voltage limit is unbreakable, sadly. Although the EVC is a nice device


----------



## Henrik9979

Hmm but what can I do about the 8x speed?


----------



## Veii

Henrik9979 said:


> Hmm but what can I do about the 8x speed?


It's your BIOS settings,
not the ROM file

Or the way/location some NVMe is plugged in
It's a user error


----------



## Henrik9979

Veii said:


> Its your bios settings
> Not the ROM file











Are you sure? Because Resizable BAR doesn't work either.


----------



## Henrik9979

Also, I keep getting this message saying: "Your hardware settings have been changed. You need to restart to enable these changes."
That message used to come up only once, when enabling Resizable BAR in the BIOS, and after a reboot it would be activated in Adrenalin. Now the message pops up every time I boot the computer, and Adrenalin keeps saying not supported.


----------



## Veii

Henrik9979 said:


> That message used to come up only once, when enabling Resizable BAR in the BIOS, and after a reboot it would be activated in Adrenalin. Now the message pops up every time I boot the computer, and Adrenalin keeps saying not supported.


That's a NimeZ issue, I think with the USB filtering driver
Can you show me the graphics tab again?





I installed it with this tutorial:
https://sourceforge.net/projects/am....3-PVN-MDL-WHQL-Nemesis-NimeZ-DCH.7z/download 
But for AMD HSA you need to enable IOMMU in the BIOS, plus PCI ARI/AER support


----------



## Henrik9979

Veii said:


> That's a nimez issue, i think with usb filtering driver
> can you show me the graphics tab again
> 
> 
> 
> 
> 
> I installed it with this tutorial
> https://sourceforge.net/projects/amernimezone/files/Release Polaris-Vega-Navi/V7-22.10.3-PVN-MDL-WHQL-Nemesis-NimeZ-DCH.7z/download
> But for AMD HSA; you need to enable IOMMU in the bios and PCI ARI/AER support



Okay, if it's a NimeZ issue, shouldn't I just go back to the original driver and save the hassle of fixing it? I mean, do I gain anything from using the NimeZ driver on a 6900 XT?


----------



## Henrik9979

Enabled IOMMU, but I can't find anything called AER or ARI


----------



## Veii

Henrik9979 said:


> Okay, if it's a NimeZ issue, shouldn't I just go back to the original driver and save the hassle of fixing it? I mean, do I gain anything from using the NimeZ driver on a 6900 XT?


Sorry, I don't understand both of those questions
"it does nothing" ~ bios
"does it do anything" ~ custom drivers

They exist for a reason
The NimeZ release has builds for Polaris, Navi and Vega,
not only GCN and lower

Try, play around
People have been playing with RDNA2 since the end of 2020
And still there are things to figure out
Try and see what you can figure out.

The "tutorial" from another user there seems to be helpful - so I linked it
The last 2-3 pages also mention how to resolve this popup issue.
I don't remember where it is, try to find it ~ it's a NimeZ installation issue.
I don't have it. I just give advice, I won't take you by the hand. Try to figure it out 😁


Henrik9979 said:


> Enabled IOMMU, but I can't find anything called AER or ARI


In AMD CBS, if it's an AMD system
For Intel, sorry, I don't know


----------



## Veii

Henrik9979 said:


> View attachment 2582518


So, does the driver still lock above 2150 mem now?


----------



## Henrik9979

Veii said:


> Sorry, i dont understand those both questions
> "it does nothing" ~ bios
> "does it anything" ~ custom drivers
> 
> They exist for a reason
> Nimez release has for polaris , navi and vega
> not only GCN and lower
> 
> Try , play around
> People play with RDNA2 since the end of 2020
> And still there are things to figure out
> Try and see what you can figure out.
> 
> The "tutorial" from another user there, seems to be helpful - soo i linked it
> The last 2-3 pages also mention how to resolve this issue.
> i don't remember where it is, try to find it ~ it's a nimez installation issue.
> I don't have it. Just give an advice, not take you by hand. Try to figure it out 😁
> 
> In AMD CBS if AMD system
> For intel sorry i dont know


Yeah, I get it, but remember I just flashed a different vBIOS after a lot of hassle.
So now I have to figure out whether I have issues with the drivers, the vBIOS, Windows or my motherboard.

For now I'm aiming for an almost-stock experience with the modded vBIOS to make sure everything is working. Then I can start to play around.

I don't know what changes the vBIOS has or how many things it affects. You are way more experienced than I am.


----------



## Veii

Henrik9979 said:


> Yeah, I get it, but remember I just flashed a different vBIOS after a lot of hassle.
> So now I have to figure out whether I have issues with the drivers, the vBIOS, Windows or my motherboard.
> 
> For now I'm aiming for an almost-stock experience with the modded vBIOS to make sure everything is working. Then I can start to play around.


Alright
You only need to fix a couple of things, like voltage, power limit and likely memory voltage
^ to the old card's limits
That's all
The rest runs native

Cards have a whitelist on them
Either it accepts it and the patches inject ~ or it does not
Most of the time users flash a random card's bios, and then fail even the first init stage

At very best they get into Windows, because it had early drivers
but once you remove those - they hardlock again
The user has to actually spoof the on-boot check, as EFI "health-check" modules exist
And if they don't, they do now - after installing the drivers (which inject into UEFI)

The bios is fine
It's official, and just changed to pass the on-boot check
It's not fully "custom"
I can't generate the signature, so I can not do too much
But new memory timings are in there and the curvature is a bit different.
Could be unstable, but you can fix that with MPT ~ eh, just let me or the users here know.
I bet many here have used MPT up to today

----------



## Henrik9979

Okay, here is the to-do list:
Get Resizable BAR working - check (I reinstalled the official AMD software)

Get the card running stable - check

Get the card running at x16 speed - unchecked
(I'm searching and searching but can't find anything helpful. The only PCIe device connected is the GPU; I don't even have an M.2 drive)

Learn to use Hydra correctly - unchecked 
(What started the whole thing 😐)

----------



## Veii

Henrik9979 said:


> Get the card running at 16x speed - unchecked
> (I'm searching and searching but can't find anything helpful. The only pci device connected is the GPU, I don't even have a m.2 drive)


Sometimes bioses are dumb too
Try to force 4.0 x16 mode 



Henrik9979 said:


> Learn to use Hydra correctly - unchecked
> (What started the whole thing 😐)


Just start Hydra and let it back up the BIOS
It will automatically create a new MPT config;
there is a "reset all values" button

Make any little change in Hydra, like fixing the power limit and voltage limit (first thing)
Then hit the thunderbolt icon to apply - after that you can use MPT to make edits
After the edits you have to restart the Hydra app to see the changes; it sadly doesn't update automatically.
That's all


----------



## Henrik9979

Veii said:


> Sometimes bioses are dumb too
> Try to force 4.0 x16 mode
> 
> 
> Just start hydra, let it backup the bios
> It will automatically create a new MPT config
> there is a "reset all values" button
> 
> Make any little change in hydra, like fixing the powerlimit and voltage limit (first thing)
> Then the thunderbolt icon to apply - and after that you can use MPT to make edits
> After the edits you have to restart hydra app, to see the changes. It sadly doesnt update automatically.
> That's all


I'm on a B450 motherboard, so 4.0 is not an option.
It should support 3.0 x16 though 🤔

The motherboard is an MSI B450M Mortar MAX


----------



## Veii

Henrik9979 said:


> View attachment 2582529


In AMD PBS
If you have the menu

Or onboard device menu
It's not in this menu


----------



## Henrik9979

Veii said:


> In AMD PBS
> If you have the menu
> 
> Or onboard device menu
> It's not in this menu


No I only have AMD CBS


----------



## Veii

Sorry, can't help you with board-BIOS bugs
You may try to find a BIOS mod for your board, or contact MSI for a custom BIOS,
or change the BIOS too


----------



## Henrik9979

Henrik9979 said:


> No I only have AMD CBS


















Any idea where it's supposed to be? Because I don't seem to find anything.


----------



## Veii

Henrik9979 said:


> View attachment 2582537
> 
> View attachment 2582538
> 
> Any idea where it's supposed to be? Because I don't seem to find anything.


How old is that BIOS?
I cannot find anything compiled in March on their site // nor on the 3rd of X





B450 MORTAR MAX | Motherboard | MSI Global


Best AMD AM4 B450 ATX motherboard, Turbo M.2, Extended heatsink, USB 3.2 Gen 2, Mystic Light, MSI MAG




www.msi.com


----------



## Henrik9979

Veii said:


> How old is that bios
> i can not find anything compiled on March on their site // nor on the 3rd of X
> 
> 
> 
> 
> 
> B450 MORTAR MAX | Motherboard | MSI Global
> 
> 
> Best AMD AM4 B450 ATX motherboard, Turbo M.2, Extended heatsink, USB 3.2 Gen 2, Mystic Light, MSI MAG
> 
> 
> 
> 
> www.msi.com










I see they have an update let me try that and see if anything changes.


----------



## Veii

https://download.msi.com/bos_exe/mb/7B89v2H.zip - maybe it won't help, but here
1206C is buggy


----------



## Henrik9979

Veii said:


> https://download.msi.com/bos_exe/mb/7B89v2H.zip maybe wont help but here
> 1206C is buggy
> View attachment 2582541


Done updating, and it didn't do jack. And there are no new options.

I guess the PCIe port could have taken damage.


----------



## Acegr

I'm trying here... the card fails every time I go 2800 max and above with a power limit of 800 W (in Time Spy; Fire Strike is OK)... it pulls 550 W but something holds it back. Maybe my NimeZ drivers? Are there any better drivers?


----------



## deadfelllow

Acegr said:


> I'm trying here... the card fails every time I go 2800 max and above with a power limit of 800 W (in Time Spy; Fire Strike is OK)... it pulls 550 W but something holds it back. Maybe my NimeZ drivers? Are there any better drivers?
> 
> View attachment 2582765
> 
> View attachment 2582766
> 
> View attachment 2582764
> 
> View attachment 2582763
> 


Did you unlock the voltage via MPT? If your voltage is at default, you're probably voltage limited.


----------



## Acegr

deadfelllow said:


> Did you unlock the voltage via MPT? If your voltage is default probably you're voltage limited.


 What is wrong can you tell me?


----------



## deadfelllow

Acegr said:


> What is wrong can you tell me?
> View attachment 2582814


For me, you don't need 1.3 V SOC; leave it on auto. Try this and see if it helps. 








Other than that, maybe you have a bad chip.

I was able to do 25749-ish scores with 1.25 V at 2760-2880 (LC BIOS).

560 W PPT, 500 TDC, 1.25 V core, 1.2 V SOC 









I scored 20 036 in Time Spy


AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## Acegr

deadfelllow said:


> For me you dont need 1.3 SOC V leave it on auto. Try this if it helps.
> View attachment 2582815
> 
> Other than that maybe you have a bad chip.
> 
> I was able to do 25749-ish scores with 1.25V 2760-2880.( LC BIOS )
> 
> 560WPPT 500W TDC 1.25Vcore, 1,2Soc
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 20 036 in Time Spy
> 
> 
> AMD Ryzen 5 5600X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> View attachment 2582827


----------



## deadfelllow

Acegr said:


> View attachment 2582828


So did it work?


----------



## Acegr

deadfelllow said:


> So did it work?


It won't allow me to go above 2780 max; it will crash no matter the volts. So no, that part didn't work, but my scores got better by 400-500. I got 1st place for 12700K + 6900 XT


----------



## energie80

im actually running 2800/2900 with a reference 6950xt on water


----------



## deadfelllow

Acegr said:


> It won't allow me to go above 2780 max; it will crash no matter the volts. So no, that part didn't work, but my scores got better by 400-500. I got 1st place for 12700K + 6900 XT
> 
> View attachment 2582829


Me too  But I lack CPU score a lot. I got the highest graphics score.


----------



## Acegr

1 more win on timespy extreme and 2nd on firestrike


----------



## lestatdk

New personal best GPU score.










Nordskov convinced me to try and push it a little bit ,so far working well.










Still struggling to get it to lower the voltage when below TVmin, so for now I just set low = high. Average temp in the run is 45°C, so there's still plenty of room; I might try to push it a bit further.

MSI Gaming X 6900XT with a modified Alphacool waterblock (since they apparently decided this model shouldn't have one)


----------



## tsamolotoff

Is there a rule of thumb for SOC voltage? I've been timid with it (set it at 1100/1150 mV), but maybe if I gave it more volts FCLK would go above 2066 (instant reboot if I set it to 2100 or higher)


----------



## timd78

For an XFX SWFT 319 6900 XT (2x 8-pin), what is the most vcore I can set inside MorePowerTool before I trip a limit and get locked down?

I'm assessing putting the card under water. It runs quite well. I can see 2750 in games (of course not always) with 2800 set @ 350 watts, and it seems happy with vcore undervolted to 1075 in Wattman. So far I've just bumped the power and current limits in MPT with a write to the SPPT.

Memory seems happy around 2100, although I was wondering: do you guys have a foolproof tool for detecting ECC errors, so I can fine-tune the memory or get an alert, etc.?


----------



## tsamolotoff

timd78 said:


> Mem seems happy around 2100 although i was wondering. Do you guys have a foolproof tool for me to detect ECC errors so i can fine tune the memory or get an alert etc?


Just do a few SOTTR benchmark runs at 1440p or 4K (depending on your CPU capabilities) - if the score tanks after you increase the mem clock, then it's too much. For me, there's like a 5 fps difference between 2125 and 2150 (in favour of 2125), which means the memory clocks are unstable and error correction kicks in
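Since GDDR6 error correction retries silently instead of reporting errors, this score regression is the only visible symptom. A tiny helper (hypothetical, just to formalize the test above) can flag the first memory clock whose benchmark score drops versus the step below it:

```python
def edc_throttle_point(scores: dict[int, float], tolerance: float = 0.01):
    """Given {mem_clock_mhz: benchmark_score}, return the first clock whose
    score falls more than `tolerance` (1% default) below the next-lower
    clock's score - the point where error correction likely eats the gains.
    Returns None if the scores scale cleanly with clock."""
    clocks = sorted(scores)
    for lo, hi in zip(clocks, clocks[1:]):
        if scores[hi] < scores[lo] * (1 - tolerance):
            return hi
    return None
```

With numbers like tsamolotoff's, `edc_throttle_point({2100: 150.0, 2125: 152.0, 2150: 147.0})` returns 2150, i.e. 2125 is the last step that's actually faster.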


----------



## Veii

tsamolotoff said:


> so it means memory clocks are unstable and error correction kicks in








AMD - RED BIOS EDITOR und MorePowerTool - BIOS-Einträge anpassen, optimieren und noch stabiler übertakten | Navi unlimited


Ich mit der 6900 Ref kann kein FT2 einstellen, auch nicht mit weiteren VRAM-Settings @stock. Das kenne ich auch so von fast allen XTX- und XT-Karten, wie von GerryB beschrieben. Hmm, interesannt Ok danke :) Ich hatte das gemerkt, aber es kann genau so gut sein dass BF 73 komisch sind & ich es...




www.igorslab.de


----------



## tsamolotoff

Veii said:


> AMD - RED BIOS EDITOR und MorePowerTool - BIOS-Einträge anpassen, optimieren und noch stabiler übertakten | Navi unlimited
> 
> 
> Ich mit der 6900 Ref kann kein FT2 einstellen, auch nicht mit weiteren VRAM-Settings @stock. Das kenne ich auch so von fast allen XTX- und XT-Karten, wie von GerryB beschrieben. Hmm, interesannt Ok danke :) Ich hatte das gemerkt, aber es kann genau so gut sein dass BF 73 komisch sind & ich es...
> 
> 
> 
> 
> www.igorslab.de


I have an XTX card (of the BF variety; FCLK does not go above 2066 at 1.175 V SOC). Not sure how that will help unless there's a method of unlocking the VRAM clocks (preferably a safe one, as my PG card does not have dual BIOS and I can't really remove it from the loop and plug another GPU into the secondary PCIe slot)


----------



## alceryes

Hey all. Question for the learn-ed.
Is there a huge difference, performance-wise, between the OC Formula 6900 XT and OC Formula 6950 XT?


----------



## Veii

alceryes said:


> Hey all. Question for the learn-ed.
> Is there a huge difference, performance-wise, between the OC Formula 6900 XT and OC Formula 6950 XT?


6900 List
XTX: [73 BF]
~ 2150 MHz memory limit // no bios unlock
~ 1150 mV VMAX SOC limit
~ 1175 mV VMAX GFX limit (around 2650 cap)

XTXH: [73 AF]
~ 2150 MHz memory limit unless bios unlocked
~ 1150 mV VMAX SOC limit
~ 1200 mV VMAX GFX limit (around 2800 cap)
~ Slower timings than KXTX

KXTX (6950XT) [73 A5]
~ no memory limit till around 2500
~ no voltage limit till 1300 mV
~ SOC generally higher = higher than 2200 FCLK ("potentially")
~ better bios and newer curve
~ $100-150 more expensive across the 6x50 lineup

EDIT:
All cards are by default 1200 W fuse capped (PPT1)
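For quick comparison, the limits in the list above can be condensed into a small lookup table. The dictionary layout and the helper below are purely illustrative (values copied from the post; `None` marks a limit the post doesn't pin down):

```python
# Navi 21 SKU limits, as summarized in the post above (illustrative only).
NAVI21_SKUS = {
    "73BF": {"sku": "XTX (6900 XT)", "mem_max_mhz": 2150,
             "vmax_soc_mv": 1150, "vmax_gfx_mv": 1175, "gfx_cap_mhz": 2650},
    "73AF": {"sku": "XTXH (6900 XT)", "mem_max_mhz": 2150,
             "vmax_soc_mv": 1150, "vmax_gfx_mv": 1200, "gfx_cap_mhz": 2800},
    "73A5": {"sku": "KXTX (6950 XT)", "mem_max_mhz": 2500,
             "vmax_soc_mv": None, "vmax_gfx_mv": 1300, "gfx_cap_mhz": None},
}

def gfx_headroom_mv(device_id: str, current_mv: int) -> int:
    """mV left before the fused GFX voltage cap for a given device ID."""
    return NAVI21_SKUS[device_id]["vmax_gfx_mv"] - current_mv
```

For example, `gfx_headroom_mv("73BF", 1150)` reports 25 mV of headroom on a plain XTX, versus 150 mV on a KXTX at the same voltage.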


----------



## kairi_zeroblade

If you're saying the XTX is up to 2650 MHz in core clock, why does mine clock 2770 MHz core??


----------



## timd78

Is it possible to set the Wattman voltage offset inside MPT, so I can flash my fully optimised settings to the BIOS?

I just want an easy way to carry my settings to Linux / other OSes


----------



## deadfelllow

kairi_zeroblade said:


> If you're saying the XTX is upto 2650mhz in core clock why does mine clock 2770mhz core??


Can you do Timespy Graphic test without crashing @2770Mhz core?


----------



## kairi_zeroblade

deadfelllow said:


> Can you do Timespy Graphic test without crashing @2770Mhz core?


Yes..coz its under water..


----------



## 99belle99

kairi_zeroblade said:


> Yes..coz its under water..


You just won the silicon lottery. That is not normal for an XTX, even one under water.


----------



## tsamolotoff

Not sure what kind of voltages *kairi_zeroblade *uses, but my 6900xt (73BF) can do around 2780 mhz in timespy at 1.24v (and it droops to 1.17v under load) and can run stable in all games that I've played recently (SoTTR, Quake II RTX, Uncharted 4, Path of Exile, Control with RTX enabled) at 2890 max clock (which translates to 2820-2850 in-game clock).


----------



## lestatdk

99belle99 said:


> You just got the silicon lottery. That is not normal for a XTX even one under water.


These are my daily driver settings , also under water 










Voltage is 1.24


----------



## lestatdk

FYI if those clocks are average then I'm a bit lower I guess. In TS it's around 2700 average


----------



## pajdek

Recently I bought a used Sapphire 6900 XT Nitro+ SE. It runs well at stock; even some minor OC is OK in games, 3DMark, FurMark - no visible artifacts or crashes. But yesterday I ran the first test of the MSI Kombustor artifact scanner and got some artifacts at stock settings; even lowering the core frequency shows artifacts. Is Kombustor reliable, or is my card dying?


----------



## kairi_zeroblade

99belle99 said:


> You just got the silicon lottery. That is not normal for a XTX even one under water.


Thanks my luck exactly.



tsamolotoff said:


> Not sure what kind of voltages *kairi_zeroblade *uses, but my 6900xt (73BF) can do around 2780 mhz in timespy at 1.24v (and it droops to 1.17v under load) and can run stable in all games that I've played recently (SoTTR, Quake II RTX, Uncharted 4, Path of Exile, Control with RTX enabled) at 2890 max clock (which translates to 2820-2850 in-game clock).


Actually, I am undervolted to 1.150 V, but it droops to around 1.122-1.06 V; load temps are just 43°C with 2x 360 mm + a 480 mm (thicc) rad cooling it. I just slapped it back in yesterday, as I'm finding my RTX 3090 performs like crap even under a waterblock. So it's really "fanbois" paying those horrid prices for much less performance.


----------



## alceryes

Veii said:


> 6900 List
> XTX: [73 BF]
> ~ 2150MHz memory limited // no bios unlock
> ~ 1150 VMAX SOC limited
> ~ 1175 VMAX GFX limited (around 2650 cap)
> 
> XTXH: [73 AF]
> ~ 2150MHz memory limited unless bios unlock
> ~ 1150 VMAX SOC limited
> ~ 1200 VMAX GFX limited (around 2800 cap)
> ~ Slower timings than KXTX
> 
> KXTX (6950XT) [73 A5]
> ~ no memory limited till around 2500
> ~ no Voltage limited till 1300mV
> ~ SOC generally higher = higher than 2200 FCLK ("potentially")
> ~ better bios and newer curve
> ~ 100/150$ more expensive for every 6x50 lineup
> 
> EDIT:
> All cards are by default 1200W Fuse capped (PPT1)


Thanks.
I guess I should be happy that my slightly OCd reference 6900 XT gets right around the stock published numbers for the 6950 XT.


----------



## PJVol

timd78 said:


> Is it possible to set the Wattman Voltage offset setting inside MPT so i can flash my full optimised settings to bios?
> 
> I just want an easy way to carry my settings to Linux / other OS's


VDDGFX is inside the "Overdrive" table, not the "PowerPlay" one.
The easiest way to set it is via the sysfs interface of the amdgpu kernel driver.
It's official, and way more powerful than what Windows users have.
For the overdrive settings you need
/sys/class/drm/card0/device/pp_od_clk_voltage
and to read/write the whole PowerPlay table,
/sys/class/drm/card0/device/pp_table

Don't forget to enable overdrive by setting a proper amdgpu.ppfeaturemask boot parameter, which should be the current </sys/module/amdgpu/parameters/ppfeaturemask> value OR'ed with 0x4000 (press "E" in the GRUB menu)

More details here:





AMDGPU - ArchWiki







wiki.archlinux.org






GPU Power/Thermal Controls and Monitoring — The Linux Kernel documentation



PS: There's also a simple utility from @Veii's friend








amdgpu-clocks/amdgpu-clocks at master · sibradzic/amdgpu-clocks


Simple script to control power states of amdgpu driven GPUs - amdgpu-clocks/amdgpu-clocks at master · sibradzic/amdgpu-clocks




github.com
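Putting PJVol's pointers together, a session might look like the sketch below. The clock and offset values are made up for illustration, and note that sysfs MCLK is the memory-controller clock, so Adrenalin's 2150 MHz shows up as 1075 MHz here; the `vo` voltage-offset command is the RDNA2 form documented in the kernel docs linked above. DRY_RUN=1 (the default) only prints the writes.

```shell
#!/bin/sh
# Sketch of overdrive via the amdgpu sysfs interface (RDNA2 cards).
# Requires the overdrive bit (0x4000) OR'ed into amdgpu.ppfeaturemask
# on the kernel command line. DRY_RUN=1 (default) prints instead of writing.
DRY_RUN=${DRY_RUN:-1}
OD=/sys/class/drm/card0/device/pp_od_clk_voltage

put() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "echo '$1' > $OD"   # show the write that would happen
    else
        echo "$1" > "$OD"        # real write (needs root)
    fi
}

put "s 1 2600"   # max GFX clock, MHz
put "m 1 1075"   # max MCLK, MHz (= 2150 in Adrenalin terms)
put "vo -25"     # GFX voltage offset in mV (RDNA2 syntax)
put "c"          # commit the new table
```

Reading `pp_od_clk_voltage` afterwards shows the accepted ranges and the table as the driver applied it, which is a good sanity check before committing.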


----------



## xG O O Nx

At this point, would it be worth flashing to the Radeon LC BIOS?
First run in FSE. Driver waiting to get approved.
22,276 Time Spy best.
#2 in Fire Strike.
5800X / 6900 XT XFX Speedster Zero / liquid metal / changed thermal pads.
Card's beastly. I've pushed the card to 2973 MHz in Fire Strike.
xG O O Nx


----------



## xG O O Nx

Here's my #2 score in Fire Strike.
Kinda glad I found some guys that are in the top 10. Obviously run with MPT. This run was 2950 min - 3050 max, 2180 mem
750 W power plan
850 W PPT limit
1.350 V core
1.250 V SOC


----------



## lestatdk

1.35V ? Damn


----------



## xG O O Nx

lestatdk said:


> 1.35V ? Damn


Completely water-cooled. This also allows me to clock past 2900mhz


----------



## lestatdk

A friend of mine keeps saying it's not worth it to OC. Here's a comparison of my current settings vs. back when I ran it completely stock. Only a small, almost 28% increase. Totally not worth it


----------



## xG O O Nx

lestatdk said:


> Friend of mine keep saying it's not worth it to OC. Here's a comparison of my current settings vs back when I ran it completely stock. Only a small 28% increase almost. Totally not worth it
> 
> View attachment 2583589


Always worth it!! Gains are always great


----------



## xG O O Nx

My best timespy so far, once drivers are approved.


----------



## lestatdk

xG O O Nx said:


> Always worth it!! Gains are always great


Yeah that's what I keep telling him. But he's a stubborn fool and won't accept the fact that it's free performance


----------



## alceryes

Agreed.
My very first Time Spy run, with my brand new 6900 XT, got me 19.3K graphics score.
With an undervolt and a little overclock I'm now getting 22K, and thanks to the PCB back thermal pad mod my card stays cooler than it was with the 19.3K score.

I'll take 10% greater performance AND cooler running any day of the week!


----------



## Henrik9979

Veii said:


> https://download.msi.com/bos_exe/mb/7B89v2H.zip maybe wont help but here
> 1206C is buggy
> View attachment 2582541


Okay, I spoke with MSI and it seems like my motherboard is faulty. I will get a test CPU to be sure it is the motherboard. Anyway.

You know the whole thing started with me using Hydra wrong. How do I use it right along with MorePowerTool?

I wanted to use the auto-overclocking function, but unless I misunderstand something it can only be done under the diagnostic tab, which doesn't allow a modified PowerPlay table, therefore resetting the card back to stock.

I wanted to see what values it would recommend as stable with 1.287 V as max, but it can't be done, it seems.


----------



## Henrik9979

lestatdk said:


> Yeah that's what I keep telling him. But he's a stubborn fool and won't accept the fact that it's free performance


In this case I would say "free" 😆 because the power bill wouldn't agree. 😉


----------



## ju-rek

alceryes said:


> to the PCB back thermal pad mod my card stays cooler than it was with the 19.3K score.


What mod is this?


----------



## 99belle99

ju-rek said:


> What mod is this?


You stick thermal pads between the back of the PCB and the backplate.

Personally, I've never done it, as I don't think it would do much, but that poster and others who have done it say it lowers temps. But sure, what do I know anyway.


----------



## supergt99

Henrik9979 said:


> Okay, I spoke with MSI and it seems like my motherboard is faulty. I will get a test CPU to be sure it is the motherboard. Anyway.
> 
> You know the whole thing started with me using Hydra wrong. How do I use it right along with MorePowerTool?
> 
> I wanted to use the auto-overclocking function, but unless I misunderstand something it can only be done under the diagnostic tab, which doesn't allow a modified PowerPlay table, therefore resetting the card back to stock.
> 
> I wanted to see what values it would recommend as stable with 1.287 V as max, but it can't be done, it seems.


I use Hydra 1.2D. I set my MPT values and use the overclock GPU function


----------



## xG O O Nx

Well, took #1 in Time Spy Extreme and #1 in Fire Strike Extreme, #2 in Fire Strike. Got to #10 in Time Spy. Working on that one.
Finally broke 25.2K GPU in Time Spy.
It needs to get colder in Texas.


----------



## alceryes

ju-rek said:


> What mod is this?











(link) [Official] AMD Radeon RX 6900 XT Owner's Club — www.overclock.net: "I've been looking into it. The Merc319 limited black is basically the same PCB as a Red Devil Ultimate, but with slightly better input filtering. The MSI gaming trio z has the second best PCB, but the cooler allows rather high junction temps and is rather noisy. AsRock formula OC is the best..."

The issue with the reference 6900 XT design is that they've got this full aluminum backplate (heatsink) without any thermal pads on the back of the card. I put pads on the VRMs and under where the memory is (memory is only on the front of the card, so I'm just pulling off the heat that comes through the PCB). It definitely makes a difference, although many AIBs already put thermal pads on the back of the card.


----------



## xG O O Nx

@weleh what's your secret bro, I see that score you just put up in timespy!😄, is the Radeon LC bios really pushing the card that much further ? the 16 vs 18gbps. 
Are there any other tips or little tricks I'm missing, I feel like I'm pushing MPT to the max.


----------



## kairi_zeroblade

alceryes said:


> (link) [Official] AMD Radeon RX 6900 XT Owner's Club — www.overclock.net
> 
> The issue with the reference 6900 XT design is that they've got this full aluminum backplate (heatsink) without any thermal pads on the back of the card. I put pads on the VRMs and under where the memory is (memory is only on the front of the card, so I'm just pulling off the heat that comes through the PCB). It definitely makes a difference, although many AIBs already put thermal pads on the back of the card.


This mod only works depending on the model. In my case, for both my 6800XT and 6900XT (both AIB cards), when I repasted them (while still on air coolers) this didn't have that grand an effect on memory junction temps; putting the pads behind the VRMs did help a bit, though.


----------



## Henrik9979

supergt99 said:


> I use hydra 1.2D. Set my MPT and use the overclock gpu function


Do you mean the overclocking function down below?
As you can see in my screenshot, I can't press "LET'S GO" because I didn't let Hydra reset my MPT settings.


----------



## alceryes

kairi_zeroblade said:


> This mod only works depending on the model, for my use case for both my 6800XT and 6900XT (both AIB cards), when I repasted them (while still on air coolers) this didn't have that grand effect with memory junction temps, instead putting the pads behind the VRM's did help a bit.


Yeah, the reference card is the prime candidate for this mod because of the complete lack of pads on the back.


----------



## supergt99

Henrik9979 said:


> Do you mean the overclocking funktion down below?
> As you can see on my screenshot I can't press "LET'S GO" because I didn't let hydra reset my MPT settings.
> View attachment 2583933


Yes. Try hydra1.2D. The newer hydra won’t let you run a modified MPT.


----------



## Henrik9979

supergt99 said:


> Yes. Try hydra1.2D. The newer hydra won’t let you run a modified MPT.


Ah okay 😃👍 I will try the older version then. But why remove that option?


----------



## supergt99

Henrik9979 said:


> Ah okay 😃👍 I will try the older version then. But why remove that option?


Not sure, but maybe for the RMP option? Once that became available I was no longer able to modify MPT.


----------



## DudeousDude

Hey, first-time overclocker and AMD card owner here. Wondering if I found a good sweet spot for my card, plus some newbie questions.

Got myself Powercolor RX 6900 XT Red Devil.

I've currently settled on: 










Cyberpunk and Warzone 2.0 have been nice and stable so far.

*Time Spy score: 23 115.*

Clock speed seems to hard-cap at 2545MHz. *What exactly causes this?*

Both higher and lower voltage in increments of 5 drop the score and FPS.
Increasing min/max frequency by 25 lets the frequency jump to 2566MHz, but it fluctuates a lot more and score/FPS drop.
Hotspot temperature spikes at 94C.
Fast timing is disabled since I'm 90% sure it caused instability in Cyberpunk with some earlier settings.

*So are my current settings likely the best I can push with just Adrenalin?*

Now, if I were to increase the power limit with MPT, would it bring performance at the cost of thermals? Or *could MPT open up better optimization while keeping the thermals as they currently are?*

Thanks!
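The trial-and-error described above (step the voltage by 5, re-run the benchmark, keep the best) is just a local search. A minimal sketch of that loop, where `run_bench` is a hypothetical stand-in for a real Time Spy run, not any actual API:

```python
def best_voltage(run_bench, start_mv, step_mv=5, n_steps=3):
    """Score every offset in [-n_steps, +n_steps] * step_mv around start_mv
    and return the (voltage_mv, score) pair with the highest score."""
    candidates = [start_mv + k * step_mv for k in range(-n_steps, n_steps + 1)]
    scored = [(v, run_bench(v)) for v in candidates]
    return max(scored, key=lambda vs: vs[1])
```

If both neighbours score worse than the centre, as reported above, that setting is a local optimum for the current power limit; only changing another variable (power limit via MPT, cooling) moves the peak.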


----------



## 99belle99

First thing: changing the fan curve so the card runs louder but cooler is a good start, in order to maintain higher clock speeds.
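For anyone scripting this rather than clicking through Adrenalin: a custom fan curve is just linear interpolation between (temperature, duty) points. A minimal sketch — the curve points below are made-up assumptions, tune them to your own noise tolerance:

```python
# Linear fan-curve sketch; the (temp C, fan %) points are assumptions,
# not values from any particular card.
CURVE = [(50, 30), (70, 50), (90, 80), (95, 100)]

def fan_duty(temp_c, curve=CURVE):
    """Interpolate fan duty (%) for a given junction temperature."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            # linear blend between the two surrounding points
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]  # pin at max duty above the last point
```

Pulling the upper points toward lower temperatures (more duty sooner) is exactly the "louder but cooler" trade-off.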


----------



## DudeousDude

99belle99 said:


> First thing is change the fan curve to be louder and card cooler is a start in order to maintain higher clock speeds.


I'm optimizing this for everyday gaming, not just for benchmarks. Currently I have all fans set to a level I find tolerable. Games aren't hitting such high temperatures, but other than that, do you think hitting a 94C hotspot in Time Spy is too high?

[EDIT] Just to make sure, I tried higher fan speeds with the same settings and saw no effect on clock speeds. The score even dropped, for whatever reason.


----------



## alceryes

DudeousDude said:


> I'm optimizing this for everyday gaming not just for benchmarks. Currently I have all fans set to a level that I find tolerable. Games aren't hitting such high temperatures, but other than that you think hitting 94C hotspot with Time Spy is too high?
> 
> [EDIT] Just to make sure tried higher fan speeds with the same settings and no effect on clockspeeds. Score even dropped for whatever reason.


Technically, AMD says 95ºC hotspot is okay, but the lower the better.


----------



## blackdragonx

Question for you guys: I have an ASRock Phantom Gaming 6900XT. With the current driver, 22.10.3, I have a total allowed board power of 323.150W. With the previous WHQL driver, 22.5.1, it was sitting at 403W. I never messed with MPT, mostly because I wasn't sure how to use it. Any idea why it would give me more power and then take it away? In Adrenalin, the power slider is moved to +15% in both driver versions. I know the card was able to use that much power, as while playing Metro Exodus with RT on it hit a few peaks of 410+W. I'm using HWiNFO64 to monitor readings. I wish I had taken a screenshot with it open to show the previous power limit.

Windows 10, high performance mode is toggled on. It's water cooled, so heat isn't a problem for now.


----------



## tsamolotoff

Don't move slider or change % range, just increase absolute power value to the one you desire


----------



## blackdragonx

tsamolotoff said:


> Don't move slider or change % range, just increase absolute power value to the one you desire


Where do I find that option?


----------



## jonRock1992

muhasdas said:


> Hey guys. I've been reading this topic for like 6 months now. I've tried some different OC/UV settings for my XFX 6900XT Merc 319 Black. Can you please tell me if I'm doing something wrong, because I'm a huge noob at GPU overclocking/undervolting. Before I got this 6900XT I had a 1080 Ti (EVGA) and mounted a Kraken G12 on it myself. Used it for like 3 years. After that a friend wanted to buy it, but he wanted it with the stock air cooler, so I unmounted the Kraken and reverted it back to stock. We tried and tested it, and it was all cool at the beginning, but I made a mistake (old habits) via MSI Afterburner, so the card died hard  I did everything to fix it, but nothing worked. Anyway, it's my first AMD GPU, and it's really hard to use it correctly. I've tried some OC profiles, high, medium, low, and nothing seems to work properly. Gaming is all cool, but while browsing the internet it sometimes gives me a grey screen with no warning at all, and then I need to reset the PC manually. After reboot, the AMD Wattman crash report comes up and reverts my OC settings. That issue has been happening for the last couple of weeks. I'm guessing it's a driver issue. The other question is: is my GPU's RAM broken or something? People are running most 6900 XTs at 2150MHz with fast timings all the time. I can never pass 2100 properly. Even at 2110, my screen glitches. Am I doing something wrong here? These are the specs of my PC:
> 
> Monitor: Odyssey G7 27"
> Case: Phanteks P500a
> MOBO: Gigabyte x570s Aorus Master
> CPU: 5900x - Pbo2 +200/-28 secondary cores, -13 main cores
> CPU Cooler: Arctic Liquid Freezer II 240 (No RGB, I hate RGB on any parts)
> GPU: XFX 6900xt Merc 319 Black - 2500/2600 MHz - 2090 fast timing RAM - +15% power - 1100 mV - stock BIOS **** *Getting grey screens while browsing lately :/ Need to restart manually. After reboot, Wattman resets my OC settings. I have to figure that issue out. It started happening in the last couple of weeks.*
> RAM: 4x8 Patriot Viper Steelseries 4400 cl19 / [email protected] 1.5v
> PSU: Seasonic 1000w gold
> FANS: 3x Silent Wings 3 140mm front, intake. 1x P500a Original 140mm outtake, back of the case
> SSDs: Crucial 1 TB P5 Plus CT1000P5PSSD8 M.2 PCI-Express 4.0 - Samsung 970 evo 500gb
> HDD: 2tb Seagate
> 
> And this is my last stable OC. At least it passed the Time Spy tests. But as you can see, hotspot temps were almost 110C here. Is it safe to use it like this? I'm only gaming, sometimes doing some renders for work. I'm not a stat whore though, I just try to play that goddamn Warzone better. When I see 2600MHz+ on the cores, the game is smoother. Trust me, I can feel that  Any comment from you guys is much, much appreciated. I will do as you say, thank you so much for your help. (Sorry for my English btw.)
> View attachment 2575970


I started getting the grey screens after updating to the latest AMD display driver from 22.5.1. It only happens while I'm browsing on Edge, which I believe is Chromium based. So I'm probably going to try Firefox out. It's hard to say if it's the driver's fault or the web browser's fault. It's unfortunate that I need to stay on this driver though because I'm playing through Uncharted.


----------



## tsamolotoff

blackdragonx said:


> Where do I find that option?


In MPT, just change base PL value. For example if you want the GPU to consume 430W, you just type 430 in there
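For reference, the relationship between the MPT base power limit and the Adrenalin slider is just a percentage scale — which also lines up with blackdragonx's numbers above (a 281 W base PL with the slider at +15% works out to exactly the 323.15 W reading). A small sketch:

```python
def board_power_limit(base_pl_w, slider_pct):
    """Total board power the driver allows: the MPT base power limit
    scaled by the Adrenalin power slider percentage."""
    return base_pl_w * (1 + slider_pct / 100)
```

So typing 430 into MPT's base PL already gives 430 W with the slider at 0; the slider then only adds headroom on top of that.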









----------



## PJVol

jonRock1992 said:


> which I believe is Chromium based


Very likely it's still buggy. Switching ANGLE backend to d3d9 or OGL might help, as reported by some users.
I myself have been using FF starting from Vega, then Navi10 / Navi21, and have never had any issues related to either the renderer or GPU decode.


----------



## colourcode

Nechana said:


> Hi , i have same card and it behave exactly same. I was lowered MHz in Wattman too but i set low to 2265MHz, dips are not so often now. I think its driver problem and hope for new WHQL drivers.I use 22.5.1 for now because they are most stable.
> I repaste my card and temps are now max 85 ambient and 95 hotspot. I just apply new paste because there are very thic thermal pads (0,5 cm maybe) and some smaller but many.


Late reply, but here we go again. I used to be able to run my card undervolted with lower clocks, but now it crashes no matter what I try. It's not even stable at 2365 @ default vcore...
It's also getting MUCH hotter now, despite it being way, way cooler in the room.

Is it possible to remove the heatsink without ****ing up the thermal pads?
Looks like the core cooler is separate from the other ones?

Edit / Update:

Holy crap, what a difference repasting made. This card was by far the easiest I've ever taken apart as well!
Before: playing Valheim with vsync active, so capped at 100 FPS - 100+ degrees hotspot, fan nearly at 2k RPM.
After: playing Valheim with vsync off, around 150 FPS, barely pushing past 90 degrees hotspot, fan at 1-1.5k RPM.


----------



## alceryes

colourcode said:


> Late reply, but here we go again. I used to be able to run my card undervolted with lower clocks, but now it crashes no matter what I try. It's not even stable at 2365 @ default vcore...
> It's also getting MUCH hotter now, despite it being way, way cooler in the room.
> 
> Is it possible to remove the heatsink without ****ing up the thermal pads?
> Looks like the core cooler is separate from the other ones?
> 
> Edit / Update:
> 
> Holy crap, what a difference repasting made. This card was by far the easiest I've ever taken apart as well!
> Before: playing Valheim with vsync active, so capped at 100 FPS - 100+ degrees hotspot, fan nearly at 2k RPM.
> After: playing Valheim with vsync off, around 150 FPS, barely pushing past 90 degrees hotspot, fan at 1-1.5k RPM.


Successful repaste!


----------



## stoleg

I have a Sapphire RX6900XT Nitro+ SE Gaming Edition OC 11308-03-20G video card. The 3DMark DirectX Raytracing Feature Test, Speed Way, and Port Royal won't launch at all on it, while Time Spy and Fire Strike launch without problems. Operating system: Win 11 22H2, WDDM 1.3, Adrenalin 22.11.1. I can't figure out what the problem is.


----------



## stoleg




----------



## freddy85

My Red Devil shows a 25C-30C higher junction temp at 320W. Is this normal behavior?

If not, I will try a repaste.


----------



## marcoschaap

freddy85 said:


> My red devil shows a 25c-30c higher junction temp at 320w, is this a normal behavior?
> 
> If not, I will try a repaste.


In comparison to stock? I'd say pretty reasonable. What's the max Tj on 320?


----------



## Kodo28

Love it, no need to upgrade anymore. Navi 31 already 😄😅😂


----------



## freddy85

marcoschaap said:


> In comparison to stock? I'd say pretty reasonable. What's the max Tj on 320?


Around 90-95 on stock fan curve.


----------



## nordskov

stoleg said:


> I have a Sapphire RX6900XT Nitro+ SE Gaming Edition OC 11308-03-20G video card. The 3DMark DirectX Raytracing Feature Test, Speed Way, and Port Royal won't launch at all on it, while Time Spy and Fire Strike launch without problems. Operating system: Win 11 22H2, WDDM 1.3, Adrenalin 22.11.1. I can't figure out what the problem is.


used google translate lol.. u need to write in english to let people help u..
I see the problem is your not able to run port royal etc but firestrike and timespy works fine, you could try lowering the clocks of the memory and core and see if that helps as a start, if it does your either power limited to hard by factory, or your card just bad silicon and needs more voltage to rrun proberbly i guess, or last problem, to weak a psu


----------



## st19ol78eg

I have a Sapphire RX6900XT Nitro+ SE Gaming Edition OC 11308-03-20G video card that does not run 3DMark-DirectXRaytracingFeatureTest, 3DMark-SpeedWay, 3DMark-PortRoyal tests at all, while TimesSpy and FireStrike run without problems. Operating system win 11 22 h2, WDDM 1.3, Adrenalin 22.11.1 I can't figure out what the problem is


----------






## stoleg

I have a Sapphire RX6900XT Nitro + SE Gaming Edition OC 11308-03-20G video card, it does not run 3DMark-DirectXRaytracingFeatureTest, 3DMark-SpeedWay, 3DMark-PortRoyal tests at all, while TimesSpy and FireStrike run without problems. Operating system win 11 22h2, WDDM 1.3, Adrenalin 22.11.1 I can't figure out what the problem is


----------



## stoleg

nordskov said:


> used google translate lol.. u need to write in english to let people help u..
> I see the problem is your not able to run port royal etc but firestrike and timespy works fine, you could try lowering the clocks of the memory and core and see if that helps as a start, if it does your either power limited to hard by factory, or your card just bad silicon and needs more voltage to rrun proberbly i guess, or last problem, to weak a psu


I tried reducing the GPU frequency and GPU voltage, and I also tried automatically enabling GPU overclocking and increasing the GPU voltage in the Adrenalin driver itself. Nothing helps. The memory parameters in the Adrenalin driver cannot be changed. I used to have an Aerocool KCAS 1000M power supply, then I bought a new Phanteks Revolt X 1200 Platinum. With the new power supply the situation has not changed; the tests still do not work.


----------



## timd78

Does anyone on linux know how to target the faster ram timings? I can see how to do the rest.


----------



## nordskov

stoleg said:


> I tried to reduce the frequency of the GPU and the voltage of the GPU, I also tried to automatically enable GPU overclocking and increase the GPU voltage in the adrenaline driver itself. Nothing helps. changing the memory parameters in the adrenaline driver cannot be changed. I used to have an Aerocool KCAS 1000M power supply, then I bought a new PHANTEKS REVOLT X 1200 PLATINUM 80 PL. With the new power supply, the situation has not changed, the tests do not work


Did you try resetting everything in the driver to factory defaults, then running stock voltage and just lowering the boost clock? Something doesn't seem right; it should easily complete those benchmarks, to be honest.


----------



## stoleg

nordskov said:


> did u try reset all in the driver to factory defaults? and then to run stock voltage and just lowering the boost clock? something doesnt seem right, it should easy complete those benchmarks to be honest.


Tried it, didn't help. It's the same in games: as soon as I go to the menu and turn on ray tracing, the games freeze right in the menu or at the beginning of gameplay.


----------



## nordskov

stoleg said:


> Tried, didn't help. It's the same in games, as soon as I go to the menu and turn on ray tracing, the games freeze right in the menu or at the beginning of the gameplay


That is really weird... You've already tried different drivers etc.? Could be a bad GPU then? :/ Not sure I have other suggestions. Mine does every benchmark fine, and does it really well, and I don't have driver issues either, so not sure what to do here. Maybe you just had bad luck and got a bad GPU from the beginning?


----------



## stoleg

nordskov said:


> that is really wierd.. you tryed different driver etc already ? could be a bad gpu then ? :/ not sure i have other suggestions, mine does every benchmark fine, and does it really well, dont have driver issues etc either so not sure what to do here, maybe you just got bad luck and got a bad gpu from the beginning ?


I have already reinstalled the operating system to the latest version and tried different drivers, installing them after cleaning with DDU. I bought the video card used, but supposedly with remaining warranty, from a store in our country. The seller refused to refund my money and take the video card back. After that I contacted that store about the warranty, and they told me the warranty card was not theirs but a fake.


----------



## nordskov

stoleg said:


> I have already reinstalled the operating system to the latest one, and tried with different drivers and installed them using the DDU driver cleaning program. I bought a used video card but with a residual guarantee from our country's store. The seller refused to refund my money and take my video card back. After that, I turned to that store for a guarantee, they answered me that this is not their warranty card, but a fake


How can it be fake??


----------



## smokedawg

timd78 said:


> Does anyone on linux know how to target the faster ram timings? I can see how to do the rest.


Maybe this could be of use (though 6000 series GPU support needs to be added):

GitHub - Eliovp/amdmemorytweak: Read and modify memory timings on the fly (github.com)
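On the clock side (as opposed to the timings amdmemorytweak handles), the stock amdgpu driver on Linux exposes OverDrive through sysfs: you write command strings like `m <p-state> <MHz>` and then `c` (commit) to `pp_od_clk_voltage`. A minimal sketch — the card path and p-state index are assumptions, check your own card, and this needs the OverDrive bit set in `amdgpu.ppfeaturemask`:

```python
# Sketch of the amdgpu OverDrive sysfs interface (clocks only; actual
# memory timing tweaks need something like amdmemorytweak).
def od_commands(mem_mhz, state=1):
    """Command strings for pp_od_clk_voltage: set memory p-state, then commit."""
    return [f"m {state} {mem_mhz}", "c"]

def apply_mem_clock(mem_mhz, path="/sys/class/drm/card0/device/pp_od_clk_voltage"):
    # Path and p-state index are assumptions for illustration.
    for cmd in od_commands(mem_mhz):
        with open(path, "w") as f:  # each command is a separate write
            f.write(cmd)
```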


----------



## 99belle99

stoleg said:


> I have already reinstalled the operating system to the latest one, and tried with different drivers and installed them using the DDU driver cleaning program. I bought a used video card but with a residual guarantee from our country's store. The seller refused to refund my money and take my video card back. After that, I turned to that store for a guarantee, they answered me that this is not their warranty card, but a fake


So you bought a second-hand card. That's the problem. Someone obviously sold it because it had problems. That's the main issue with buying second hand.

There are honest people who would never sell a faulty card, but unfortunately not everyone is like that.


----------



## jonRock1992

There's no more grey screens so far when browsing with Chromium-based web browsers using the latest driver.


----------



## stoleg

nordskov said:


> How Can it be fake ??


The warranty card turned out to be invalid. I wrote to that store and showed them a photo of the warranty card, and they told me it was not their warranty card; it did not look like their real warranty cards at all. Mine has a stamp, while their genuine ones have a seal. The main thing is that I do not know how the video card worked initially; now it does not work correctly with ray tracing. FurMark works, but the temperature in it is too high, 85 degrees.


----------



## Kodo28

stoleg said:


> the warranty card turned out to be invalid, I wrote to that store and showed them a photo of the warranty card and they told me that it was not their warranty card, it did not look like their real warranty cards at all, there is a stamp in my warranty card and a seal in their present one. The main thing is that I do not know how the video card worked initially, now it does not work correctly with ray tracing. Furmark works, but the temperature in it is too high 85 degrees.


Sounds like you got scammed by that seller, telling you the card was still under warranty with a fake warranty card when it was not, and maybe it's even defective.
Other than the "warranty card", did you get any invoice/receipt for the card?
Try checking with Sapphire by the serial number; they may be able to confirm how old the card is.
Can you post a picture of the card and GPU-Z?


----------



## stoleg

Kodo28 said:


> Sounds like you got scammed by that seller telling you the card was still under warranty with fake warranty card when it was not and maybe even defective.
> Other than the "warranty card" did you got any invoice/receipt of the card.
> Try to check with Sapphire to confirm by the serial number if they can tell you how old the card is.
> Can you post picture of the card and gpu-z?


----------



## stoleg

Upon receipt at the post office there are payment receipts and the seller's personal details.


----------



## tsamolotoff

stoleg said:


> I have already reinstalled the operating system to the latest one, and tried with different drivers


Is 3DMark up to date and genuine (i.e. non-pirated)? If your GPU can pass Time Spy, then it should pass any DXR test, as they are less demanding than anything else, and the RT modules are integrated within the TMUs (so they probably can't be defective without affecting other aspects of performance). Just to check, download Quake II RTX from Steam (it's free) and test if it works. Also try any other RT game, like Control (which was given away for free on EGS a while ago).


----------



## stoleg

tsamolotoff said:


> Is 3dmark up to date and geniune (aka non-pirated)? If your GPU can pass Timespy, then it should pass any DXR tests as they are less demanding than anything and RT modules are integrated within TMUs (so they probably can't be defective without affecting any other aspect of performance). Just to check it download Quake II RTX from steam (it's free) and test if it works. Also, any other RTX game like Control (which was gifted for free on EGS a while ago)


I downloaded the 3DMark beta version with the latest updates. As for those two games, I'll try them.


----------



## Kodo28

stoleg said:


> Pri poluchenii na pochte yest' cheki ob oplate i dannyye cheloveka prodavtsa
> Upon receipt at the post office there are receipts for payment and data of the person of the seller


Up to you whether to go forward with this. Not sure how the law handles sales fraud in your country.
For the card issue, try testing it in another system. At least then you'll know whether the issue comes from the card or not.
If you get the same issue in another system, the card is defective for sure.


----------



## stoleg

Kodo28 said:


> Up to you to go forward with this. Not sure how law things are done in your Country for sales abuses.
> For the card issue, try to test it in another system. At least you will be cleared to know if issue comes from card or not.
> If you get same issue in another system than card is for sure defective.


I installed another MSI RTX 2070 video card in my computer and everything worked well on it, both games and tests


----------



## Kodo28

stoleg said:


> I installed another MSI RTX 2070 video card in my computer and everything worked well on it, both games and tests


I would test the 6900 XT in another system just in case. If the issue also happens on another system, then the card is faulty for sure.


----------



## marcoschaap

I have a question: what would be a non-destructive power target for a 2x8-pin PCB? I have a Sapphire Nitro+ Radeon RX 6900 XT SE on a Corsair RMx 850 PSU. Currently running @400w, but I'm looking for a bit more (GPU limited in VR). I know by default 8-pins are rated for 150w each, and they both have a separate line to the PSU. Could running 475 destroy the GPU?


----------



## kairi_zeroblade

marcoschaap said:


> I have a question, what would be a non destructive power target for a 2x8-pin PCB? I have a Sapphire Nitro+ Radeon RX 6900 XT SE on a Corsair rMX 850 PSU. Currently running @400w but I'm looking for a bit more (GPU limited in VR). I know by default 8-pins are rated for 150w each, they both have a seperate line to the PSU. Could running 475 destroy the GPU?


The Sapphire has fuses. I don't know the max limit of those fuses, but those two 8-pins can draw up to 200w each safely, as long as your cables are 16 AWG.
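To put numbers on it: per spec each 8-pin is nominally rated 150 W and the slot 75 W, while the 200 W-per-connector figure quoted above for good 16 AWG cabling gives a practical ceiling. A quick sketch of both budgets:

```python
PCIE_SLOT_W = 75          # PCIe slot allowance per spec
EIGHT_PIN_SPEC_W = 150    # nominal 8-pin aux connector rating
EIGHT_PIN_16AWG_W = 200   # practical per-connector figure quoted above

def board_budget(n_8pin, per_connector_w):
    """Slot power plus the sum of the auxiliary connectors."""
    return PCIE_SLOT_W + n_8pin * per_connector_w
```

By those figures, 475 W on a 2x8-pin card sits right at the 2 × 200 + 75 practical line: reachable on good cables, but with no margin left in the connectors.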


----------



## stoleg

tsamolotoff said:


> Is 3dmark up to date and geniune (aka non-pirated)? If your GPU can pass Timespy, then it should pass any DXR tests as they are less demanding than anything and RT modules are integrated within TMUs (so they probably can't be defective without affecting any other aspect of performance). Just to check it download Quake II RTX from steam (it's free) and test if it works. Also, any other RTX game like Control (which was gifted for free on EGS a while ago)


Quake II RTX also freezes


----------



## stoleg

stoleg said:


> Quake II RTX also freezes


Control also freezes


----------



## tsamolotoff

If it is the same for clean windows install, then your GPU is truly broken, my condolences. Report the seller to KGB and Lukashenko


----------



## damric

Hello

I finally got around to getting a full cover water block on my reference 6900XT. The benchmark results so far are very good. Before I was hot spot 95C limited even with fans at 100% at stock power limit. Now with 350w MPT I'm barely at 50C hot spot. 

My question is, can I increase voltage past 1.175 using MPT? I see the option is there, but does it work on reference cards?


----------



## tsamolotoff

damric said:


> Hello
> 
> I finally got around to getting a full cover water block on my reference 6900XT. The benchmark results so far are very good. Before I was hot spot 95C limited even with fans at 100% at stock power limit. Now with 350w MPT I'm barely at 50C hot spot.
> 
> My question is, can I increase voltage passed 1.175 using MPT? I see the option is there, but does it work on reference cards?


Yes, you can via vdepmin.


----------



## lestatdk

damric said:


> Hello
> 
> I finally got around to getting a full cover water block on my reference 6900XT. The benchmark results so far are very good. Before I was hot spot 95C limited even with fans at 100% at stock power limit. Now with 350w MPT I'm barely at 50C hot spot.
> 
> My question is, can I increase voltage passed 1.175 using MPT? I see the option is there, but does it work on reference cards?


Temp Dependent Vmin is the solution for now. This is what I'm using; do your own testing and see how it works. My card is watercooled.


----------



## damric

lestatdk said:


> Temp Dependent Vmin is the solution for now . This is what I'm using, do your own testing and see how it works. My card is watercooled
> 
> View attachment 2586412


Ok I changed it to 1200. Software monitoring still shows 1175. Is that right?


----------



## lestatdk

Remember to set it in Feature Control as well


----------



## deadfelllow

marcoschaap said:


> I have a question, what would be a non destructive power target for a 2x8-pin PCB? I have a Sapphire Nitro+ Radeon RX 6900 XT SE on a Corsair rMX 850 PSU. Currently running @400w but I'm looking for a bit more (GPU limited in VR). I know by default 8-pins are rated for 150w each, they both have a seperate line to the PSU. Could running 475 destroy the GPU?


I have a 6900XT Extreme edition and my GPU can draw up to 650W on 2x8-pin + 1x6-pin = 150+150+75 = 375W rated. So I'm 275W over the rated specs and nothing happens.


----------



## 6u4rdi4n

Has anyone tested the latest drivers? I've been using the ones from May, but in the last couple of weeks Windows Update has suddenly been installing a new driver for me, which makes Adrenalin unable to launch (it says it's not compatible with the installed driver). The last time Windows updated the driver I went back to my usual one, but it suddenly installed a newer one again (even though it's set not to install drivers), so I went with the latest optional driver to get Adrenalin working.

Now my 6900 XT MERC 319 won't draw any more than about 240W max, according to the readings in Adrenalin. Meanwhile, I was able to draw the full 375W without MPT tweaks on the earlier versions, and even north of 400W with MPT tweaks.


----------



## wetherfort

No issues on my Phantom D from ASRock. Been rather surprising how stable it is coming from a 3070.


----------



## 6u4rdi4n

Stability definitely hasn't been an issue for me. I've been mainly using Nvidia graphics cards in my main computer since '07 or something, and the 6900 XT has been the most stable card I've had.

I managed to push the power draw north of 400W again, after just trying to apply MPT settings a number of times. So it seems like maybe I fixed it? lol. It definitely draws a bit less power, and it doesn't seem like it's hitting the same upper frequency as before, with a bit lower avg frequency in Time Spy. Same MPT and Adrenalin settings as with the previous driver I used.

I did manage a higher score after getting the MPT tweaks working (just a power limit increase for me, nothing else), so I guess we are good. 24 288 graphics score in Time Spy for those interested.


----------



## damric

lestatdk said:


> Remember to set it in Feature Control as well
> 
> 
> View attachment 2586435


Thanks. That did the trick. I had time to run a couple of Time Spy tests this morning with 1.20v and 380w. It didn't really help stabilize any higher clocks, but it definitely ran a bit hotter, near 60C on the hot spot. I may try 1.25v. My VRAM also hates being over 2100MHz; performance drops beyond that.


----------



## wetherfort

6u4rdi4n said:


> Stability definitely hasn't been an issue for me. I've been mainly using Nvidia graphics card in my main computer since 07 or something, and the 6900 XT has been the most stable card I've had.
> 
> I managed to push the power draw north of 400W again, after just trying to apply MPT settings a number of times. So seems like maybe I fixed it? lol. It definitely draws a bit less power, and it doesn't seem like it's hitting the same upper frequency as before with a a bit lower avg frequency in Time Spy. Same MPT and Adrenalin settings as with the previous driver I used.
> 
> I did manage a higher score after getting the MPT tweaks working (just power limit increase for me, nothing else), so I guess we are good. 24 288 graphics score in Time Spy for those interested.


Nice graphics score! What clocks?
This is my 3D Mark result.









I scored 19 771 in Time Spy


AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11




www.3dmark.com


----------



## damric

Luke's 3DMark - Time Spy score: 19087 marks with a Radeon RX 6900 XT


The Radeon RX 6900 XT @ 2651/2088 MHz in the 3DMark - Time Spy benchmark. Find out more at HWBOT.




hwbot.org


----------



## 6u4rdi4n

wetherfort said:


> Nice graphics score! What clocks?
> This is my 3D Mark result.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 19 771 in Time Spy
> 
> 
> AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 11
> 
> 
> 
> 
> www.3dmark.com


2450 min 2800 max in Adrenalin. You can see the real frequency in the link ^^ Also, I didn't shut down any background applications or anything. Just booted up and ran TS.









I scored 24 245 in Time Spy


Intel Core i9-13900K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





This is my previous best with the old driver from May:









I scored 23 986 in Time Spy


Intel Core i9-13900K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## GreedyMuffin

Hey guys!

Just got my hands on a Gigabutt Waterforce Extreme for sale at a ridiculous price new (around 600 USD, at least here in Norway), so it's arriving next week. A 4090 is 2500 USD here, to give an example. 

I reckon the chip is the nice XTXH, due to the max boost of 2525MHz. I started reading about RDNA2 yesterday, so I'm still learning. Will I be able to flash an RX 6950 ROM on my card to gain the additional memory speed/voltage, or is the VRAM binned the same way the GPU die is binned? I reckon the temps won't be an issue whatsoever if everything is done correctly from the factory.

The card will be in a loop with a lapped 12700K, 2x D5, XTX360 and a Monsta 480, so that should keep it cool.

Is it recommended to change the paste and/or thermal pads? I've read that the last-gen Nvidia cards have been a total failure in that regard. Don't know if this applies here?

Many thanks, Christian.


----------



## 99belle99

GreedyMuffin said:


> Hey guys!
> 
> Just got my hands on a Gigabutt Waterforce Extreme for sale at a ridiculous price new (around 600 USD, at least here in Norway), so it's arriving next week. A 4090 is 2500 USD here, to give an example.
> 
> I reckon the chip is the nice XTXH, due to the max boost of 2525MHz. I started reading about RDNA2 yesterday, so I'm still learning. Will I be able to flash an RX 6950 ROM on my card to gain the additional memory speed/voltage, or is the VRAM binned the same way the GPU die is binned? I reckon the temps won't be an issue whatsoever if everything is done correctly from the factory.
> 
> The card will be in a loop with a lapped 12700K, 2x D5, XTX360 and a Monsta 480, so that should keep it cool.
> 
> Is it recommended to change the paste and/or thermal pads? I've read that the last-gen Nvidia cards have been a total failure in that regard. Don't know if this applies here?
> 
> Many thanks, Christian.


If it is an XTXH chip and has dual BIOS, then yes, you can flash it with the watercooled BIOS to gain the extra memory speed, not a 6950 BIOS.

A 6950 is a different card, while the watercooled model is an XTXH with faster memory. That is why you flash an XTXH with the watercooled BIOS. And the reason I said to flash only if it has dual BIOS is that if the flash goes wrong, you can recover without resorting to an external flasher.
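The backup step implied by the dual-BIOS advice is worth making explicit. A rough sketch of the usual amdvbflash workflow as a command plan; the exact flags (`-i` to list adapters, `-s` to save, `-p` to program) reflect common amdvbflash usage and should be treated as assumptions — verify against your own copy's help output before running anything:

```python
# Sketch only: builds the amdvbflash command sequence, does not execute it.
# Flag names are assumptions from common usage; check `amdvbflash --help`.

def flash_plan(adapter: int, new_rom: str, backup: str = "backup.rom"):
    """Build the safe flashing sequence: list adapters, back up the
    current BIOS, then program the new ROM to the chosen adapter."""
    return [
        ["amdvbflash", "-i"],                         # list adapters, confirm index
        ["amdvbflash", "-s", str(adapter), backup],   # save current BIOS first
        ["amdvbflash", "-p", str(adapter), new_rom],  # program the new BIOS
    ]

if __name__ == "__main__":
    for cmd in flash_plan(0, "6900xt_lc.rom"):
        print(" ".join(cmd))
```

The point of structuring it as a plan is that the backup command always comes before the program command; with a dual-BIOS card plus a saved ROM you have two ways back.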


----------



## GreedyMuffin

99belle99 said:


> If it is an XTXH chip and has dual BIOS, then yes, you can flash it with the watercooled BIOS to gain the extra memory speed, not a 6950 BIOS.
> 
> A 6950 is a different card, while the watercooled model is an XTXH with faster memory. That is why you flash an XTXH with the watercooled BIOS. And the reason I said to flash only if it has dual BIOS is that if the flash goes wrong, you can recover without resorting to an external flasher.


Awesome, I read some more and now I understand. Thank you! I'll try the 6900 XT LC BIOS when my card arrives. I have a 1070 in the system in case things go south. Bricked a couple of 980s back in the day, so I'm familiar with recovering^^.

Thank you!


----------



## TaunyTiger

lestatdk said:


> Remember to set it in Feature Control as well
> 
> 
> View attachment 2586435


OMG, thank you, thank you, thank you. 
Been running 1.3v for a long time now while all the apps only showed a max of 1.2v. Now it shows the right values! Thanks man!


----------



## TaunyTiger

Damn, played a round of BF2042 after activating temp dependent vmin. Got some more heat in my loop: the GPU went from 54C to 63C, water temp to 34-35C, and the GPU pulled 430W, which is not that usual in a game. PPT is set to 450W. But I guess I got a good performance increase out of it.


----------



## wetherfort

My hot spot reaches 95C max with an aggressive fan curve and the power limit maxed, whilst limited to 2500MHz at 1080mv. I have reapplied the thermal paste and seen zero difference.

Anyone got solutions other than waterblock?


----------



## tsamolotoff

I've been using Bykski fullcover waterblock for 6900xt PG, seems to be fine (+5C chip / +10-15C hotspot under long load relative to water temperature)


----------



## damric

wetherfort said:


> My hot spot reaches 95c max with an aggro fan curve and power limit max whilst limited to 2500mhz at 1080mv, I have reapplied thermal paste and seen zero difference.
> 
> Anyone got solutions other than waterblock?


Nope. Mine was like that until water.


----------



## lestatdk

damric said:


> Nope. Mine was like that until water.


My card was shutting down due to hitting the 118C thermal limit. 

Under water it can barely go above 60C..


----------



## 8800GT

Are the YouTube stutters in Chrome still there for anyone else? It was fixed for one driver version, then broke again. The known issues say it can happen on VRR monitors with high refresh rates, but even if I turn off FreeSync and set it to 60Hz, it's still dropping frames like mad. It's getting pretty annoying that this has gone on so long lol.


----------



## 99belle99

8800GT said:


> Are the youtube stutters in chrome there for anyone else still? It was fixed for 1 driver version, then broke again. Their known issues say it can happen on vrr monitors with high refresh rate, but even if I turn off freesync and set it to 60hz, it's still dropping frames like mad. It's getting pretty annoying that this has gone on so long lol.


I know this doesn't help you but I have a reference 6900 XT since near launch and I never experienced that issue.


----------



## 8800GT

99belle99 said:


> I know this doesn't help you but I have a reference 6900 XT since near launch and I never experienced that issue.


Seems to be random from what I've seen. Maybe a fresh windows install is in order.


----------



## ericc64

I set a new Speed Way record for the 6900 XT today..... 4698  The card is an ASRock 6900XT Formula OC + Bykski block + 2x Mora 420









I scored 4 698 in Speed Way


Intel Core i9-13900K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## jonRock1992

GreedyMuffin said:


> Awesome, read some more and now I understand. Thank you! I'll try the 6900X LC bios when my card arrives. I have a 1070 in the system in case things go south. Bricked a couple of 980s back in the day, so I'm known with recovering^^.
> 
> Thank you!


The 6900 XT LC vbios is great. I have my Red Devil Ultimate flashed with it. It does 2362MHz at Fast Timings Level 2 with that vbios. Just beware: if your GPU does not have a USB-C output, your system may not POST with the LC vbios. It just depends on your motherboard.


----------



## 8800GT

ericc64 said:


> I set a new Speed Way record for the 6900 XT today..... 4698  The card is an ASRock 6900XT Formula OC + Bykski block + 2x Mora 420
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 4 698 in Speed Way
> 
> 
> Intel Core i9-13900K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2587354


Damn. What voltage do you have it at? I'm on a reference card and I can't get above 2815 on 1.35v. I think I have the worst-binned card ever.


----------



## jonRock1992

ericc64 said:


> I set today new Speed Way record for 6900XT..... 4698  Card ist Asrock 6900XT Formula OC + Bykski block + 2x Mora 420
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 4 698 in Speed Way
> 
> 
> Intel Core i9-13900K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2587354


That build is beautiful


----------



## ericc64

8800GT said:


> Damn. What voltage you have it at? I'm on reference card and I can't get above 2815 on 1.35v. I think I have the worst binned card ever.


Circa 1.34V... the reference card is not good for that; you need an XTXH chip for those high clocks.


----------



## damric

ericc64 said:


> Circa 1.34V... the reference card is not good for that; you need an XTXH chip for those high clocks.


Love your rig. What did you do to that VRAM to get it so high? I do notice that Speed Way is more lenient towards unstable high clocks than Time Spy is when I run it.


----------



## ericc64

damric said:


> Love your rig. What did you do to that VRAM to get it so high? I do notice that Speed Way is more lenient to unstable high clocks than when I run Time Spy.


LC bios, 1.41V, fast timings level 2. Yes, it's sensitive to unstable clocks... 2416 was the maximum stable, 2418 was already unstable.


----------



## Acegr

Freaking Time Spy will crash each time I go above 2780 core.. In Fire Strike Extreme, Superposition or Heaven I can reach even 2900..


----------



## lucian2pinro

Hello guys !
These are my 24/7 settings, 100% stable.
[email protected] enable / 2*16gb Corsair [email protected]
Sapphire 6900XT Nitro+ SE watercooled by this


https://www.aliexpress.com/item/1005003075358472.html?pdp_npi=2%40dis%21RON%21RON%20749.65%21RON%20614.71%21%21%21%21%21%402101f6b516706308668202135ee86e%2112000029476375301%21btf&_t=pvid:57ef567b-b0a2-4d47-9aab-dd008120e01a&afTraceInfo=1005003075358472__pc__pcBridgePPC__xxxxxx__1670630867&spm=a2g0o.ppclist.product.mainProduct


----------



## nordskov

lucian2pinro said:


> Hello guys !
> This is my 24/7 settings 100% stable.
> [email protected] enable / 2*16gb Corsair [email protected]
> Sapphire 6900XT Nitro+ SE watercooled by this
> 
> 
> https://www.aliexpress.com/item/1005003075358472.html?pdp_npi=2%40dis%21RON%21RON%20749.65%21RON%20614.71%21%21%21%21%21%402101f6b516706308668202135ee86e%2112000029476375301%21btf&_t=pvid:57ef567b-b0a2-4d47-9aab-dd008120e01a&afTraceInfo=1005003075358472__pc__pcBridgePPC__xxxxxx__1670630867&spm=a2g0o.ppclist.product.mainProduct


Not a bad build, but you can up the scores a lot more for daily use. This is my 5800X / 6900 XT build.

Settings for PBO are the following: 150 boost override, CPU offset voltage negative 0.0625. Curve Optimizer: 2 best cores -21, 2 second-best -27, rest -30. PPT/TDC/EDC settings 139/160/200.

RAM is G.Skill Trident Z Neo 4x16GB 3600MHz CL16, tweaked to CL14-14-12-12-22 1T + subtimings.

GPU running 1.287v and 480w +15% PL. Boost for Time Spy is 2872 and RAM 2132 fast timings. For games I increase boost to 2947, giving me an in-game clock of 2895-2920.


----------



## lucian2pinro

nordskov said:


> Not a bad build, but you can up the scores a lot more for daily use. This is my 5800X / 6900 XT build.
> 
> Settings for PBO are the following: 150 boost override, CPU offset voltage negative 0.0625. Curve Optimizer: 2 best cores -21, 2 second-best -27, rest -30. PPT/TDC/EDC settings 139/160/200.
> 
> RAM is G.Skill Trident Z Neo 4x16GB 3600MHz CL16, tweaked to CL14-14-12-12-22 1T + subtimings.
> 
> GPU running 1.287v and 480w +15% PL. Boost for Time Spy is 2872 and RAM 2132 fast timings. For games I increase boost to 2947, giving me an in-game clock of 2895-2920.
> 
> View attachment 2587874


Can you please give me some info about your MorePowerTool settings? I presume you use it. Mine are: power limit 350, TDC limits 370, SOC 55. Nothing else changed.
What about the hotspot temp at 480W?!
With the settings mentioned above I can set 2830 GPU / 2080 mem (instant reading in the driver metrics), 100% stable in Warzone 2 at ~360W.
Thank you.


----------



## nordskov

lucian2pinro said:


> Can you give me pls some info about MorePowerTool settings. I presume you use it .Mine are power limit 350 tdc limits 370 soc 55 .Nothing else changed .
> What about hotspot temp at 480W ?!
> With the settings mentioned above i can set a 2830 gpu/ 2080 mem( instant reading catalist metrics ) 100% stable warzone 2 with 360W~.
> Thank you.


Of course. I'm running a Sapphire Toxic Extreme 6900 XT with the stock 360 AIO, repasted though, as the stock thermal paste was ****. Repasted with Kryonaut Extreme liquid metal, which dropped temps about 25-30C on the hotspot; before, even at stock with only the boost raised from 2606MHz to 2700, I'd easily hit a 90C hotspot and throttling. In MPT I bumped voltage to 1.287 (running that for 1.5 years now).
480w limit, +15% more in the driver, so hitting about 536w peaks in Time Spy. My MPT settings: temp dependent vmin on; in the power and voltage settings I'm at a 480w power limit, TDC limits 450/75,
Vmin low/high (mv) 1287/1287 GFX and SOC 1250/1250; driver set to 2132MHz fast-timings RAM, boost set to 2872, low boost 2426MHz. When gaming, max boost set to 2937MHz (gives me 2900-2910MHz stable in games), and with 2872 I'm hitting 25000+ in Time Spy on daily settings. Max temps in Time Spy at 536w peaks are around 66C, hotspot 82C.


----------



## freddy85

I have the Aorus RX 6900 XT Xtreme WaterForce WB.

So that I don't break anything: what is a safe memory voltage? I increased it to 1400mv and the memory is now stable at 2200MHz and maybe more; before the voltage increase it was only stable at 2175MHz.


----------



## nordskov

freddy85 said:


> I have the Aorus RX 6900 XT Xtreme WaterForce WB.
> 
> So that I don't break anything: what is a safe memory voltage? I increased it to 1400mv and the memory is now stable at 2200MHz and maybe more; before the voltage increase it was only stable at 2175MHz.
> View attachment 2588237
> View attachment 2588237


Did you test lower frequencies to see that you're not actually losing performance? In most cases that's what happens above 2130: the tests still finish at higher clocks, but performance goes down due to error correction.
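Since GDDR6 error correction lets a run finish at a memory clock that actually scores lower, the sensible check is to sweep the clock and keep the score-maximising point rather than trusting "it didn't crash". A minimal sketch of that idea; `run_benchmark` is a hypothetical stand-in for launching Time Spy (or any benchmark) at a given memory clock and reading back the score:

```python
# Pick the memory clock by measured score, not by "the run completed".
# run_benchmark is a placeholder callable: mem_clock_mhz -> score.

def best_memory_clock(clocks, run_benchmark):
    """Return (clock, score) for the clock with the highest measured score."""
    results = {mhz: run_benchmark(mhz) for mhz in clocks}
    best = max(results, key=results.get)
    return best, results[best]
```

With a score curve that peaks and then degrades past the error-correction threshold, the sweep lands on the fastest *productive* clock even though higher settings still "pass".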


----------



## freddy85

nordskov said:


> Did you test lower frequencies to see that you're not actually losing performance? In most cases that's what happens above 2130: the tests still finish at higher clocks, but performance goes down due to error correction.


Yes, I did.
Now I increased the voltage to 1450 on MVDD, VDDCI is still 900, and I'm able to get to 2225 before error correction and lower performance kick in. So the performance is scaling well up to 2225MHz with fast timings.

But I'm not sure how far I can go. Is 1500 OK? And what about the VDDCI?


----------



## jonRock1992

How's the 22.11.2 driver working for everybody?


----------



## lestatdk

So far it's the first driver since 22.5.1 that does not give me issues in VR. So that's nice


----------



## Shoggoth

Can someone tell me where the power limit, core frequency and memory frequency sliders stop on the better 6900 XT(XH) models?

I've got a 6800 XT Red Devil running at 2800 core / 2150 memory on water, but I'm toying with the idea of throwing in a Gigabyte 6900 Ultimate Xtreme WaterForce just for fun.


----------



## freddy85

Shoggoth said:


> Can someone tell me where the power limit, core frequency and memory frequency sliders stop on the better 6900 XT(XH) models?
> 
> I've got a 6800 XT Red Devil running at 2800 core/2150 memory on water, but I'm toying with the idea of hanking in a Gigabyte 6900 Ultimate Xtreme WaterForce just for fun.


It's completely unlocked. Without too much tinkering I'm getting about a 1200 higher score in Time Spy Extreme compared to my previous Red Devil 6900 XT (standard edition, on air).


----------



## Shoggoth

freddy85 said:


> It's completely unlocked, without too much tinkering I'm getting about 1200 higher score in time spy extreme compared to my previous red devil 6900 xt(standard edition on air).


Really, it's completely unlocked out of the box? Well, that makes it quite a bit more interesting....

What kind of clocks and temps are you seeing?

Edit: and in what kind of loop, if I can ask?


----------



## freddy85

Shoggoth said:


> Really, it's completely unlocked out of the box? Well, that makes it quite a bit more interesting....
> 
> What kind of clocks and temps are you seeing?
> 
> Edit: and in what kind of loop, if I can ask?


Yep, it is. I can change any voltages and such in MPT and it's not limiting anything like my non-Ultimate Red Devil did.
I'm running two 360 rads and an EK pump.


----------



## Shoggoth

Ah OK, so unlocked when using MPT, but otherwise with the same old 115/2800 limit?


----------



## freddy85

Shoggoth said:


> Ah OK, so unlocked when using MPT, but otherwise with the same old 115/2800 limit?


5000 core and 3000 memory.

Edit, regarding the block: it's a 13-15C delta at 370W. The water is 30C and the core between 43-45C. At 200W it's 9C.
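Because the core-to-water delta scales roughly with power, blocks are easier to compare via their effective thermal resistance (delta divided by power, in C/W) than by raw temperatures. A quick sketch using the numbers quoted above (44C core / 30C water at 370W, 39C core at 200W, picked from the stated ranges):

```python
# Effective thermal resistance of a water block: (core - water) / power.
# Lower C/W means a better mount/block; comparing raw temps across
# different power draws is misleading.

def thermal_resistance(core_c: float, water_c: float, power_w: float) -> float:
    """Core-to-water delta per watt, in C/W."""
    return (core_c - water_c) / power_w

# ~0.038 C/W at 370W (44C core, 30C water) vs 0.045 C/W at 200W (39C core):
# the per-watt figure stays in the same ballpark even though the deltas differ.
```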


----------



## Kodo28

RX 7900 XT Overclocked To 3.7 GHz Front End Clock and 3.5 GHz Shader Clock, pretty nice  










AMD Radeon RX 7900 XT 'Navi 31' GPU Overclocked To 3.7 GHz Front End Clock & 3.5 GHz Shader Clock (wccftech.com)


----------



## Veii

XTXH is locked at 1200mV, 2719MHz boost clock, 15+ powerlimit slider
KXTX is voltage unlocked over 1.2v


----------



## freddy85

nordskov said:


> did u test lower frequensies to see your not actually loseing performance? most cases thats what happens above 2130, you can finish the tests higher etc.. but performance goes down above


Did some further testing and the performance did not improve after all. When I raised the voltage to 1400 i got a slight performance increase going from my initial 2175 mhz to 2200 mhz, before i got major error correction doing this. But it seems like lowering the voltage back to stock and the 2175 clock is actually faster. So it was a placebo effect.


----------



## Shoggoth

freddy85 said:


> 5000 core and 3000 memory
> 
> Edit, regarding the block. Its 13-15c delta at 370w. The water is 30c and core between 43-45c. At 200w its 9c.


Thanks a lot mate, appreciate the info! I'm still vacillating between buying one of these, a 7900 XTX, going all-out moron and buying a 4090, or simply doing ****-all and waiting until the next generation arrives.


----------



## nordskov

Kodo28 said:


> RX 7900 XT Overclocked To 3.7 GHz Front End Clock and 3.5 GHz Shader Clock, pretty nice 
> 
> View attachment 2588358
> 
> 
> AMD Radeon RX 7900 XT 'Navi 31' GPU Overclocked To 3.7 GHz Front End Clock & 3.5 GHz Shader Clock (wccftech.com)

Holy ****... is that everyday usage, it can run like that???

What's the Time Spy and Fire Strike etc.??


----------



## Blameless

nordskov said:


> whats the timespy and firestrike etc??


They won't run at that frequency. _So far_ ~3.2GHz looks like the peak usable clock. Could change with more cooling and more power.


----------



## GreedyMuffin

What is the supposed max recommended voltage/temps/power draw for this architecture, as long as the card can handle it? I've got a Waterforce and I saw over 500W in Time Spy with some added voltage.


----------



## nordskov

GreedyMuffin said:


> What is the supposed max recommended voltage/temps/power draw for this architecture, as long as the card can handle it? I've got a Waterforce and I saw over 500W in Time Spy with some added voltage.


If by this architecture you mean the 6000 series: my 6900 XT has been running 1.287V daily for more than a year, with a power limit of 500W +15% in the driver, and I get peaks of 552W when running Time Spy.
Boost clocks in games are around 2900-2920MHz, and Time Spy is everyday-stable at 25500-25600; Fire Strike 72000+ (best 72900), Fire Strike Ultra 17500, Port Royal 12200, Speed Way 4400.


----------



## GreedyMuffin

nordskov said:


> If by this architecture you mean the 6000 series: my 6900 XT has been running 1.287V daily for more than a year, with a power limit of 500W +15% in the driver, and I get peaks of 552W when running Time Spy.
> Boost clocks in games are around 2900-2920MHz, and Time Spy is everyday-stable at 25500-25600; Fire Strike 72000+ (best 72900), Fire Strike Ultra 17500, Port Royal 12200, Speed Way 4400.


Awesome, thank you! What card do you have? I'll try up to 1.3V. I'm afraid my hotspot will be closer to 90C under water; is that OK? Got an XTXH chip on my Waterforce Ultimate Extreme.


----------



## nordskov

GreedyMuffin said:


> Awesome, thank you! What card do you have? I'll try up to 1,3V. I'm afraid my hotspot will be closer to 90'C under water, is that OK? Got a XTXH chip on my waterforce ultimate extreme.


Got a Sapphire Toxic Extreme with the stock 360 AIO, just repasted. 1.3 should be fine as long as temps are good.


----------



## GreedyMuffin

nordskov said:


> Got a Sapphire Toxic Extreme with the stock 360 AIO, just repasted. 1.3 should be fine as long as temps are good.


Running quite the loop, so I'll be sub-100C hotspot and sub-60C GPU anyway at 1.3 voltage. Monsta 480 p/p + XTX 360 p/p and 2x D5s.


----------



## nordskov

GreedyMuffin said:


> Running quite the loop, so I'll be sub 100'C hotspot, and sub 60'C GPU anyways with 1,3 voltage. Monsta 480 p/p + xtx 360 p/p and 2x D5s.


100 is too high. The normal throttle temp is a 90C hotspot. My Toxic Extreme runs 540W atm and 1.287v, 25600 in Time Spy, with a max of 64C core / 82C hotspot.


----------



## Acegr

nordskov said:


> 100 is too high. The normal throttle temp is a 90C hotspot. My Toxic Extreme runs 540W atm and 1.287v, 25600 in Time Spy, with a max of 64C core / 82C hotspot.


What settings? I can do 25772 but the hotspot reaches 92C. I cannot go above 2780 max core no matter what I do in Time Spy. (I haven't tried a different driver though.)








I scored 24 627 in Time Spy


Intel Core i7-12700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## nordskov

Acegr said:


> what settings? I can do 25772 but hotspot reaches 92c. I cannot go above 2780 max core no matter what I do on timespy. (I havent tried a different driver though).
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 24 627 in Time Spy
> 
> 
> Intel Core i7-12700K Processor, AMD Radeon RX 6900 XT x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


I was like, how the ****. But then I saw your RAM running way faster. Are you on another BIOS? If so, how did you change it? Can't get mine faster than 2136 without losing performance. You've got a huge gap between temp and hotspot. I used Thermal Grizzly Conductonaut liquid metal and dropped to a max of 15-20C between temp and hotspot so it doesn't throttle. Haven't changed it for a year and it's still running strong on the temps.


----------



## GreedyMuffin

I saw 99C in Time Spy, is that messed up? I'll try a clean re-install and re-run the system. :O

Maybe I need to remount the block and check my system. CPU temps seem normal.. Aiiii, that does not sound fun to do atm... What should I expect? I should have plenty of cooling.

EDIT2: Stock run on my WF: 47C core, 72C hotspot. Time Spy.

EDIT: 2800/2125, 15% PL, 383W: 50C core, 80C hotspot. Time Spy.


----------



## Acegr

nordskov said:


> Was like how the ****. But Saw your ram running Way faster. Are you on another bios ? If so how did u change it ? Cant get mine faster Then 2136 without losing performance. You Got a huge gap between temp and hotspot. I used thermal grizzly condonaut liquid metal dropped to 15-20c Max between temp and hotspot so it doesnt throttle. Havent changed it for 1 year still running strong on the temps


I run the LC BIOS. Mine runs on Grizzly LM too. I flashed through Linux. With my 24,200 24/7 clocks I get a max 73C hotspot.


----------



## nordskov

Acegr said:


> I run on LC bios. Mine runs on grizzly LM too. I flashed through linux. With my 24.200 24/7 clock I get max 73c hotspot.


Ain't the 25700 score daily settings??
And isn't it possible to flash the card without Linux??


----------



## Acegr

nordskov said:


> Aint the 25700 score Daily settings ??
> And isnt it possible to flash the card without Linux ??


It is, but I don't know how to flash from Windows. Nah, not daily; I haven't tried gaming with it, there's no reason to, no fps issues at all. Running 385W. If a time comes when it struggles, I will try. By then EU electricity prices might have calmed down.


----------



## nordskov

Acegr said:


> It is but I dont know how to flash with windows. Nah not daily, I havent tried gaming, there is no reason - no fps issue at all. Running 385w. If a time comes it struggles I will try. Till then EU electricity prices might have calmed down.


These are my daily gaming settings.









I scored 22 440 in Time Spy


AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 65536 MB, 64-bit Windows 11




www.3dmark.com





In Time Spy it won't go past 2900MHz in the driver, max 2888. But in gaming, Fire Strike and so on I'm able to do up to about 3GHz on daily settings. For daily stability I set games to a 2950 boost in the driver, playing CoD MW2 and AC Valhalla at a constant 2900-2930MHz boost.


----------



## Acegr

nordskov said:


> This is my Daily settings gaming.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 22 440 in Time Spy
> 
> 
> AMD Ryzen 7 5800X, AMD Radeon RX 6900 XT x 1, 65536 MB, 64-bit Windows 11
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> In Time Spy it won't go past 2900MHz in the driver, max 2888. But in gaming, Fire Strike and so on I'm able to do up to about 3GHz on daily settings. For daily stability I set games to a 2950 boost in the driver, playing CoD MW2 and AC Valhalla at a constant 2900-2930MHz boost.


What are your MPT settings? Can you show full screenshots of the Features tab too? Thanks.

PS: You for sure have a great bin; mine is bad, since I can't go higher than 2780 in Time Spy, plus my temps aren't the best.


----------



## nordskov

Acegr said:


> what's your mpt settings? Can you show full screens of features too? Thanks
> 
> ps. You for sure have a great bin, mine is bad since I cant go higher than 2780 in timespy plus my temps arent the best.


It's actually just:
Temp dependent vmin on
500w limit (+15% in driver)
TDC limits: 450 gfx / 75 soc
1287-1287 gfx voltage
1225-1225 soc voltage
FCLK 2177-2177, FCLKBoostFreq 2177

Driver set to:
Time Spy: 2888 mhz boost, 2132 fast-timings ram
minimum boost 2606 mhz; fan speed curve:
P1: 25-27
P2: 25-40
P3: 28-47
P4: 32-53
P5: 35-58
+15% PL and AMD SAM on.
Same settings for games and Fire Strike etc., just raising the boost higher to 2950mhz (gives a 2900-2930 mhz boost in games). I can do 1.3-1.325v as well, which gives me 3GHz+, but I raise the fan speed a bit more for that, to approx 50%. Still low noise, but I hear it more, so I prefer a 2900 mhz boost at max 35% fan speed.
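For anyone copying fan curves like the P1-P5 list above: Adrenalin interpolates linearly between the configured points. Reading the pairs as (fan %, temperature C), which lines up with the "max 35% fan speed" remark, a sketch of what the driver effectively computes (the interpolation itself is my assumption about Adrenalin's behavior, not something documented in this thread):

```python
# Linear fan-curve interpolation between configured (temp C, fan %) points,
# clamped at both ends. Points taken from the P1-P5 list above, read as
# fan % at temperature (an interpretation, flagged as such).

CURVE = [(27, 25), (40, 25), (47, 28), (53, 32), (58, 35)]  # (temp C, fan %)

def fan_percent(temp_c: float, curve=CURVE) -> float:
    """Interpolate fan duty (%) for a given temperature."""
    if temp_c <= curve[0][0]:
        return curve[0][1]          # below the curve: minimum duty
    if temp_c >= curve[-1][0]:
        return curve[-1][1]         # above the curve: maximum duty
    for (t0, f0), (t1, f1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
```

For example, at 50C this curve sits halfway between the P3 and P4 points, i.e. 30% duty.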


----------



## Acegr

nordskov said:


> Its actually just:
> Temp dependent vmin on
> 500w limit (+15% in driver)
> tdc limits: 450 gfx - 75 soc
> 1287-1287 gfx voltage
> 1225-1225 soc voltage
> FCLK 2177-2177 FCLKBoostFreq 2177
> 
> Driver set to:
> Timespy: 2888 mhz boost, 2132Ft ram
> minimum boost 2606 mhz boost fan speed
> P1: 25-27
> P2: 25-40
> P3: 28-47
> P4: 32-53
> P5: 35-58
> +15% pl and amd sam on
> same settings for games and firestrike etc. just raising boost higher for 2950mhz (gives 2900-2930 mhz boost in games) can do 1.3-1.325v aswell gives me 3ghz + but raising fan speed a bit more for that, to aprox 50% fan speed. still low noise but i hear it more, so i preffer 2900 mhz boost at max 35% fan speed


Thanks. Doesn't that mean you run your GPU at 1.287v all the time, though? Even at idle.


----------



## nordskov

No, it clocks down and the voltage drops just like stock. So it works perfectly as intended ✌😁


----------



## Acegr

nordskov said:


> No it clocks Down and voltage drops just like Stock. So works perfectly as intended ✌😁


My bios bricks out with your settings, thanks anyway


----------



## tsamolotoff

Acegr said:


> My bios bricks out with your settings, thanks anyway


Try to lower FCLK clocks, my 73BF card can't go above 2066 (or maybe 2100), insta reboot if FCLK is set to 2133


----------



## Nizzen

The 7900xtx really making the 6900xt look like the best marvel of technology the world has ever seen 😅


----------



## 99belle99

Nizzen said:


> The 7900xtx really making the 6900xt look like the best marvel of technology the world has ever seen 😅


How do you mean?


----------



## EastCoast

Opps, wrong thread.


----------



## nordskov

Acegr said:


> My bios bricks out with your settings, thanks anyway


Probably the FCLK. Try lowering that a bit 😅


----------



## Acegr

Thing is, no matter what I do I'm hard-stuck at 2780, regardless of how much voltage I put in. If I go to even 2781, Time Spy fails. My MPT settings:


----------



## nordskov

Oh I see, yeah that kinda sucks.. I'm stuck at a max of 2888 even with higher voltage, but all the other things (Fire Strike Ultra, games like CoD MW2, AC Valhalla etc.) all go way beyond 2900MHz effective boost. My brother's Strix 240 AIO 6900 XT XTXH does about 3GHz before failing Time Spy, but for some reason, even though his clocks are higher than mine, he scores a bit lower. He only uses a voltage of 1.277v; he really got a great card...


----------



## GreedyMuffin

Got this weird max boost behaviour. Could somebody help explain how it works?

At 1.225V I could go 2.8GHz consistently (the 2850 max boost in GPU-Z was not hit even once) and it went fine. Sometimes it crashes as it goes to max boost (2850?) in Time Spy test 2. I can't understand what makes the card sometimes boost to max, while other times the max is the normal 2850 minus ~50, i.e. an effective 2800MHz.


----------



## nordskov

GreedyMuffin said:


> Got this weird max boost behaviour. Could somebody help explain how it works?
> 
> At 1.225V I could go 2,8ghz consistently (2850 max boost in GPU-Z, was not hit even once) and it went fine. Sometimes it crashes as it goes to max boost (2850?) in TimeSpy Test 2. I can't understand what makes the card sometime boost to max, and sometimes max is the normal 2850-50 eff. 2800mhz.


Mine also fails at GT2. Max boost is 2888 no matter how much voltage (tried up to 1.350v), even though Fire Strike Ultra, CoD MW2, AC Valhalla and every other freaking game easily go 2967+ MHz; just TS GT2 crashes. Tried different voltages, different memory speeds etc. But GT2 fails if I go higher than 2888.


----------



## damric

Yeah for Time Spy I just have a separate overclock profile with lower clock speeds. That new Speedway bench will really let you get away with unstable higher clocks though.


----------



## nordskov

damric said:


> Yeah for Time Spy I just have a separate overclock profile with lower clock speeds. That new Speedway bench will really let you get away with unstable higher clocks though.


Hmm, in my case Speed Way can go as high as Fire Strike, games, Port Royal etc. Only Time Spy has issues. Every other game and benchmark runs just fine: no artifacts, no instability etc.


----------



## GreedyMuffin

nordskov said:


> Hmm in my case speedway Can go as High as firestrike games and port royal etc. Only timespy thats Got issues. Every other game and benchmark etc all runs just fine no artifacts unstability etc


Same experience here. I may have worded it wrong, but I don't know why it will go to full boost speed sometimes, but other times not. This is really annoying, as the card will work at 2.8GHz, but not consistently: it sometimes goes to the max 2850MHz, while other times it stays at 2.8GHz and passes several runs.


----------



## nordskov

GreedyMuffin said:


> Same experience here. I may have worded myself wrong, but I don't know why it will go to full boost speed some times, but other times not. This is really annoying as the card will work at 2,8ghz, but not consistently as it sometimes goes to max 2850mhz, but other times stay at 2,8ghz and pass several runs.


I guess it depends on load, temps, voltages, etc. At 2888 MHz my average clock speed is pretty solid, around 2840-2845 MHz in Time Spy, but the weird part is that every other application I run happily does 2965 boost in the driver (effective 2900-2930 MHz) all day long... I don't get it, but I guess I shouldn't. Maybe another BIOS would fix it, but if that requires Linux I'm at a dead end, lol.


----------



## damric

I don't game with clocks that high, but I do crank them up for benchmarking hwbot runs. For gaming I tune for about 150 watts. If I'm running [email protected] I have a profile for 120 watts where it stays at 800mV around 1850MHz. I just can't justify the energy usage to keep it going at full bore.


----------



## ApolloX30

These different Benchmarks as well as applications have quite different loads, so you need different settings for the maximum performances.


----------



## nordskov

damric said:


> I don't game with clocks that high, but I do crank them up for benchmarking hwbot runs. For gaming I tune for about 150 watts. If I'm running [email protected] I have a profile for 120 watts where it stays at 800mV around 1850MHz. I just can't justify the energy usage to keep it going at full bore.


I see. But even at those settings, the games I play only use around 280-360 W, so not much more than stock 😅


----------



## ManniX-ITA

Hey everyone, I need a little help optimizing my software for Adrenalin.

What I need to know is which processes are started when the "Record and Stream" feature is used, for both recording and live streaming; they could be different.

If more than one process is spawned when starting a recording/stream, which one is actually writing to disk or sending data over the network?

You can use Process Hacker 2 and watch the processes filtered by the keyword "amd" (filter box at the top right, I guess).
You should have an "I/O data rate" column by default.






Downloads - Process Hacker — a free, powerful, multi-purpose tool that helps you monitor system resources, debug software and detect malware (processhacker.sourceforge.io).





Would really appreciate, thanks!
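To make the idea concrete, here's a minimal Python sketch of the same check: given a snapshot of process names and disk-write rates (the "Name" and "I/O data rate" columns Process Hacker shows), pick out the AMD-related process that's actually writing the recording to disk. The process names and rates below are illustrative guesses, not confirmed Adrenalin internals.

```python
# Hypothetical snapshot of (process name, disk-write rate in bytes/s) --
# the columns you'd read off Process Hacker while a recording is running.
# The AMD process names here are illustrative, not confirmed.
snapshot = [
    ("AMDRSServ.exe", 45_000_000),
    ("RadeonSoftware.exe", 12_000),
    ("amdow.exe", 0),
    ("explorer.exe", 300_000),
]

def recording_process(procs, keyword="amd"):
    """Among processes whose name contains `keyword`, return the one with
    the highest disk-write rate -- most likely the actual recorder."""
    matches = [p for p in procs if keyword in p[0].lower()]
    return max(matches, key=lambda p: p[1]) if matches else None

print(recording_process(snapshot))  # the heaviest AMD disk writer
```

The same filtering could be done live with a library like psutil instead of reading the numbers off Process Hacker by hand.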


----------



## GreedyMuffin

nordskov said:


> I guess it depends on load, temps, voltages, etc. At 2888 MHz my average clock speed is pretty solid, around 2840-2845 MHz in Time Spy, but the weird part is that every other application I run happily does 2965 boost in the driver (effective 2900-2930 MHz) all day long... I don't get it, but I guess I shouldn't. Maybe another BIOS would fix it, but if that requires Linux I'm at a dead end, lol.


Yeah, but that's the thing. Sometimes it goes to max boost (as set on the slider and shown in GPU-Z) and is unstable; other times it boosts as normal and passes. That's what I'm trying to figure out.

Happens at different voltages as well. Tested 1.10 V, 1.15 V, 1.20 V, 1.25 V.


----------



## tsamolotoff

Timespy seems to be really hammering both Infinity cache (as score drops drastically if more than one monitor is active) and CUs, while the other tests and games don't really stress them that much, which allows the GPU to run at higher clocks.


----------



## jonRock1992

nordskov said:


> Mine also fails at GT2. Max boost is 2888 MHz no matter how much voltage (tried up to 1.350 V), even though Firestrike Ultra, CoD MW2, AC Valhalla and every other freaking game easily runs 2967+ MHz; only Time Spy GT2 crashes. I tried different voltages, different memory speeds, etc., but GT2 fails if I go higher than 2888.


My red devil ultimate behaves the same way. BF2042 is even worse than Timespy for me lol.


----------



## nordskov

jonRock1992 said:


> My red devil ultimate behaves the same way. BF2042 is even worse than Timespy for me lol.


***. Time Spy maxes out at 2888 MHz boost in the driver (2840 MHz effective), but BF2042, AC Valhalla and CoD MW2 all run 2900-2930 no problem 🤷‍♂️ Funny how cards are different.


----------



## jonRock1992

nordskov said:


> ***. Time Spy maxes out at 2888 MHz boost in the driver (2840 MHz effective), but BF2042, AC Valhalla and CoD MW2 all run 2900-2930 no problem 🤷‍♂️ Funny how cards are different.


It's possible the actual game works well on it now, as I was playing the beta before it launched.


----------



## freddy85

Anyone tried to flash the AMD Radeon RX 6900 XT LC bios to AORUS Radeon RX 6900 XT Xtreme WATERFORCE WB?


----------



## Veii

freddy85 said:


> Anyone tried to flash the AMD Radeon RX 6900 XT LC bios to AORUS Radeon RX 6900 XT Xtreme WATERFORCE WB?


It might not work, due to display I/O matrix issues (dual HDMI vs. HDMI + Type-C).
The Aorus Master already has "unlocked mem", though.
You can try, but please only try with a hardware flasher.


----------



## BIaze

While I'm still deciding between a 3080 Ti and a 6900 XT, is there anything to note for now? Like which models have good cooling, and which values to overclock/change in MPT. The last time I had a 6900 XT was mid-2021, and I was limited to changing the power limit via MPT.


----------



## SamSqautch84

nordskov said:


> Of course. I'm running a Sapphire Toxic Extreme 6900 XT with the stock 360 AIO, repasted though, as the stock thermal paste was ****. Repasting with Kryonaut Extreme liquid metal dropped temps about 25-30 C on the hotspot; before, even at stock with only the boost raised from 2606 MHz to 2700, I easily hit 90 C hotspot and throttling. In MPT I bumped voltage to 1.287 (running that for 1.5 years now),
> 480 W limit plus 15% more in the driver, so I'm hitting about 536 W peaks in Time Spy. My MPT settings: temperature-dependent Vmin on; in power and voltage settings I'm at a 480 W power limit, TDC limits 450-75,
> Vmin low/high (mV) at 1287/1287, GFX and SOC at 1250/1250; driver set to 2132 MHz fast-timing RAM, boost set to 2872, low boost 2426 MHz. When gaming, max boost is set to 2937 MHz (gives me 2900-2910 MHz stable gaming), and with 2872 I'm hitting 25000+ in Time Spy at daily settings. Max temps in Time Spy at 536 W peaks are around 66 C, with the hotspot at 82 C.


I have a reference AMD 6900 XT on a custom loop with a dual 400mm radiator. I currently run it with a 350 W limit, 370 TDC, and the DS_ parameters unchecked in Feature Control. My card crashes every time it hits 2660 MHz. I can set the max clock higher than 2660 in Radeon Settings and it won't crash as long as it doesn't actually hit that. I can run it at 2625 no problem. It really feels like a limiter rather than instability. Is there anything I can do to get past that? Or are the reference cards that much more limited in what they can do? I've attached GPU-Z for info on my card and my personal best in Time Spy. At the time I did the benchmarks, in August 2021, I was ranked 33rd for 5800X/single 6900XT.

My best TimeSpy run


----------



## 99belle99

I have a reference card too, on the stock cooler, and I have the same issue of crashing in the same range you do. I actually scored over a hundred points more in graphics score than you did in Time Spy with the stock cooler.


----------



## damric

Needs more voltage


----------



## 99belle99

damric said:


> Needs more voltage


I put 1250 in MPT and still crashed. Just a bad bin. Everyone with really good Time Spy scores has an XTXH die.


----------



## damric

My reference card isn't too shabby. It comes down to how much voltage and power I want to feed it versus how cold my water loop is.









Luke's 3DMark - Time Spy (GPU) score: 23910 marks with a Radeon RX 6900 XT — ranked #38 worldwide and #17 in the hardware class (hwbot.org).





That's 1.275 V from MPT, clock sliders at 2750/2850, VRAM at 2100 fast timings.

Also that's with a lowly Ryzen 5 5600G APU, X470 board (PCIE 3.0), and some generic Crucial Ballistix DDR4-3600.


----------



## 99belle99

damric said:


> My reference card isn't too shabby. It comes down to how much voltage and power I want to feed it versus how cold my water loop is.
> 
> Luke's 3DMark - Time Spy (GPU) score: 23910 marks with a Radeon RX 6900 XT — ranked #38 worldwide and #17 in the hardware class (hwbot.org).
> 
> That's 1.275 V from MPT, clock sliders at 2750/2850, VRAM at 2100 fast timings.
> 
> Also that's with a lowly Ryzen 5 5600G APU, X470 board (PCIE 3.0), and some generic Crucial Ballistix DDR4-3600.


Why does your memory on 6900 XT report 2388MHz?


----------



## damric

99belle99 said:


> Why does your memory on 6900 XT report 2388MHz?


lol, I don't know, and I didn't notice until you pointed it out. For the benchmark I dialed it in to 2100 MHz, since above that I was getting decreased performance.


----------



## Najenda

Guys, I cannot undervolt this card: when I set 1150 it crashes. I tried a lot but couldn't find the magic numbers. I'm playing at 1080p, so for now I use power-save mode. I don't need that much performance; I want to downclock for lower wattage and temps. I also saw 84 C core / 99 C hotspot while trying 4K Elden Ring with VSR as a benchmark. Does the Merc 319 have bad cooling?

This card at this stage is still a hell of a beast, so I want to stay at 150 W and 60-70 C at lower clocks. Is that possible?


----------



## EastCoast

Najenda said:


> Guys, I cannot undervolt this card: when I set 1150 it crashes. I tried a lot but couldn't find the magic numbers. I'm playing at 1080p, so for now I use power-save mode. I don't need that much performance; I want to downclock for lower wattage and temps. I also saw 84 C core / 99 C hotspot while trying 4K Elden Ring with VSR as a benchmark. Does the Merc 319 have bad cooling?
> 
> This card at this stage is still a hell of a beast, so I want to stay at 150 W and 60-70 C at lower clocks. Is that possible?


Did you set the power limit to max?
And set clock rates to default boost?


----------



## bernek

kairi_zeroblade said:


> Actually I am already using that before on air..not to tricky to setup (though I have a 6800XT) just needs more time for your test and those reboots in between test are pretty much the annoying part..


Can you please share the settings? I have a 6800 XT Red Devil and temps are great. I tried 1.2 V with temperature-dependent Vmin, but I had to type in the same values for SOC or VCORE because it didn't reach those voltages otherwise. Thanks a lot!


----------



## andreagtr

Hi guys, sorry to disturb, but I'm back to team red after some months and I think I need a little help...
I got a used RX 6900 XT and I'm trying to fix an issue (hard stuttering in Warzone 2; only WZ2, other games and multiplayer work very well). These are some pics (the first is one I found here...) taken after some Time Spy runs, and I need to know if it's all good (watts, amps, temps, score) or whether I have to check some things...

My specs:
R5 5600X PBO
ROG B550-I
G.Skill Royal @ 3800C14
750 W Silverstone
Phanteks Shift X

PS: I've tried rage mode, undervolt mode, and manual, but I keep getting the same results, same temps (76/110 GPU/hotspot).

Many thanks to all, and happy holidays!
















Sent from my SM-F916B using Tapatalk


----------



## tsamolotoff

andreagtr said:


> on warzone 2


Pretty sure this has nothing to do with GPU, it's something related to Windows and shader compilation issues in the game itself.


----------



## kairi_zeroblade

tsamolotoff said:


> Pretty sure this has nothing to do with GPU, it's something related to Windows and shader compilation issues in the game itself.


altogether with windows 11 crapping up games as well..


----------



## bernek

kairi_zeroblade said:


> altogether with windows 11 crapping up games as well..


Since I use my PC only for gaming, should I go with Windows 10? What version do you recommend?


----------



## kairi_zeroblade

bernek said:


> Since I use my PC only for gaming. Should I go with Windows 10 ? What version you recommend ?


My best gaming platform so far was Windows 10 21H2, but I currently use the final Windows 10 build, 22H2. All drivers are updated as well.


----------



## damric

Najenda said:


> Guys, I cannot undervolt this card: when I set 1150 it crashes. I tried a lot but couldn't find the magic numbers. I'm playing at 1080p, so for now I use power-save mode. I don't need that much performance; I want to downclock for lower wattage and temps. I also saw 84 C core / 99 C hotspot while trying 4K Elden Ring with VSR as a benchmark. Does the Merc 319 have bad cooling?
> 
> This card at this stage is still a hell of a beast, so I want to stay at 150 W and 60-70 C at lower clocks. Is that possible?


Yes, that's how I usually run my card. I manually set the clock speed to a maximum of 1850 MHz and the voltage to 800 mV; the card then sips power at around 120-130 W but still gives quite good performance and low temperatures.
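As a rough sanity check on those numbers: dynamic power scales approximately with V²·f, so dropping to 800 mV / 1850 MHz from a stock-ish operating point should land near the reported 120-130 W. The stock figures below (300 W at 1.175 V / 2250 MHz) are illustrative assumptions, not measured values from this card.

```python
def scaled_power(p_watts, v_old, v_new, f_old, f_new):
    """Estimate dynamic power after a voltage/frequency change, using
    P ~ C * V^2 * f. Ignores static leakage, so it's only a ballpark."""
    return p_watts * (v_new / v_old) ** 2 * (f_new / f_old)

# Illustrative stock point: 300 W at 1.175 V / 2250 MHz.
est = scaled_power(300, 1.175, 0.800, 2250, 1850)
print(f"{est:.0f} W")  # ~114 W, in the same ballpark as the reported 120-130 W
```

Leakage and fan power keep the real number a bit above the V²·f estimate, which fits the 120-130 W observation.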


----------



## andreagtr

tsamolotoff said:


> Pretty sure this has nothing to do with GPU, it's something related to Windows and shader compilation issues in the game itself.


OK, I only need to be sure the GPU is good on wattage etc...
And I've read many awesome things about this GPU here; I'd like to know what you recommend doing with it: BIOS mod, washer mod, etc...
Once I've checked that the card is good, I'm pretty interested in those things...
Regards


----------



## chrisz5z

kairi_zeroblade said:


> altogether with windows 11 crapping up games as well..


I've had no issues on Windows 11...performance is the same as it was on 10.


----------



## chrisz5z

andreagtr said:


> Hi guys, sorry to disturb, but I'm back to team red after some months and I think I need a little help...
> I got a used RX 6900 XT and I'm trying to fix an issue (hard stuttering in Warzone 2; only WZ2, other games and multiplayer work very well). These are some pics (the first is one I found here...) taken after some Time Spy runs, and I need to know if it's all good (watts, amps, temps, score) or whether I have to check some things...
> 
> My specs:
> R5 5600X PBO
> ROG B550-I
> G.Skill Royal @ 3800C14
> 750 W Silverstone
> Phanteks Shift X
> 
> PS: I've tried rage mode, undervolt mode, and manual, but I keep getting the same results, same temps (76/110 GPU/hotspot).


I only experience 1-2 stutters at the beginning of the match when jumping from the plane in WZ2, then after that it's smooth the whole match...I'm on Windows 11 by the way.

Could be your CPU, FCLK, RAM OC, Windows config, or even a background process that's causing it. I've had games that didn't like my RAM OC; I relaxed the timings, and then the game ran perfectly. WZ is very CPU/RAM intensive. How much RAM do you have?


----------



## lestatdk

bernek said:


> Since I use my PC only for gaming. Should I go with Windows 10 ? What version you recommend ?


I had to downgrade to 10 because VR was broken in 11, so going to stay here for a while


----------



## andreagtr

chrisz5z said:


> I only experience 1-2 stutters at the beginning of the match when jumping from the plane in WZ2, then after that it's smooth the whole match...I'm on Windows 11 by the way.
> 
> Could be your CPU, FCLK, RAM OC, Windows config, or even a background process that's causing it. I've had games that didn't like my RAM OC, I relaxed the timings, then the game ran perfect. WZ is very CPU/RAM intensive. How much RAM do you have?


Many thanks! To be honest, I've tightened the RAM timings to C14: no errors, no WHEA, no problems. But tomorrow I'll try at stock!
Regards


----------



## tsamolotoff

chrisz5z said:


> I only experience 1-2 stutters at the beginning of the match when jumping from the plane in WZ2, then after that it's smooth the whole match


I experience roughly the same, there's a huge freeze when I jump from the plane (people say it's fixed if you turn on blur and lower texture quality to low, didn't try that yet), but otherwise it's very smooth, ~ 200 fps (CPU bottlenecked) in 1080p and about 150-190 fps in 1440p (probably also CPU limited)


----------



## chrisz5z

tsamolotoff said:


> about 150-190 fps in 1440p (probably also CPU limited)


Wow, I'm getting 120-165fps on max quality. Did you see huge gains in other games too after you put your 6900XT on a custom loop? I've been contemplating doing that myself


----------



## andreagtr

Oh gosh... I really can't understand this GPU...
The first pic is stock with custom fans.
The second pic is -10 power, max boost cut to 2300 (but mem timings on fast).
Why do I get the same score??


----------



## 99belle99

andreagtr said:


> Oh gosh... I really can't understand this GPU...
> The first pic is stock with custom fans.
> The second pic is -10 power, max boost cut to 2300 (but mem timings on fast).
> Why do I get the same score??


I do not know why your scores are so low. Are you using MPT? (I'd say not.) I forget what score I got at stock, but I have a reference card with the stock cooler, and with MPT I can score above 23000 in Time Spy.


----------



## andreagtr

99belle99 said:


> I do not know why your scores are so low. Are you using MPT(I'd say not). I forget what score I got stock but I have reference card with stock cooler and with MPT I can score above 23000 in Timespy.


No, no MPT. The second pic is stock with fans; the first is downclocked and undervolted...
I can't understand it... with my previous GPU (3080) the graphics score was 17000; with this 6900 I get 18000, more or less...


----------



## andreagtr

Maybe I've found 2 interesting things...
First, if I set power +15, frequency at 2400/2500 and 2150 mem, I can reach 300 W during Time Spy, but the system shuts itself down during the demo... (I suppose a weak or bad PSU???)
Second, I'm unable to find and activate hardware-accelerated GPU scheduling in Windows (I've done the reg edit with no luck). Could it be a driver incompatibility with my system?

PS: I've disabled fTPM in the BIOS, and now I can get a 19,200 graphics score with an undervolt and low power/frequency settings.


----------



## tsamolotoff

chrisz5z said:


> Wow, I'm getting 120-165fps on max quality. Did you see huge gains in other games too after you put your 6900XT on a custom loop? I've been contemplating doing that myself


I haven't compared the GPU on air and on water; I basically installed the waterblock after some stock-frequency benchmarks. In general, I'd say you can expect around 150-200 MHz over air clocks if you're willing to increase the voltage over the limit (via VDepMin). Also, Warzone is very CPU-dependent, so tuning your memory/CPU clocks might do more for in-game FPS than GPU-only overclocking.



andreagtr said:


> Second, im unable to find and activate hardware gpu schedule from windows (ive done the reg edit with no luck), it could be a driver incompatibility with my system??


There is no HAGS (and no need to use it) on Radeon GPUs, there was only one beta driver after which AMD disabled that feature.


----------



## lestatdk

andreagtr said:


> Maybe I've found 2 interesting things...
> First, if I set power +15, frequency at 2400/2500 and 2150 mem, I can reach 300 W during Time Spy, but the system shuts itself down during the demo... (I suppose a weak or bad PSU???)
> Second, I'm unable to find and activate hardware-accelerated GPU scheduling in Windows (I've done the reg edit with no luck). Could it be a driver incompatibility with my system?
> 
> PS: I've disabled fTPM in the BIOS, and now I can get a 19,200 graphics score with an undervolt and low power/frequency settings.


Monitor your temperatures. My GPU hit the thermal shutdown temperature with the stock cooler when I overclocked. And it resulted in a system shutdown


----------



## andreagtr

tsamolotoff said:


> I've not compared the GPU on air and on water, I basically installed waterblock after some stock frequency benchmarks. In general, I'd say you can expect around 150-200 mhz over air clocks if you willing to increase voltage over the limit (via vdepmin). Also, warzone is very CPU-dependant, so tuning your memory/CPU clocks might be more useful with regards of increasing in-game FPS as compared to GPU-only overclocking.
> 
> 
> There is no HAGS (and no need to use it) on Radeon GPUs, there was only one beta driver after which AMD disabled that feature.


Many thanks! Very helpful info!!!



----------



## andreagtr

lestatdk said:


> Monitor your temperatures. My GPU hit the thermal shutdown temperature with the stock cooler when I overclocked. And it resulted in a system shutdown


GPU temps are always around 75-77 C, except the hotspot, which easily reaches 110-115; I haven't checked the hotspot temp during a hard OC bench...


----------



## lestatdk

andreagtr said:


> GPU temps are always around 75-77 C, except the hotspot, which easily reaches 110-115; I haven't checked the hotspot temp during a hard OC bench...


If the hotspot is at 115, I'm sure it's shutting down due to thermal protection.

Which card is it? Mine is an MSI Gaming X Trio, and the stock cooler was horrible.


----------



## lestatdk

Here's the data for my card. Others are very similar


----------



## tolis626

Ok, I had given up overclocking my POS 6900XT (Of the Nitro+ SE variety) and had left it running at 2600MHz set for the past 6 months or so. But Witcher 3's next-gen update dropped and, seeing how brutal it is to run, I decided I'd try to overclock the thing a bit more if I can in the hopes that I might get a few more FPS (1440p Ultra+ RT Ultra settings). I then remembered exactly why I had stopped trying with this card. Even at 1.2V with TempblahblahVMin and 400W PL (I'm on the stock cooler, so it's running hot when gulping down 350W+, but it's not reaching 110C, and it was just for testing), I'm not able to do even 2700MHz set in Adrenaline, I crash within a couple minutes in the game. Even 2675MHz set will crash. I don't even bother with TimeSpy, as it will barely pass even at 2600MHz. I've tried a lot of things so far, higher voltages, lower voltages, different RAM settings (PS : my RAM's quite good, can scale performance all the way to 2150MHz FT with 1.3V VMem in MPT), undervolting SOC to 1075-1100mV, pushing SOC clock and Fclk, lowering them, nothing works, the damn thing refuses to go higher.

So, it all comes down to this. If no one has any tips that could help my situation, can I at least get some sort of award from you guys for having the worst overclocking 6900XT here? You know, as a consolation prize. I'm tight on money, otherwise I would've sold the thing and got a 7900XTX (even with all its problems, I refuse to go NVidia).


----------



## 99belle99

2600 MHz on the stock air cooler is actually well above the spec of a 6900 XT. Those that go above that either have an XTXH die, or water cooling, or both an XTXH die and water cooling for the best results.


----------



## damric

Yeah I was going to say that's very good for air cooling. I think expectations need to be tuned.


----------



## tolis626

99belle99 said:


> 2600MHz on a stock air cooler is actually well above the spec of a 6900 XT. Those that are above that are either on a XTXH die or on water cooling or both a XTXH die and water cooling for best results.


I know what these other results are, I have been around since the beginning, I just didn't participate because, well, I had given up. Still, I disagree. Max stable being 2600MHz (not actual clocks, mind you, 2600MHz set in Wattman, so more like 2550MHz in games when power is pushed) isn't good. It's not even mediocre. It's bad from all the results I've seen. It's not like I want to hit 3GHz with stock volts and power limits. No, this card is screaming with fans at 100%, set to 1.2V with 375W pushed through it, and it can barely breach 2600MHz. Water cooling isn't magic, if I had it under water I would be able to push the voltage and power more, but it wouldn't magically make my card not suck.


damric said:


> Yeah I was going to say that's very good for air cooling. I think expectations need to be tuned.


Again, see above. There are cards (and not a few) that will do better than mine at all-stock settings by just pressing the "overclock GPU" button in the driver. Heck, some are set above 2600MHz at stock.

Anyway. Forgive me if I sound a bit rude, but I'm extremely salty right now.


----------



## kairi_zeroblade

tolis626 said:


> Ok, I had given up overclocking my POS 6900XT (Of the Nitro+ SE variety) and had left it running at 2600MHz set for the past 6 months or so. But Witcher 3's next-gen update dropped and, seeing how brutal it is to run, I decided I'd try to overclock the thing a bit more if I can in the hopes that I might get a few more FPS (1440p Ultra+ RT Ultra settings). I then remembered exactly why I had stopped trying with this card. Even at 1.2V with TempblahblahVMin and 400W PL (I'm on the stock cooler, so it's running hot when gulping down 350W+, but it's not reaching 110C, and it was just for testing), I'm not able to do even 2700MHz set in Adrenaline, I crash within a couple minutes in the game. Even 2675MHz set will crash. I don't even bother with TimeSpy, as it will barely pass even at 2600MHz. I've tried a lot of things so far, higher voltages, lower voltages, different RAM settings (PS : my RAM's quite good, can scale performance all the way to 2150MHz FT with 1.3V VMem in MPT), undervolting SOC to 1075-1100mV, pushing SOC clock and Fclk, lowering them, nothing works, the damn thing refuses to go higher.


Hmm, your Nitro+ seems weird...

Here's mine under warm water, doing fine somehow. They say it's a crappy chip that will only do around 2.6-ish GHz core clock.

Have you tried repasting/repadding and cleaning it, and giving it another go?



99belle99 said:


> 2600MHz on a stock air cooler is actually well above the spec of a 6900 XT. Those that are above that are either on a XTXH die or on water cooling or both a XTXH die and water cooling for best results.


Mine did the same clocks on the air cooler that I'm doing now on water, yet mine isn't XTXH; dunno if that's special?

EDIT: Tried my bench settings w/o the MPT settings for Benching (DS States off and other stuff)


----------



## ju-rek

Is it good ASIC Quality?


----------



## andreagtr

lestatdk said:


> Here's the data for my card. Others are very similar
> 
> 
> View attachment 2591037


Reference card, stock cooler.
PS: the fans never go over 2900 RPM; Afterburner shows 89% at full speed... is that normal?
Thanks a lot


----------



## kairi_zeroblade

andreagtr said:


> PS the fans never go over 2900rpm, afterburner show 89% at full speed... is it normal?


Depends on your temps? Are your fan settings on auto?


----------



## Kodo28

tolis626 said:


> I know what these other results are, I have been around since the beginning, I just didn't participate because, well, I had given up. Still, I disagree. Max stable being 2600MHz (not actual clocks, mind you, 2600MHz set in Wattman, so more like 2550MHz in games when power is pushed) isn't good. It's not even mediocre. It's bad from all the results I've seen. It's not like I want to hit 3GHz with stock volts and power limits. No, this card is screaming with fans at 100%, set to 1.2V with 375W pushed through it, and it can barely breach 2600MHz. Water cooling isn't magic, if I had it under water I would be able to push the voltage and power more, but it wouldn't magically make my card not suck.
> 
> Again, see above. There's cards (not few either) that will do better than mine at all stock settings by just pushing the "overclock GPU" button in the driver. Heck, there's some that are set above 2600MHz at stock.
> 
> Anyway. Forgive me if I sound a bit rude, but I'm extremely salty right now.


You're right, water cooling is not magic, but it can help quite a lot when trying to push boundaries.
Did you try playing with undervolting? Some cards out there hate being pushed to max watts/voltage and do a lot better with an undervolt at max frequency.


----------



## andreagtr

kairi_zeroblade said:


> depends on your temps? is your fan settings on auto?


No, I've set a custom curve with 100% at 70°, but the GPU goes to 77° and the fans won't go over 89%...
I don't know if it's a reading error or if there's some setting I've forgotten...


----------



## kairi_zeroblade

andreagtr said:


> No, I've set a custom curve with 100% at 70°, but the GPU goes to 77° and the fans won't go over 89%...
> I don't know if it's a reading error or if there's some setting I've forgotten...


77 C is kinda hot for my personal taste. Is the card new? Have you tried repasting it? Also, how are the other temps? I'm not on an air cooler, so I really don't know what could be wrong, but I never got temps that high on the air cooler when the card was still fairly new.


----------



## andreagtr

kairi_zeroblade said:


> 77c is kinda hot for my personal taste, is the card new? have you tried repasting it? also how are the other temps? I am not on air cooler so I really don't know what could go wrong, also I never got that high temps when I was on air cooler when the card is still kinda new.


Bought used but under warranty, never repasted. It's installed in a mini-ITX Shift X with very, very poor airflow; it doesn't have any fans pointed directly at it...


----------



## damric

Water was indeed magic for my reference card.

On the stock cooler I was constantly 95C+ on the hot spot at barely 300 W. Now it's barely 60C at 450 W.

Cooler chips are also much more stable at high clocks, thanks to lower leakage and less risk of thermal runaway.
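The leakage point can be made concrete with a common rule of thumb: leakage current grows roughly exponentially with die temperature, often approximated as doubling every ~10 C (the real exponent varies by process node; this is an assumed figure, not an RDNA2-specific one). A minimal sketch:

```python
def leakage_ratio(t_hot_c, t_cold_c, doubling_c=10.0):
    """Rough relative leakage between two die temperatures, assuming
    leakage doubles every `doubling_c` degrees C (rule of thumb only;
    the true behavior depends on the silicon process)."""
    return 2 ** ((t_hot_c - t_cold_c) / doubling_c)

# 95 C hotspot on air vs 60 C on water:
print(round(leakage_ratio(95, 60), 1))  # ~11.3x more leakage at 95 C
```

That order-of-magnitude difference in wasted leakage power is why a cold loop leaves more of the power budget for actual switching at high clocks.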


----------



## thomasck

@andreagtr what are the specs of the rig?

Try increasing the PL with MPT. In Adrenalin, try setting min clock to 2000, max to 2600 and VGPU around 1150.
You've got to experiment; not all cards are the same. I get all sorts of results, even low scores with higher clocks, but it depends a lot on what you are setting.
I've been using 335 PL, 335 TDC, min clock 2000, max clock 2725 with VGPU at 1150 and the power-limit slider all the way to the right. If I set min clock to 2100 and keep all the rest the same, I get lower scores. Assuming the rest of your PC is fine, you will only nail down the clocks by trying, and trying...
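That trial-and-error loop is easier if you log each run. A tiny sketch of the bookkeeping, with hypothetical settings tuples and scores (not thomasck's actual results):

```python
# Hypothetical log: (min_clock, max_clock, vgpu_mV) -> Time Spy graphics score.
results = {
    (2000, 2725, 1150): 21890,
    (2100, 2725, 1150): 21640,  # higher min clock, lower score
    (2000, 2650, 1150): 21420,
}

def best_setting(runs):
    """Return the settings tuple with the highest recorded score."""
    return max(runs, key=runs.get)

print(best_setting(results))  # the combo worth keeping as a daily profile
```

Even a spreadsheet works; the point is that one-variable-at-a-time changes with recorded scores converge much faster than re-guessing from memory.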


----------



## tolis626

kairi_zeroblade said:


> hmm..your Nitro+ seems weird..
> View attachment 2591055
> 
> 
> here's mine under warm water..doing fine somehow, they say its a crappy chip that will only do around 2.6-ish ghz core clock..
> 
> tried repasting/repadding and cleaning it? and give it another go?
> 
> 
> 
> Mine's doing the same clocks I am doing now on water with the Air cooler, yet I am not XTXH, dunno if this is special?
> 
> EDIT: Tried my bench settings w/o the MPT settings for Benching (DS States off and other stuff)
> View attachment 2591057


See? That's more like it! Same voltage, same-ish power consumption, I max out at 2600MHz in Adrenaline (maybe I can get away with 2625MHz, MAYBE). These clocks you're showing would probably light my card on fire.

As for cleaning/repasting, well, the card is clean, dusted regularly. It's not overheating (Hotspot stays mostly <100C even when pushed to >350W, edge/junction delta is 20-25C), so I haven't seen any reason to bother with disassembly etc. If it was even a decent clocker, I would've done so long ago. But there's no point pushing it, and with the clocks I am able to run on it, the temperatures are more than fine (<90C hotspot, usually low 80s after prolonged gaming). Other temperatures aren't an issue either, at least as per HWiNFO64, with memory and VRMs maxing out anywhere between the high 50s and mid 60s, depending on the component. Memory clocks aren't the problem, before anyone asks, the card scales all the way to 2150MHz FT in performance (as I said, even with reduced memory and SOC voltage, it doesn't sweat), and I have also tried stock memory (2000MHz default timings, yes) and it crashes just as fast at high core clocks.

Basically, a textbook example of a trash overclocker.
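The edge/junction delta mentioned above is worth watching over time, since a growing delta usually points at paste or pad contact problems. A quick sketch (the temperature figures are illustrative, not logged data):

```python
# Edge-vs-hotspot delta check, using figures in the ballpark of the post above.
def hotspot_delta(edge_c: float, hotspot_c: float) -> float:
    """Return junction-minus-edge temperature delta in °C."""
    return hotspot_c - edge_c

# (edge, hotspot) pairs while the card is pushed hard -- made-up sample values
samples = [(75, 98), (78, 100), (80, 103)]
deltas = [hotspot_delta(e, h) for e, h in samples]
worst = max(deltas)
print(f"worst delta: {worst} °C")  # ~20-25 °C is commonly cited as normal
```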


Kodo28 said:


> You right, water cooling is not magic, but it can help quite a lot when trying to push boundaries.
> Did you try to play with undervoltage? Some cards out there, hate and don't do well if pushed on max watts/voltage and do a lot better with undervolting/max frequency.


I didn't mean to sound like an a-hole when I said that, just to be clear. But as I said above, it fails to overclock well before temperatures become an issue. 2650MHz is semi stable, but it's a 50-50 chance whether it'll crash an hour into my gaming session. So yeah, while water would help with temperatures, it would do so at levels that I can't reach anyway. And I ain't going water for it to maybe do 2650 stable.

As for undervolting, well wouldn't you know, it's not great either. I mean, I think it can do its regular 2600MHz at 1125mV in MPT, but I haven't tested extensively, as at this point I'm more concerned about absolute performance. But for a sanity check, I did try running higher clocks at lower voltages, but no dice, it will crash the same way either way. I also thought that it might be my PSU, but I have run the thing through Superposition with a 400W PL and 1.2V while my 5900X was going full tilt at a video encode (that's tuned with PBO+CO, 180W PPT) in Handbrake. The whole PC was consuming north of 650W and the PSU didn't budge for a moment, its 12V rail stayed at 12.04V.
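The PSU sanity check above is just a power budget sum. A rough sketch of the arithmetic (the GPU and CPU figures come from the post; the "rest of system" and PSU rating are assumptions for illustration):

```python
# Rough PSU budget for the stress test described above.
gpu_w = 400        # Superposition with a 400 W PL at 1.2 V (from the post)
cpu_ppt_w = 180    # 5900X PBO+CO tuned, full-tilt Handbrake encode (from the post)
rest_w = 80        # board, RAM, fans, drives -- a common ballpark, not measured
psu_rated_w = 850  # hypothetical PSU rating for the example

total = gpu_w + cpu_ppt_w + rest_w
headroom = psu_rated_w - total
print(f"total draw ~{total} W, headroom {headroom} W")
```

A total "north of 650 W" with a stable 12 V rail, as reported above, is consistent with a PSU that still has real headroom.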


----------



## andreagtr

thomasck said:


> @andreagtr what's the specs of the rig?
> 
> Try increasing PL with MPT. In Adrenalin, try setting min clock to 2000, max to 2600 and vGPU around 1150.
> You gotta try; not all cards are the same. I get all sorts of results, even low scores with higher clocks, but it depends a lot on what you are setting.
> I've been using 335 PL, 335 TDC, min clock 2000, max clock 2725 with vGPU at 1150 and the PL slider all the way to the right. If I set min clock to 2100 and keep all the rest the same, I get lower scores. Assuming the rest of your PC is fine, you will only nail down the clocks by trying, and trying..


R5 5600X
ROG B550-I
G.Skill 4800C18
Silverstone 750W Platinum

The hotspot is constantly at 110° at stock or with the PL slider to the right, and if I try to OC hard (freq, mem, PL) the PC shuts down during benchmarks (at 300W, more or less).
I'm wondering whether to use MPT, but I'm afraid of the temps and the system shutdowns... I think I'd get worse results if I push the TDP over 290W...

Sent from my SM-F916B using Tapatalk


----------



## 99belle99

andreagtr said:


> R5 5600X
> ROG B550-I
> G.Skill 4800C18
> Silverstone 750W Platinum
> 
> The hotspot is constantly at 110° at stock or with the PL slider to the right, and if I try to OC hard (freq, mem, PL) the PC shuts down during benchmarks (at 300W, more or less).
> I'm wondering whether to use MPT, but I'm afraid of the temps and the system shutdowns... I think I'd get worse results if I push the TDP over 290W...
> 
> Sent from my SM-F916B using Tapatalk


Is that RAM frequency a typo? It would be better for your system to run 1:1:1.


----------



## thomasck

@andreagtr what speed is that RAM set to run at? And what voltage?


----------



## tolis626

Well, just to round up my frustration, I tried undervolting the card through MPT. At 1125mV it's insta-crashing in Witcher 3 at 2625MHz, and at 1150mV I did manage to play for a grand total of 5 minutes before it decided to remember that it's a pos. Really, this card is a case study for silicon lottery losers. Screw it. Back to 2600MHz at 1175mV it is. I'll probably let it live out its days in shame and disgrace, knowing that it failed me when I needed that performance the most. Meanwhile, imma go read up on the 7900XTX thread and a) be envious of the performance people with it are getting and b) feel good about myself for not having one, because at least the drivers on the 6900XT are working perfectly well. That last one is probably not gonna last, though.


----------



## chrisz5z

tolis626 said:


> Well, just to round up my frustration, I tried undervolting the card through MPT. At 1125mV it's insta-crashing in Witcher 3 at 2625MHz, and at 1150mV I did manage to play for a grand total of 5 minutes before it decided to remember that it's a pos. Really, this card is a case study for silicon lottery losers. Screw it. Back to 2600MHz at 1175mV it is. I'll probably let it live out its days in shame and disgrace, knowing that it failed me when I needed that performance the most. Meanwhile, imma go read up on the 7900XTX thread and a) be envious of the performance people with it are getting and b) feel good for myself for not having one because at least the drivers on the 6900XT are working fully well. That last one is probably not gonna last, though.


Just sell it & get another 6900XT, a 6950XT...or take your chances with a 7900XTX. They are having issues with the silicon lottery & the quality control lottery...hasn't been all sunshine & rainbows over there.


----------



## andreagtr

99belle99 said:


> Is that RAM frequency a mis type as it would be better for your system to run 1:1:1


I know. Usually I set it at 3900C14 / 1900 FCLK, but I get stutter in Warzone 2. Now I'm trying 4800C18 / 1200 FCLK; it seems stable, but I'll have to check it properly these days...

Sent from my SM-F916B using Tapatalk


----------



## andreagtr

thomasck said:


> @andreagtr at what speed that ram is set to run? and what voltage?


4800C18 at 1.500V stock XMP; FCLK on auto sets itself to 1200.

Sent from my SM-F916B using Tapatalk


----------



## lestatdk

tolis626 said:


> Well, just to round up my frustration, I tried undervolting the card through MPT. At 1125mV it's insta-crashing in Witcher 3 at 2625MHz, and at 1150mV I did manage to play for a grand total of 5 minutes before it decided to remember that it's a pos. Really, this card is a case study for silicon lottery losers. Screw it. Back to 2600MHz at 1175mV it is. I'll probably let it live out its days in shame and disgrace, knowing that it failed me when I needed that performance the most. Meanwhile, imma go read up on the 7900XTX thread and a) be envious of the performance people with it are getting and b) feel good for myself for not having one because at least the drivers on the 6900XT are working fully well. That last one is probably not gonna last, though.


Have you tried running it at higher voltages? If you can keep the temps down, that is. My card was horrible with the stock cooler, but with water cooling it can do 2780 at 1245mV, whereas with the stock cooler it struggled to do 2600 at 1175mV.


----------



## swordsx48

Has anyone used flashing to fix the stuttering? I just wanna raise the max wattage from 255 to 300-ish. Stock AMD 6900 XT, and this stuttering is killing me; it even happens during general computer work. I'm using it on macOS though, so Afterburner / Adrenalin or other SPPT fixes are out the window.
I got the RBE beta to open the MPT file and save the new ROM, and I tried flashing in Linux, but it gives "BIOS authentication failed". It could be bypassed, but someone said the card won't boot without BIOS authentication. Maybe it's just no UEFI, as everyone says, and I should do it and run CSM? Any news would be much appreciated! This stuttering is a nightmare.


----------



## swordsx48

andreagtr said:


> Hi guys, I'm sorry to disturb, but I'm back to team red after some months and I think I need a little bit of help...
> I got a used RX 6900 XT, and I'm trying to fix an issue (hard stuttering in Warzone 2; only WZ2, other games and MP work very well)


I've got this stuttering literally everywhere. macOS, Linux, Windows, gaming or not gaming. This card has given me nothing but trouble, and AMD has refused to honor their warranty. Apparently they intended to make them this way.


----------



## damric

Stutter sounds like a bad VRAM overclock


----------



## kairi_zeroblade

tolis626 said:


> See? That's more like it! Same voltage, same-ish power consumption, I max out at 2600MHz in Adrenaline (maybe I can get away with 2625MHz, MAYBE). These clocks you're showing would probably light my card on fire.


Anything else as an outlier? I think silicon quality only plays a role in voltages and not so much in how the card overclocks..


----------



## lestatdk

Low ASIC usually overclocks better, but will require higher voltages.


----------



## tolis626

chrisz5z said:


> Just sell it & get another 6900XT, a 6950XT...or take your chances with a 7900XTX. They are having issues with the silicon lottery & the quality control lottery...hasn't been all sunshine & rainbows over there.


Believe me, I've thought about it a lot of times. But my financial situation isn't the best right now, nor will it be for the foreseeable future, so that's a no go. I will probably not be upgrading this rig either, I'll just try to build a new one when I am able to and either sell my current one as a complete system or maybe I'll keep it as a secondary if I can stomach the cost of not selling it.

As for the condition of the 7900XTX, I'm aware of the problems people are facing. 60C edge/hotspot deltas, crashes at stock, inconsistent clocks, weird behaviors, it's got it all. I am quite confident that this is mostly driver problems which will be fixed, and the hardware is very powerful, but I'm not buying a promise. If they fix it, I'll probably buy it when I can. If they don't, I'm sticking with my card until RDNA 4 drops. As I said before, I refuse to fund Jensen's leather jacket addiction.


lestatdk said:


> Have you tried running it at higher voltages ? If you can keep the temp down that is. My card was horrible with the stock cooler, but with water cooling it can do 2780 at 1245mV, which stock cooler it struggled to do 2600 at 1175mV


I tried up to 1.2V with a 375W PL (~1.14-1.145V under load vs 1.115-1.12V at stock) and it barely made a difference, apart from thermals. Basically, the cooler with its current paste application is at its limits right there, and hotspot will creep towards 105C or a tad above. Won't throttle, won't shut down, won't exceed 110C, but barely. So, realistically, I can't go for more unless I do something about the cooling, which I'm not motivated to do with the results I'm getting.

I've also tried messing around with secondary settings, like undervolting the SOC to 1.075-1.1V, undervolting the RAM to 1.3V, raising SOC clock and Fclk; none of that really helped. Maybe undervolting the SOC did, but even so it was a minuscule difference, not really worth mentioning.


kairi_zeroblade said:


> Anything else as outlier? I think silicon quality only plays a role on voltages and not majorly on how the card overclocks..


The only other things that I can point out is that my RAM seems to be somewhat of a winner (which is irrelevant as I'm stuck at up to 2150MHz FT, but I can do that at 1.3V if I want to, which is nice) and that some blue LEDs on the card have started fading and my purple is ruined. So yeah, nothing relevant. It's not that it doesn't work at stock or even a slight overclock, it's just that it's an exceedingly bad overclocker.


lestatdk said:


> Low ASIC usually overclocks better,but will require higher voltages


Well, my ASIC quality is 82.8% according to OCCT. I have no idea how to interpret that for Navi 21, but it's within the range of values I've seen thrown around by other people.

Slightly off topic, but I used to have a 390X with terrible ASIC quality back in the day, and that thing overclocked like a champ (I was pushing 350-375W through it no problem too). Too bad that card had terrible RAM and I could barely touch it before it would cause issues, otherwise it would've made a fine specimen for benchmarks. Same story happened with my 5700XT, I could push the core like, a lot, but the RAM didn't want to cooperate. It was fate, it seems, for me to get a card with good clocking RAM and a terrible GPU.


----------



## kairi_zeroblade

tolis626 said:


> The only other things that I can point out is that my RAM seems to be somewhat of a winner (which is irrelevant as I'm stuck at up to 2150MHz FT, but I can do that at 1.3V if I want to, which is nice) and that some blue LEDs on the card have started fading and my purple is ruined. So yeah, nothing relevant. It's not that it doesn't work at stock or even a slight overclock, it's just that it's an exceedingly bad overclocker.


That really sucks.. Well, it's too early (personally) to jump to the 7900 series, as AMD is just starting to call in the lab rats (beta testing with paying customers). And while there won't be price cuts from the Green Team while they are still (somehow) the performance kings, I will enjoy the 6900XT more; it's not as if the games I play even lag with its horsepower..


----------



## tolis626

kairi_zeroblade said:


> That really sucks..well its too early (personally) to jump to the 7900 series, as AMD is just starting to call the lab rats (beta testing with paying customers), and while there won't be price cuts on the Green Team while they are still (somehow) the performance kings, I will enjoy the 6900XT more, as if the games I play even lag with its horsepower..


Yeah, I feel you. I really want to get a 7900XTX, but I'm not beta testing anything. If they fix it, cool. If they don't, I'm sticking with the 6900XT. Despite all the problems I have with overclocking, the thing is actually pretty great. Hell, I'm playing at 1440p, it's actually overkill if we're being realistic. For people that aren't dumb like me who can realize that games can actually run without all settings maxed out, there's no point to even consider upgrading, really.

Also, NVidia should at some point get slapped hard for these prices.


----------



## lestatdk

tolis626 said:


> Well, my ASIC quality is 82.8% according to OCCT. I have no idea how to interpret that for Navi 21, but it's within the range of values I've seen thrown around by other people.


That's a decent ASIC; I think mine was around 68% or so. But it did take water cooling to make it OC properly. Even though the stock cooler was enormous, it was horrible at cooling. It's a known problem for my model.
Stick to your current settings and accept it as is. It's still a good card despite not OC'ing well.


----------



## kairi_zeroblade

tolis626 said:


> Hell, I'm playing at 1440p, it's actually overkill if we're being realistic.


True..


tolis626 said:


> For people that aren't dumb like me who can realize that games can actually run without all settings maxed out, there's no point to even consider upgrading, really.


This is really subjective.. but I get your point.. in most games nowadays, between high and ultra it's just a few more shimmering effects and the awkward contrast of colors here and there..


----------



## tolis626

lestatdk said:


> That's a decent ASIC, I think mine was around 68% or so. But it did take watercooling to make it OC properly. Even though the stock cooler was enormous it was horrible at cooling. It's a known problem for my model.
> Stick to your current settings and accept it as is. it's still a good card despite not OC'ing well


Yeah, that's what I should do. Can't accept the fact that I lost though, this card has beaten me. It's a matter of honor at this point, lol.

Cheers man!


kairi_zeroblade said:


> True..
> 
> This is really subjective..but I get your point..most games nowadays, between high and ultra its just a few more shimmering effects and the awkward contrast of colors here and there..


Yup, and don't forget puddles of water scattered everywhere, reflecting better and clearer than a well polished mirror!

On a more serious note, usually most games have at least a few settings where you can't tell a difference between high and ultra (or medium and high), but ultra tanks performance for no reason. Case in point used to be Witcher 3. When I first played it, setting foliage visibility distance from ultra to high meant the difference between playing at 50-55 fps to 60+. Visually there was no difference. Not even a few shimmering effects off in the distance, nothing.


----------



## greasemonky89

So I finally made the jump back to AMD after 13 years (5870, lol). Picked up an XFX Merc 319 6900 XT on sale for $664 new. Too bad it wasn't the Black Ultra or whatever. What's the best driver currently? Excited to get this card going coming from a 3070, so I'm expecting a good jump at 1440p. Looking to max out my 144Hz monitor in triple-A and esports titles at the minimum. Sucks being 2 years late, but on AMD that's not really a bad thing; driver maturity actually gets better as time goes on.


----------



## Fight Game

Not going to be my 24/7 settings, but decided to throw some juice at this thing today to see what it could do.


----------



## swordsx48

damric said:


> Stutter sounds like a bad VRAM overclock


Completely stock, no overclocking done to it.
The BIOS is unchanged, and there's no Afterburner or the like on macOS or Linux. That also ruled out drivers for me, since Apple makes their own drivers.


----------



## jonRock1992

lestatdk said:


> Low ASIC usually overclocks better,but will require higher voltages


I agree with this in a way. Mine is just an average ASIC for an XTX-H. I think it's an 83, or something close to that. Mine has a very droopy voltage, so if I set 1.3V with MPT, it's only actually using a little over 1.2V under load. So I've been using 1.3V in MPT and setting 2875MHz max core clock. It's been very stable in everything I've tested it with. Of course I'm using a waterblock though. On the stock air cooler, I could only do around 2700MHz max core clock, and I would hit temp limits.



tolis626 said:


> Yeah, that's what I should do. Can't accept the fact that I lost though, this card has beaten me. It's a matter of honor at this point, lol.


You are going to need water-cooling to get the most out of your GPU. You and I have very similar GPU's in terms of ASIC, memory, and core clocks with the stock air cooler. I was able to get over 26K in Timespy with my GPU after putting a waterblock on it.


----------



## damric

swordsx48 said:


> Completely stock, no clocking done to it.
> Bios is unchanged, and there's no afterburner or the like in macOS or Linux. That also ruled out drivers for me since Apple makes their own drivers


Could be a bad apple


----------



## andreagtr

I think I'm going to try a repaste... I'm pretty sure I lose the remaining 8 months of warranty, but I hope to fix it...


Sent from my SM-F916B using Tapatalk


----------



## andreagtr

This is what I've found inside...
Replacing it with Arctic MX-4 didn't solve the issue...
Stock, 112° on the hotspot with max fans...

Sent from my SM-F916B using Tapatalk


----------



## andreagtr

Maybe I've found a solution...
A very aggressive fan curve seems to fix it, 100% at 50°.
With these settings I can push the core to 2300, mem to 2150, PL at 0, and it seems to sit at 72° hotspot...

Sent from my SM-F916B using Tapatalk


----------



## andreagtr

Hell yeah... finally (it seems). Those horrible pads were too long; they exceeded the space on the copper and went between the GPU heatsink and the VRM heatsink (which means too much gap). Replacing them with new ones of the correct size gives me these temps: after 3 hours of Warzone I had 45° GPU and 64° hotspot, with a 2GHz clock and -10 PL. I've never seen temps like that with any settings I tried before...
I hope I've fixed it, I love this card...

Many, many thanks to you guys. I hope to stay here from now on just to read some interesting things!

Regards

Sent from my SM-F916B using Tapatalk


----------



## Veii

andreagtr said:


> I hope I've fixed it, I love this card...


That pad is a compressible carbon/graphite pad, which comes in thicker sizes.
The only paste that comes close to it is Alphacool's APEX ~ otherwise you will not get better results after opening it.

If you plan to use paste, take thinner pads ~ to offset the height loss.
Alternatively, grab Honeywell PTM7950 to replace that pad & slightly sand down the standoffs (too hard, but also to offset the height loss).

Paste will be thinner than those pads and you will have a gap.
If you take a closer look at the MOSFET pads, they have a deep indentation ~ which is expected.
If you've got a digital caliper ~ measure from the height of the cooler bottom to the height of the long compressed thermal pad (not the maximum height).
Use that measurement -0.2mm (if going with paste, if you can't replace with a PTM7950 pad)
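The caliper method above boils down to one subtraction. A sketch with made-up measurements, just to make the "-0.2mm" step concrete:

```python
# Caliper-based pad sizing, as described above (example numbers are made up).
cooler_to_pad_mm = 1.6  # measured: cooler bottom to the *compressed* pad height
offset_mm = 0.2         # subtract 0.2 mm when replacing the pad with paste/PTM

target_thickness_mm = round(cooler_to_pad_mm - offset_mm, 2)
print(f"aim for a {target_thickness_mm} mm gap filler")
```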




 he did a great job


----------



## tolis626

jonRock1992 said:


> You are going to need water-cooling to get the most out of your GPU. You and I have very similar GPU's in terms of ASIC, memory, and core clocks with the stock air cooler. I was able to get over 26K in Timespy with my GPU after putting a waterblock on it.


I know. It's not that I'm against water cooling or anything (my next rig will defo be watercooled); I know its advantages are impossible to ignore, and I know that it WILL improve overclocking with my card (it still won't do miracles, but it'll allow me to push it). Thing is, I never got around to it since I bought the card, and the next one is already out. Never mind that I don't have the financial luxury to splurge on computers at the moment; by the time I factor in the cost of a custom loop, just a decent one, nothing fancy, I'm basically more than halfway to the cost of a 7900XTX, and this dud ain't matching the 7900XTX, ever. So for now I'll pass. I'll plan my next rig a bit better, go water from the start, and be done with it (the Evolv X that I have, while beautiful, isn't the best case for performance builds or watercooling after all). I've never built a custom loop, should be fun. Although, my already high anxiety will probably kick into overdrive and I may or may not have a heart attack until I'm sure it's leak-free.


----------



## cfranko

Yesterday I switched from a 6900 XT to a 3090; games look much more blurry with the same in-game graphics settings. I think this may be because the 6900 XT had Radeon Image Sharpening while the 3090 doesn't. Did anyone have a similar experience when switching from AMD to Nvidia?


----------



## nordskov

cfranko said:


> Yesterday I switched from a 6900 XT to a 3090, games look much more blurry with the same in game graphics settings, I think this may be because the 6900 XT had Radeon Image sharpening while the 3090 doesn’t. Did anyone have a similar experience here when switching to nvidia from amd?


Can confirm this. When I went from a 2080 to a 6900XT, games looked much sharper and crisper thanks to the image sharpening; even CS:GO I can't play without it anymore, and CoD MW 1 and 2 have a way clearer picture. It's a must-have, but I don't really know if Nvidia has something similar nowadays?


----------



## cfranko

nordskov said:


> Can confirm this. When I went from a 2080 to a 6900XT, games looked much sharper and crisper thanks to the image sharpening; even CS:GO I can't play without it anymore, and CoD MW 1 and 2 have a way clearer picture. It's a must-have, but I don't really know if Nvidia has something similar nowadays?


Nvidia does have sharpening; you can enable it with the ALT+F3 combo while in game, but when you enable sharpening the fps decreases by 20-30, so I leave it off. Without it everything looks blurry, though; idk what to do.


----------



## freddy85

cfranko said:


> Nvidia does have sharpening; you can enable it with the ALT+F3 combo while in game, but when you enable sharpening the fps decreases by 20-30, so I leave it off. Without it everything looks blurry, though; idk what to do.


It looks better even without image sharpening. I get the feeling that I need glasses when I use Nvidia cards. That's the main reason why I'm using AMD.


----------



## ilmazzo

Asking for a friend

Is anyone aware of MPT not being compatible with a Gigabyte 6900 XT Waterforce? He says that the card still works with default values after an MPT change, no matter what he does…


----------



## 99belle99

ilmazzo said:


> Asking for a friend
> 
> Is anyone aware of MPT not being compatible with a Gigabyte 6900 XT Waterforce? He says that the card still works with default values after an MPT change, no matter what he does…


That is very strange. Are you sure he is doing it correctly?


----------



## ilmazzo

OK, you've indirectly confirmed that MPT is compatible with any Navi 21 model and that the guy has some issue elsewhere… thanks.


----------



## ilmazzo

Ah, in a month or so I'll join the club. Waiting for a forum buddy to sell me his MBA 6900XT with an EK Radeon-branded full cover block. He's asking €750; hope I can get him down to 700 and pull the trigger. I'm on an LC 5700XT Anniversary Edition right now, so I'm expecting quite a jump…


----------



## PJVol

ilmazzo said:


> in a month or so I’ll join the club


The jump was tangible for me. Tuned and water-cooled, it's ca. 2x the 5700XT, with RT as a bonus (which is honestly pretty useless in my view, due to its subpar implementation in games; except maybe Metro Exodus, where RT does improve visuals, I see zero to negative effect while it tanks performance in any case).
(An EK FC block cost me ~$120 last time.)


----------



## tolis626

Ok, first off, happy new year everyone. May your framerates be high and your temperatures low in 2023!

Now, I had some time before and after celebrations and decided to torment myself a tad more, just to provide some screenies of my failures. I turned on TempDepVMin, set it to 1.2V, and fired up Witcher 3 (next-gen update).

So, at 1.2V (~1.14V after droop) and at a 380W PL (350W set in MPT, increased via Wattman; ignore the low TDC limit in the screenshots, I was testing something but it ended up not making even the slightest difference), I was able to play for quite a while at 2675MHz (~2625MHz in game) with no crashes. Thing is, as you'll see, my card's cooler was barely hanging on for dear life. I increased that to 2700MHz (~2650MHz in game) and it crashed after about 5-10 minutes in the game.

And since that's about as far as I can go voltage- and power-wise before I run into a temperature wall, this is it, this is what this card can do: 2675MHz at 1.2V, and that is only tested for a short while with TW3, never mind TimeSpy and such, which it will probably fail. Still, realistically, for the FPS gained it's not really worth it; it was a mind-boggling value between 0 and 1 FPS with RT on.

Now, screenies.















On a more positive note, damn this game is still beautiful and such a masterpiece. I know it just received an upgrade, but it's not a night and day difference between the old version (especially with mods) and the new one. Still, I appreciate the upgrade, gave me a reason to play it again









Bonus shame for me, this is my highest score in Superposition 4K.


----------



## ju-rek

Without MPT - vcore 1130mV


----------



## andreagtr

Happy New Year to all!! Well...

It seems my card overheats with every setting once it goes above 200-220W, no matter the PL, frequency or mV... Two questions before I throw it out the window:
1. I'm thinking of putting it under water; I've read of other guys with this problem and it seems the only way to fix it. My question is: is there a chance that water cooling can't fix the thermal issue?
2. Why can't my fans reach 100% of their speed? I've tried with Afterburner and the Radeon panel; max speed is 89% and 2900rpm (I don't think this is my issue, but I'd give it a try if I could...)

Many thanks guys...

PS: could it be a PSU problem? I've always had a bad Silverstone 750W Platinum PSU, and I've upgraded the CPU to a new 5950X that takes 160W...

Sent from my SM-F916B using Tapatalk


----------



## nordskov

andreagtr said:


> Happy New Year to all!! Well...
> 
> It seems my card overheats with every setting once it goes above 200-220W, no matter the PL, frequency or mV... Two questions before I throw it out the window:
> 1. I'm thinking of putting it under water; I've read of other guys with this problem and it seems the only way to fix it. My question is: is there a chance that water cooling can't fix the thermal issue?
> 2. Why can't my fans reach 100% of their speed? I've tried with Afterburner and the Radeon panel; max speed is 89% and 2900rpm (I don't think this is my issue, but I'd give it a try if I could...)
> 
> Many thanks guys...
> 
> PS: could it be a PSU problem? I've always had a bad Silverstone 750W Platinum PSU, and I've upgraded the CPU to a new 5950X that takes 160W...
> 
> Sent from my SM-F916B using Tapatalk


Did you try liquid metal? My brother's Asus Strix 6900XT XTXH with the stock 240mm AIO was thermal throttling (it does at 90C) even at stock. He tried undervolting it etc., but it jumped straight to 90C. He tried repasting with Noctua NT-H1 paste, still the same. He changed to liquid metal from Thermal Grizzly and it went down to 50C edge and 65C junction; now he runs a 450W limit, 1.250V and 2950MHz boost in games and gets only around 63C and 84C junction. I did it to my Toxic Extreme as well; I'm running 1.287V, a 500W limit, +15% in the driver, and 2920MHz boost in games, and get around 64C and 82C junction. The liquid metal dropped a massive 30C off junction temps at stock. We both run around 25000-25600 Timespy at daily settings.. it's a beast once the temps are under control.
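The physics behind those drops is one-dimensional conduction across the interface layer: ΔT = P·t / (k·A). A sketch with assumed numbers (die area roughly that of Navi 21; the bond line thickness and conductivities are illustrative guesses, and real-world gains also come from fixing uneven contact, not conductivity alone):

```python
# Temperature drop across a thermal-interface layer: dT = P * t / (k * A).
def tim_delta_t(power_w, thickness_m, k_w_per_mk, area_m2):
    """Temperature drop across the TIM, in kelvin."""
    return power_w * thickness_m / (k_w_per_mk * area_m2)

area = 520e-6       # m^2, roughly a Navi 21 die
power = 300         # W through the die (assumed)
bond_line = 100e-6  # 0.1 mm -- a pessimistic, slightly-too-thick layer

paste = tim_delta_t(power, bond_line, 8.5, area)  # decent paste, ~8.5 W/mK
lm = tim_delta_t(power, bond_line, 73, area)      # liquid-metal-class, ~73 W/mK
print(f"paste: {paste:.1f} K, liquid metal: {lm:.1f} K")
```

With a thin, well-applied layer the conductivity difference alone is a few kelvin; the 25-35C drops reported in the thread suggest the original application or pad contact was the bigger problem.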


----------



## andreagtr

nordskov said:


> did you try liquid metal? my brother asus strix 6900xt xtxh with stock 240mm aio was thermal throttling (does at 90c) even at stock, he tryed undervolting it etc but jumped directly to 90c.. he tryed repasting with noctua nh-1 paste still the same.. changed to liquid metal from thermal grizzly.. went down to 50c and 65c junction, now he runs 450w limit 1.250v and 2950mhz boost in games and gets only around 63c and 84c junction, i did it to my toxic extreme aswell, im running 1.287v 500w limit +15% in driver, and 2920mhz boost in games, and got around 64c and 82c junction, the liquid metal dropped massive 30c junction temps at stock, we both run around 25000-25600 timespy daily settings.. its a beast once temps got in control


Damn! That's impressive! But I'm scared of liquid metal...
I can't understand if it's only a thermal problem or whether something's wrong with my system...

Sent from my SM-F916B using Tapatalk


----------



## damric

Water.


----------



## nordskov

damric said:


> Water.


Even my Gigabyte 6800XT Windforce on air was able to handle 350W +15% PL (that's a max of ~402W) after repasting with liquid metal. That card dropped around 25C, ran all day at 1225mV, and hit a Timespy score of 22000, for a 6800XT.
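The "350W +15% PL" arithmetic above is just the board power limit multiplied by the driver slider. A tiny sketch of that calculation:

```python
# Board power limit with the driver's power slider applied: limit * (1 + slider%).
def effective_pl(base_w: float, slider_pct: float) -> float:
    return base_w * (1 + slider_pct / 100)

print(round(effective_pl(350, 15)))  # the 6800 XT example above, ~402 W
print(round(effective_pl(255, 0)))   # a stock reference-card limit, unchanged
```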


----------



## lestatdk

andreagtr said:


> Damn! That's impressive! But I'm scared of liquid metal...
> I can't understand if it's only a thermal problem or whether something's wrong with my system...
> 
> Sent from my SM-F916B using Tapatalk


You don't have to go LM. I use the paste that came with my waterblock (Alphacool) and my hotspot dropped 60 degrees or so. Water is the way <3


----------



## damric

Yeah, I was rather impressed with the Alphacool paste that came with my block. It reminded me of Shin-Etsu.


----------



## nordskov

lestatdk said:


> You don't have to go LM. I use the paste that came with my waterblock ( alphacool) and my hotspot dropped 60 degrees or so. Water is the way <3


But you changed to custom water, I suppose, plus your hotspot must have been extremely high then? All I did on 3 cards (a 6800XT on air, and 2x stock-watercooled 6900XTs, the Asus Strix 240 and Sapphire 360) was repasting with liquid metal, and I saw drops of around 25-35C on all of them, just from the paste. Your low temps might be due to really good custom water; keep in mind that at 64C edge and 82C hotspot, the stock 360 AIO on the Sapphire is cooling 540W daily at 1.287V.


----------



## lestatdk

My hotspot hit the shutdown temp at 118 °C, which is why I decided to go with watercooling. The stock cooler was not very good; I have seen people re-paste and still have high temps with it.

Even though my paste isn't LM, I doubt LM would drop temps much more than the block already has. Not worth the risk or trouble in my opinion.


----------



## nordskov

lestatdk said:


> My hotspot hit the shutdown temp at 118. Which was why I decided to go with watercooling. The stock cooler was not very good, I have seen people trying to re-paste and still have high temps with it.
> 
> Even though it's not LM I doubt it'll drop temps much more than it has already.


Well, my 6800 XT was hitting 108 °C junction at stock voltage, and afterwards it was around 65-70 °C. Even overclocked to a 22000 Time Spy score with a 380 W limit, I kept it below 100 °C junction at almost 400 W on air, with the fan curve maxed at 85%, so it was a huge step down in temps. I tried Noctua NT-H1 and MX-4 first, but they were no more than a 5 °C improvement.


----------



## lestatdk

I see your point, but it won't give such a massive drop compared to what I have now. That would mean my temp would max out at around room temperature


----------



## nordskov

lestatdk said:


> I see your point, but it won't give such a massive drop compared to what I have now. That would mean my temp would max out at around room temperature


True, but I'd guess the 6000 series usually sees huge gains from liquid metal because even extreme pastes can't move heat fast enough: roughly 5-14 W/(m·K) versus around 70-90 W/(m·K) for liquid metal.
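To see why that conductivity gap matters, here is a rough one-dimensional conduction sketch in Python. The die area, bond-line thickness, and conductivity figures are illustrative assumptions, not measurements from these cards:

```python
# Temperature drop across the TIM layer from 1-D conduction: dT = q * t / (k * A).

def tim_delta_t(power_w, k_w_per_mk, area_m2=3.0e-4, thickness_m=50e-6):
    """Temperature drop (degrees C) across a TIM layer of conductivity k."""
    return power_w * thickness_m / (k_w_per_mk * area_m2)

paste = tim_delta_t(450, 8.0)   # good paste, ~8 W/(m*K): ~9.4 C across the layer
lm = tim_delta_t(450, 73.0)     # gallium-alloy LM, ~73 W/(m*K): ~1.0 C
print(f"paste: {paste:.1f} C, liquid metal: {lm:.1f} C")
```

Real hotspot drops of 25-30 °C suggest the effective bond line on these coolers is much thicker or patchier than this idealized layer; the sketch only shows the direction and rough scale of the effect.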


----------



## bernek

Where can I get a nice block in the EU that ships to Romania for a PowerColor 6800 XT Red Devil? Can someone point me to the components I need? Is a 240 mm rad enough, or do I need a 360? Thanks!


----------



## lestatdk

Alphacool Eisblock Aurora Acryl GPX-A Radeon RX 6800(XT)/6900XT Red Devil with Backplate


The Alphacool Eisblock Aurora Acryl GPX graphics card water block with backplate combines style with performance: extreme cooling performance and extensive digital RGB lighting.




www.alphacool.com





If it's only for a GPU then a 240 is fine. I have a 280 and that's more than enough for my 6900XT


----------



## andreagtr

I had one last question (I hope...) guys.
I'm on Win 11 and I can't push the fans over 89% (2950 rpm). I've tried every setting, including Afterburner (it shows 100% but the rpm stays the same); via MPT they can reach roughly 3300 rpm.
Thoughts?

Sent from my SM-F916B using Tapatalk
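One note on that 89% ceiling: it is consistent with the driver mapping the percent slider onto the maximum RPM defined in the fan table (BIOS/MPT), rather than the slider being broken. Quick arithmetic, where the 3300 rpm ceiling is taken from the post above and treated as an assumption:

```python
# Map fan-slider percent to commanded RPM against an assumed max-RPM table entry.
max_rpm = 3300  # approximate ceiling reported via MPT in the post above

for pct in (89, 100):
    rpm = round(max_rpm * pct / 100)
    print(f"{pct}% -> {rpm} rpm")  # 89% -> 2937 rpm, close to the observed 2950
```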


----------



## PJVol

nordskov said:


> True, but I'd guess the 6000 series usually sees huge gains from liquid metal because even extreme pastes can't move heat fast enough: roughly 5-14 W/(m·K) versus around 70-90 W/(m·K) for liquid metal.


Just curious, has anyone used LM with an Al waterblock? And can LM be safely removed from the base's surface (even in case of copper block, e.g. nickel plated)?


----------



## Blameless

PJVol said:


> Just curious, has anyone used LM with an Al waterblock?


Anyone who has quickly destroyed their block, if not what it was attached to. The exact formulas vary, but all [liquid metal] TIMs are predominantly gallium, which reacts quite destructively with aluminum.



PJVol said:


> And can LM be safely removed from the base's surface (even in case of copper block, e.g. nickel plated)?


It will alloy readily with bare copper, usually requiring lapping to completely remove. Even nickel plated surfaces tend to become permanently stained, though in this case a quick buffing with fine steel or bronze wool can usually remove it completely.


----------



## GreedyMuffin

Ordered some LM to test with my GB Extreme. Re-pasted with some random TIM and I'm sitting at a max of 49/81 °C peak during TS at 1.250 V. Hopefully the LM will drop me even lower.

Has anyone tested 0.5 mm mem pads instead of 1 mm on the GB WF Extreme? I read a post on Reddit where a guy got very good gains doing that. My normal mem temp is around 50 °C max, I believe.


----------



## thomasck

I tried to resist because, frankly, I don't need it, but I upgraded to the 7900 XTX. See you all around there at some point! Thanks for all the tips guys 👊


----------



## Kodo28

thomasck said:


> I've tried to resist because in fact, I don't need it but I upgraded to the 7900 xtx. See you all around there at some point! Thanks for all the tips guys 👊


Which one u got?


----------



## thomasck

@Kodo28 Got the Asus one (figured out too late that it's an MBA model). The Nitro would have been a better choice but it's out of stock. TBH, I only got the 7900 XTX because I could finance it interest-free; otherwise the 6900 XT is still very, very solid in anything I play. It was more about selling the 6900 XT now and losing less money on it.


----------



## BHS1975

thomasck said:


> @Kodo28 got the asus oc one. The nitro would be a better choice but it's out of stock. TBH, I just got the 7900 xtx because I could finance it free of interest. Otherwise the 6900 xt stills very very solid in anything I play. It was more a move of selling the 6900 xt now and losing less money on it.


Still haven't seen the ASUS card here in the US yet.


----------



## thomasck

harderthanfire said:


> Sending it back because it gets hot in a totally unrealistic heat generating test? If it is fine in benchmarks and games who cares?


VRAM temp is absurd even in gaming: around 90-100 °C, and 60-70 °C just browsing. What's your VRAM temp? And that's in winter...




BHS1975 said:


> Still haven't seen the ASUS card here in the US yet.


If it's an MBA, you really don't want to see it. If it's an Asus TUF or TUF OC, fine, because those use a different cooler, as beefy as the 4000 series ones.


----------



## GreedyMuffin

Incredible results with LM on my Gigabyte Waterforce Extreme. Changed all the thermal pads to K3S or something similar, and dropped the VRAM pads to 0.5 mm. Better temps than stock.

What is considered a safe voltage for 24/7? At 1.350 V I'm not pushing core or hotspot temps, but I've backed off to 1.325 V.

Has anyone with a Waterforce flashed the LC BIOS? I have an MSI Z690 Edge, not an ASUS board at least.


----------



## nordskov

GreedyMuffin said:


> Incredible results with LM on my Gigabyte Waterforce Extreme. Changed all the thermal pads to K3S or something. Decreased VRAM thermal pads to 0,5mm instead. Better temps than stock.
> 
> What is considered safe voltage for 24/7? At 1,350V I'm not pushing core nor hotspot temp. Decreased to 1,325V.
> 
> Anyone with a waterforce that has flashed on the LC bios? I have a MSI Z690 Edge, and not an ASUS mobo at least.


What's your daily watt limit if that's your daily voltage? I'm at 1.287 V and a 575 W limit on my Toxic Extreme, just repasted with liquid metal; haven't touched the thermal pads yet.


----------



## fyzzz

Long time since I've been on this site. Maybe going to repaste my Sapphire 6900 XT EE. Temps are not bad, but could be better. Currently running the AMD LC BIOS, a 370 W limit in MPT +15%, and 1.175 V with 2800 MHz set (around 2750 MHz under load). The max I've seen during Time Spy is 60 °C edge and around 84 °C hotspot with a pretty aggressive fan curve. The score doesn't really increase much going beyond these settings; it just gets hot and unstable. I will continue tweaking; it's a pretty fun card to overclock.
Best score so far: I scored 18 334 in Time Spy.


----------



## Acegr

If only my card wouldn't crash with anything above 2780 MHz max... This is with 1.350 V on the core, and it reaches 92 °C hotspot.
Edit: it's a 6900 XT, not a 6950 XT.


----------



## nordskov

Acegr said:


> View attachment 2592819
> 
> 
> if only my card wouldnt crash with anything above 2780 at max... This is with 1.350v on core... Reaches 92c hotspot.
> edit: it's a 6900xt, not a 6950xt.


Is that on the LC BIOS? If not, what settings do you run? That's incredible for such low clocks.


----------



## Acegr

nordskov said:


> Thats on lc bios ?? If not what settings do u run thats incredible for such low clocks


Yeah: 2680 min, 2780 max, 2400 mem with fast timings level 2.

Here are my full settings.


----------



## chrisz5z

Acegr said:


> ye, 2680 min, 2780 max, 2400 fast 2.
> 
> Here are my full settings.
> 
> View attachment 2592834
> View attachment 2592834


How much of that 920W limit does it actually pull while under load?


----------



## nordskov

Acegr said:


> ye, 2680 min, 2780 max, 2400 fast 2.
> 
> Here are my full settings.
> 
> View attachment 2592834
> View attachment 2592834


Would love to run that LC BIOS on my Toxic Extreme as well, but I don't have Linux 😩😩


----------



## lestatdk

Just boot it off a USB stick


----------



## nordskov

Is there any guide I can use? It would be so cool to get the RAM to 18 Gbps.


----------



## nordskov

Acegr said:


> ye, 2680 min, 2780 max, 2400 fast 2.
> 
> Here are my full settings.
> 
> View attachment 2592834
> View attachment 2592834


Can you provide me a link for the BIOS, and maybe explain in detail how it's done? I really appreciate any help I can get so I don't screw up my card lol 🫣😅


----------



## GreedyMuffin

nordskov said:


> whats your watt limit daily if thats your daily voltage? im at 1.287v and 575w limit on my toxic extreme, just repastet with liquid metal, havent touched the thermal pads yet


550 + 15%. It does not go much above 500 W in TS2. Gaming is a few hundred watts lower; folding is 250-290 W.

I also want to flash my Waterforce Extreme to the LC BIOS. Auto mem OC is 2400 according to Wattman.


----------



## nordskov

GreedyMuffin said:


> 550 + 15%. Does not go much above 500W in TS2. Gaming is a few hundred lower. Folding 250-290W.
> 
> I also want to flash my Waterforce extreme to the LC bios, auto mem OC is 2400 according to wattman**


But don't you have the LC BIOS then? Mine sets auto OC to 2120.


----------



## GreedyMuffin

nordskov said:


> But dont u have the lc bios Then ??? Mine sets auto oc to 2120.


Nope, stock ROM. I guess I've got the native 18 Gbps mem chips on my card, tbh. I have pictures of my mem chips.


----------



## nordskov

GreedyMuffin said:


> Nope, stock ROM. I guess I've got the native 18 mem chips on my card tbh. I have pictures of my mem chips.


Hmm, but then you don't really benefit from flashing, I suppose. Isn't the idea to go from 16 to 18 Gbps? If you can already run 2400 MHz RAM, you're already there; just use MPT to raise the voltage and watt limit and you'll have one flying card. My 6900 XT Toxic Extreme won't run above 2150 (highest scores around 2132 MHz fast timings). That's why I want to try the LC BIOS, but I don't know where to download it.


----------



## tsamolotoff

If you don't already have an XTXH chip (1.2V limit for chip), flashing is useless, as far as I understood the discussion a few (hundred) pages ago


----------



## nordskov

tsamolotoff said:


> If you don't already have an XTXH chip (1.2V limit for chip), flashing is useless, as far as I understood the discussion a few (hundred) pages ago


I've got the Toxic Extreme XTXH already running 1.287 V and a 2882 MHz daily boost, but I believe this card could really shine with faster RAM. The RAM won't run faster than 2132 MHz, though, which is why I want to try the LC BIOS. I've downloaded the BIOS now; I just want someone who's done it to walk me through exactly how, so I don't screw up my card lol. I really hope one of you guys can assist me.


----------



## Acegr

nordskov said:


> i got toxic extreme xtxh already running 1.287v and 2882 mhz boost daily, but i belive this card could actually shine from faster ram speeds.. but ram wont run faster then 2132mhz, thats why i want to try the lc bios, i now downloaded the bios, but just want someone whos done it to tell me how exactly i should do so i dont screw up my card lol. really hope any 1 of you guys can assist me


@Veii is the master regarding flash. He did it for me.


----------



## freddy85

GreedyMuffin said:


> 550 + 15%. Does not go much above 500W in TS2. Gaming is a few hundred lower. Folding 250-290W.
> 
> I also want to flash my Waterforce extreme to the LC bios, auto mem OC is 2400 according to wattman**


Are you sure it is running at 2400 MHz?
What BIOS version do you have? Can you share it?

Edit: mine does it as well with auto tuning, but it's a fluke; the performance drops because of error correction.

The only way to get those speeds to work is with the LC BIOS.
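The "fluke" behaviour described here can be sketched with a toy model: past the stable memory clock, error-correction replay overhead climbs, so effective throughput falls even though the reported clock rises. All the numbers below are made up purely for illustration:

```python
# Toy model of GDDR6 error correction: effective rate = clock * (1 - retry fraction).

def effective_rate(clock_mhz, stable_mhz=2132, penalty_per_mhz=0.004):
    """Hypothetical effective transfer rate after EDC replay overhead."""
    overshoot = max(0, clock_mhz - stable_mhz)
    retry_fraction = min(0.9, overshoot * penalty_per_mhz)
    return clock_mhz * (1 - retry_fraction)

for clk in (2000, 2132, 2250, 2400):
    print(clk, round(effective_rate(clk)))  # peaks at the stable clock, then collapses
```

The exact curve shape is an assumption; the point is that a benchmark, not the reported clock, is the only honest measure of a memory overclock.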


----------



## sry.fat.irl

nordskov said:


> i got toxic extreme xtxh already running 1.287v and 2882 mhz boost daily, but i belive this card could actually shine from faster ram speeds.. but ram wont run faster then 2132mhz, thats why i want to try the lc bios, i now downloaded the bios, but just want someone whos done it to tell me how exactly i should do so i dont screw up my card lol. really hope any 1 of you guys can assist me


I have the air-cooled Toxic 6900 XT, using MoreClockTool and MPT. I lock it at 1200 mV and the card gets to 2697 MHz, with mclk set to 2170. I've been messing with the memory DPM states in MPT: changing them from 1350 to 1300 lets the RAM reach 2230 MHz, but the trade-off is core clock, which only gets to the Toxic Boost range before I get tons of artifacts. Then just today I learned that some air-cooled Toxics have a BIOS update that unlocks them to XTXH.









I scored 30 873 in Fire Strike Extreme

AMD Ryzen 9 5950X, AMD Radeon RX 6900 XT x 1, 16384 MB, 64-bit Windows 11

www.3dmark.com


----------



## nordskov

Oh well... I tried and I failed... my silent BIOS is now dead... thank god for dual BIOS lol. On my Toxic Extreme XTXH I tried flashing via Linux (Ubuntu); it went great until the restart, then no picture. I used the LC BIOS from here: AMD RX 6900 XT VBIOS

But it didn't work out on my 6900 XT Toxic Extreme XTXH. Why didn't it work? I used Linux and flashed it, it said successful, but there was no picture, so I had to switch to the OC BIOS to get a picture again.

EDIT: Tried reflashing again, fingers crossed, and booted up on the working BIOS. As soon as I was in Linux, I flipped the switch to the silent BIOS (still had a picture), reflashed the old stock silent BIOS, rebooted, and baam, picture again. So now both BIOSes are working. Can anyone provide a link for a working XTXH Toxic Extreme BIOS that allows the faster memory?


----------



## GreedyMuffin

nordskov said:


> oh well.. i tryed and i failed... my silent bios is now dead... thank god for dual bios lol... my toxic extreme xtxh i tryed via linux (ubuntu) to flash the bios, went great... until restart no picture..) used the LC bios from here: AMD RX 6900 XT VBIOS
> 
> but didnt work out on my 6900xt toxic xtreme xtxh.. why didnt it work??? used linux, and flashed it, said succesful but no picture, so had to use oc bios on my card to get picture again
> 
> EDIT: tryed reflash again, fingers crossed, booted up with working bios.. as soon as i entered linux succesful, i flipped the switch to the silent bios (still picture) reflashed with old stock silent bios, rebootet, and baaam picture again, so now both bioses is working again.. can any 1 provide me a link for a working XTXH toxic extreme bios that allows for the faster memory ?


What motherboard do you have?


----------



## sry.fat.irl

nordskov said:


> oh well.. i tryed and i failed... my silent bios is now dead... thank god for dual bios lol... my toxic extreme xtxh i tryed via linux (ubuntu) to flash the bios, went great... until restart no picture..) used the LC bios from here: AMD RX 6900 XT VBIOS
> 
> but didnt work out on my 6900xt toxic xtreme xtxh.. why didnt it work??? used linux, and flashed it, said succesful but no picture, so had to use oc bios on my card to get picture again
> 
> EDIT: tryed reflash again, fingers crossed, booted up with working bios.. as soon as i entered linux succesful, i flipped the switch to the silent bios (still picture) reflashed with old stock silent bios, rebootet, and baaam picture again, so now both bioses is working again.. can any 1 provide me a link for a working XTXH toxic extreme bios that allows for the faster memory ?



More than likely the flash failed because that model is a 2x 8-pin PCIe card and its display outputs are not laid out like our Toxic PCBs. The Toxic specifically is actually really easy to flash across versions; you should have grabbed one of the Toxic BIOSes below.

Sapphire Toxic cards have 2x 8-pin + 1x 6-pin PCIe plugs. Haha, if you try any of the 3x 8-pin BIOSes, the extra ground on the plug will send your card to safe mode.


SKU 11308-08-20G: SAPPHIRE TOXIC AMD Radeon™ RX 6900 XT Extreme Edition Gaming Graphics Card with 16GB GDDR6, AMD RDNA™ 2
(That's yours, the Toxic Extreme.)
Boost Clock: up to 2525 MHz
One Click TOXIC BOOST: up to 2730 MHz

(Funny how my air-cooled card is clocked higher than the AIO ones.)


SKU 11308-06-20G: SAPPHIRE TOXIC AMD Radeon™ RX 6900 XT Limited Edition Gaming Graphics Card with 16GB GDDR6, AMD RDNA™ 2
Boost Clock: up to 2365 MHz
Game Clock: up to 2135 MHz
One Click TOXIC BOOST: up to 2660 MHz


SKU 11308-04-20G: SAPPHIRE TOXIC AMD Radeon™ RX 6900 XT Limited Edition
Boost Clock: up to 2365 MHz
Game Clock: up to 2135 MHz
One Click TOXIC BOOST: up to 2660 MHz


SKU 11308-13-20G: SAPPHIRE TOXIC AMD Radeon™ RX 6900 XT Limited Edition
Boost Clock: up to 2365 MHz
Game Clock: up to 2135 MHz
One Click TOXIC BOOST: up to 2600 MHz

Then my air-cooled one:

SKU 11308-11-20G: SAPPHIRE TOXIC AMD Radeon™ RX 6900 XT Air Cooled Gaming Graphics Card with 16GB GDDR6, AMD RDNA™ 2
Boost Clock: up to 2425 MHz
Game Clock: up to 2235 MHz
One Click TOXIC BOOST: up to 2500 MHz

But my BIOS version is literally listed as a 6950 XT.


----------



## sry.fat.irl

nordskov said:


> oh well.. i tryed and i failed... my silent bios is now dead... thank god for dual bios lol... my toxic extreme xtxh i tryed via linux (ubuntu) to flash the bios, went great... until restart no picture..) used the LC bios from here: AMD RX 6900 XT VBIOS
> 
> but didnt work out on my 6900xt toxic xtreme xtxh.. why didnt it work??? used linux, and flashed it, said succesful but no picture, so had to use oc bios on my card to get picture again
> 
> EDIT: tryed reflash again, fingers crossed, booted up with working bios.. as soon as i entered linux succesful, i flipped the switch to the silent bios (still picture) reflashed with old stock silent bios, rebootet, and baaam picture again, so now both bioses is working again.. can any 1 provide me a link for a working XTXH toxic extreme bios that allows for the faster memory ?



When I flip the BIOS switch to performance mode I get a 6950 XT, and in silent I get a 6900 XT lol


----------



## nordskov

sry.fat.irl said:


> when i switch my bios switch to performance mode i get a 6950xt and in silent i get a 6900xt lol
> 
> View attachment 2593190
> View attachment 2593191


That made me really confused lol. Which BIOS exactly should I use for the Toxic Extreme? 😅


----------



## nordskov

sry.fat.irl said:


> when i switch my bios switch to performance mode i get a 6950xt and in silent i get a 6900xt lol
> 
> View attachment 2593190
> View attachment 2593191


But even when it says 6950 XT, the RAM still seems to run at the stock 2000 MHz (16 Gbps).


----------



## Ithilain

Hey guys, I recently picked up a reference 6900 XT LC (the one with the 120 mm rad) and I'm having trouble keeping the liquid temps under control during heavy gaming. The core and hotspot temps are hot but not worryingly so (~75 °C and 85 °C respectively); the liquid temps, though, end up shooting into the mid 60s, which has me a bit worried, as I've heard that liquid temps over 60 °C can potentially damage the pump. I've already set power to -10%, undervolted as much as is stable, and set the fan as an intake. Is there anything else I can do to reduce the liquid temps short of cranking the fan? Or are ~65 °C liquid temps not as bad as I've heard?


----------



## lestatdk

Liquid temps are 65 °C? That is very, very bad. My water temp is in the high 20s to low 30s under heavy load.

Are you sure there's proper circulation? Something is very wrong if the water temp is 65 °C. Be careful it doesn't break something; it's not good for any part of the loop, be it the pump or anything else.


----------



## Veii

@Ithilain
In an enclosed system (designed by Cooler Master), where the pump sits on the block itself and is cooled by the fluid, that temperature is within the range of the manufacturer's design. If it weren't, the whole setup would be insulated and designed differently. Only the original manufacturer (Cooler Master) can "potentially" tell you whether your unit operates within safe limits.

As for the general public: a higher water temperature can indicate poor cooling capability and/or be dangerous to certain types of tubing. In this specific case the card is designed normally and guarantees stable operation up to 105 °C, starting to throttle after 95 °C (the LC BIOS is different).

So unless I see documents from Cooler Master saying otherwise, I can't agree with your worry: everything behaves up to spec, and I haven't heard of LC cards having design issues. A friend had a 6900 LC reference card.
I would not suggest opening it; there is a pressure-balancing system in there.


----------



## Ithilain

Yeah, liquid temps. Apparently this isn't uncommon for this card as it was mentioned in a German review last year, though I didn't see it until a few days ago. I'm also running the card in a sff case, which isn't great for thermals either. 

As far as blockages go, there is one tightish bend going into (or maybe out of) the radiator, though idk if it's tight enough to cause a blockage. I just pulled the rad from the case and am running it through furmark, I'll update with any results


----------



## lestatdk

90 °C hotspot... wow. I can barely get mine above 60. Well, I guess the liquid is within spec then, though I must admit 60+ degrees sounds extreme.


----------



## Ithilain

@Veii thanks for the info! It's definitely done a lot to help reassure me that there's nothing catastrophically bad going to happen. I wasn't planning on opening the card at all except to maybe replace the fan with a noctua one, though I think that only involves removing the shroud which I don't think would affect anything.

Small update on my testing, after ~45 minutes of furmark with the rad outside the case, liquid temps maxed out at 47° and stayed stable there, though not being in a hotbox of a case definitely helped with that. I'm going to try changing the routing of the tubes to minimize any tight bends and retest inside the case and see what kind of temps I get.


----------



## nordskov

lestatdk said:


> Liquid temps are 65 ? That is very very bad. My water temp is in high 20s low 30s under heavy load.
> 
> Are you sure there's proper circulation ? Something is very wrong if the water temp is 65. Be careful it doesn't break something. it's not good for any part of the loop be it pump or other.


Those are pretty normal temps for a 120 mm rad. My brother's and my 240/360 mm rads on our Strix and Toxic Extreme were at 70 °C edge / 93 °C hotspot stock with the stock paste. After repasting with LM they were down to 35/45 °C during gaming. Now we both run 1.287 V and a 500 W limit daily and get around 50/64 °C. In Time Spy we get around 66/84 °C, but that's at 530-540 W peaks.


----------



## kratosatlante

Azazil1190 said:


> 22.5.2
> And i think they improve my sotr bench too.
> Freshhhh 1080p 1440p 4k
> For better results i need 12900k or 5800x3D
> 
> View attachment 2564674
> 
> View attachment 2564675
> 
> View attachment 2564673


6900 XT ASRock Formula, stock cooler. I get this with Gelid GC-Extreme (the stock paste was better).

MPT 430 W, 1.2 V stock. I tried a 6950 XT BIOS, but I have a Dark Hero motherboard and couldn't get Resizable BAR to enable, so I returned to the stock BIOS. A while ago I tried the LC BIOS but had temperature problems; now that I've repasted, I'll try again this week.
RAM 3800 CL14, 5950X with PBO.


----------



## damric

I've never had coolant temps over like 35C even in the summer on the hottest days. 65C is an engineering design flaw or defect.


----------



## 99belle99

Guys, he isn't running a custom loop; he has an AIO GPU with a 120 mm fan and radiator.


----------



## Ithilain

Yeah, it's an AIO, so I don't have many options in terms of changing stuff lol. That said, after changing the rad orientation and tube routing I maxed at 56 °C running Furmark for an hour, so I guess I did have a bit of a kink in one of the tubes before. It still concerns me a bit, but at least it's under 60 now.

Maybe it was designed to run really hot liquid temps? 47 °C under basically ideal conditions is pretty toasty. Then again, higher liquid temps increase cooling efficiency, since the delta to ambient is higher, so maybe they made everything extra heat-tolerant as a way to get away with cooling a 300 W+ card on a single 120 mm rad. Who knows.
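The delta-to-ambient intuition can be put in numbers with a steady-state radiator model, Q = UA * (T_liquid - T_ambient). The UA (overall conductance) figures below are assumed, not measured; they just show why a small radiator equilibrates at hot coolant temps:

```python
# Steady-state coolant temperature for a radiator with overall conductance UA (W/K).

def liquid_temp(q_watts, ua_w_per_k, t_ambient_c=25.0):
    """Equilibrium coolant temperature needed to reject q_watts to ambient."""
    return t_ambient_c + q_watts / ua_w_per_k

print(liquid_temp(300, 10))  # 55.0 C: small single-120mm-class conductance runs hot coolant
print(liquid_temp(300, 30))  # 35.0 C: triple the conductance (bigger rad or push/pull)
```

Under this model the coolant has to run hot for a small rad to shed the load, which fits the idea that the card was engineered to tolerate high liquid temps rather than avoid them.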


----------



## tolis626

Ithilain said:


> Yeah, it's an aio, so I don't have much options in terms of changing stuff lol. That said, after changing the rad orientation and tube routes I maxed at 56° running furmark for an hour, so I guess I did have a bit of a kink in one of the tubes before. It's still a bit concerned to me, but at least it's under 60 now.
> 
> Maybe it was designed to have really hot liquid temps? 47° under basically ideal conditions is pretty toasty, though higher liquid temps should increase cooling efficiency since the temperature delta would be higher, so maybe they made everything extra heat tolerant as a way to get away with cooling a 300W+ card with only a single 120mm rad? Who knows


Meh, still sounds like something is kinda wrong. Remember the 295x2? That crazy thing would cool two Hawaii dies, themselves notoriously hot and power hungry, with a single 120 mm rad. I have no recollection of the AIOs dying on those cards, nor do I remember reading about such high coolant temps, even from people who pushed them. And considering they easily drew over 500 W when overclocked, it's not about how power hungry the 6900 XT can get.

Anyways, high coolant temps point to poor heat removal from the system. Whether that's due to problematic coolant flow or something else, I dunno, but the quickest and easiest way to improve it is airflow. Are you running a single fan or a push/pull setup? If it's the former and you can fit a second fan, going push/pull should net you at least some improvement.

Also, while the card is running, listen for any weird noises coming from the pump. A dying pump or air in the system would produce noise.


----------



## Ithilain

tolis626 said:


> Meh, still sounds like something is kinda wrong. Remember the 295x2? That crazy thing would cool 2 Hawaii dies, themselves notoriously hot and power hungry, with a single 120mm rad. I have no recollection of AIOs dying in these cards, nor do I remember reading about such high coolant temps, even by people who pushed them. And considering that when overclocked we were easily talking over 500W, it's not about how power hungry the 6900XT can get. Anyways, high coolant temps point to poor heat removal from the system. Now, if that is due to problematic coolant flow or something else, I dunno, but the quickest and easiest way to increase it is airflow. Are you running a single fan or a push pull setup? If it's the former and you can fit a second fan, going push pull should net you at least some improvement. Also, while the card is operational, try to listen for any weird noises coming from the pump. A dying pump or air in the system would produce noise.


I think a big difference with the 295x2 was that it had a dedicated fan cooling the VRMs and, I think, the memory, while this thing dumps all its heat into the radiator.

As for my fan setup, I'm running a single fan in pull to get the coldest air possible through the rad; I've only got ~10 mm of space on the other side, though. I guess I could run a couple of super-slim 60 mm fans, but I doubt it would help much. It would probably be more beneficial to swap the included fan for a Noctua one.

I'm not noticing any pump noise either, so that's good, I guess.


----------

